
Re: ChatGPT results are "subject to review"

Subject: Re: ChatGPT results are "subject to review"
From: "Piez, Wendell A. (Fed) wendell.piez@xxxxxxxx" <xsl-list-service@xxxxxxxxxxxxxxxxxxxxxx>
Date: Fri, 7 Jul 2023 16:37:45 -0000
Norm, yes indeed, they are saying 'hallucinating', which is a terrible
euphemism, because it:

- Sounds fairly benign (b/c hallucinating people are not generally destructive
or antisocial)
- Implies there is someone 'there' to 'hallucinate', which has not been
demonstrated
- Implies corrigibility, b/c hallucinating people can often be restored; indeed,
hallucinations often go away on their own
- Implies there is some kind of 'normal' that characterizes it when it is not
'hallucinating', as opposed to the hallucinating itself being the normal and
'correct' (just not adequate) operation

In other words, what they are calling 'hallucinating' is what the system does
correctly, when it turns out that is inconvenient (or 'misaligned' in some way
detectable by and meaningful to people).

All this is wrong, but then the purpose of euphemism ("calling it nice") is
*not* to be true or accurate, quite the contrary.

When not saying "lying" for effect (which also implies subjectivity), I prefer
the term "fabulating" to describe what it is that (I am led to believe) the
bots are doing.

Bringing it painfully back on topic, there is a world of difference between an
XSLT executed by a deterministic processor built over algorithms implementing
a testable specification, and a transformation executed by an LLM in any
scenario. To my knowledge they haven't hooked the two together, but they will.
A robot equipped with a validating parser and a conformant XSLT engine could
presumably do an even better job 'faking it' than one without, and it might be
harder to detect. (Or easier, if it proves unable to find human-like solutions
to making things valid and executable, and depending on whether you look at the
code or the output.)

Which comes back to what I just said, namely the unit tests.
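To make that concrete, here is a minimal sketch of what such a unit test might look like, assuming Python with the third-party lxml library (which wraps libxslt, a conformant XSLT 1.0 processor); the stylesheet and input document are invented for illustration. The point is that a deterministic processor gives you something you can assert against, which an LLM's output does not.

```python
# Minimal sketch: unit-testing a transformation's output against a
# deterministic XSLT processor (lxml/libxslt). Stylesheet and input
# are hypothetical examples, not from the thread.
from lxml import etree

XSLT_SRC = b"""<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/doc">
    <out><xsl:value-of select="title"/></out>
  </xsl:template>
</xsl:stylesheet>"""

transform = etree.XSLT(etree.XML(XSLT_SRC))
result = transform(etree.XML(b"<doc><title>Hello</title></doc>"))

# Same input, same stylesheet, same output -- every time.
root = result.getroot()
assert root.tag == "out"
assert root.text == "Hello"
```

The same assertions run against an LLM-produced "transformation" would have to tolerate variation, which is exactly the difference the paragraph above is pointing at.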

Cheers, Wendell


-----Original Message-----
From: Norm Tovey-Walsh ndw@xxxxxxxxxx
<xsl-list-service@xxxxxxxxxxxxxxxxxxxxxx>
Sent: Friday, July 7, 2023 11:42 AM
To: xsl-list@xxxxxxxxxxxxxxxxxxxxxx
Subject: Re:  ChatGPT results are "subject to review"

> (Or it would be lying were it capable of lying. What it is doing is
> what it was programmed to do, namely chat with you about your topic of
> choice.)

The term of art for LLMs just making [expletive] up seems to be
"hallucinating".

                                        Be seeing you,
                                          norm

--
Norm Tovey-Walsh <ndw@xxxxxxxxxx>
https://norm.tovey-walsh.com/

> A man may by custom fortify himself against pain, shame, and suchlike
> accidents; but as to death, we can experience it but once, and are all
> apprentices when we come to it.--Montaigne
