ChatGPT Does Not Hallucinate
Fake legal citations and the dangers of anthropomorphizing
ChatGPT spreads one lie more than any other: that it understands what it’s saying. We assume anyone using language has a grasp of some pretty fundamental concepts, like the difference between real and imaginary and the difference between truth and falsehood. Even parrots know what’s real (even if they can’t express it in language). ChatGPT doesn’t.
ChatGPT’s scandal du jour is the tale of a New York lawyer who submitted a ChatGPT-authored brief that cited (and later summarized) six nonexistent cases. After a show of white-hot fury from the judge, attorney Steven A. Schwartz filed an affidavit of profuse apology. Schwartz did say that ChatGPT “assured the reliability of its content.”
No doubt OpenAI and Microsoft will be massaging their chatbots to place even more caveats before issuing any factual information, but this problem is not going away.
Here, Facebook’s Yann LeCun has been on-point in saying that ChatGPT and its fellow LLMs simply don’t have the guardrails to avoid “hallucinating” incorrect, false, or fabricated data. Guardrails like, as LeCun says, having any idea of how the world works. Or, even more simply, having concepts.
Here’s where calling ChatGPT’s nonsense “hallucinations” is misleading. The idea of ChatGPT hallucinating implicitly carries with it the assumption that there is some difference between ChatGPT producing accurate information and ChatGPT producing made-up nonsense. There isn’t, at least not internally. LLMs walk down their garden paths hoping to stay within the “accurate” zone, but the only checks keeping them there are probabilistic.
Calling those fake cases “hallucinations” implies that there’s some kind of disjunction between ChatGPT telling the truth and it making stuff up. It implies that ChatGPT went down some different algorithmic route in fabricating information, that “something went wrong.” The problem is actually far deeper: the mechanisms for “true” and “hallucinatory” content are the same, and there’s no mechanism to distinguish them.
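The point can be made concrete with a toy sketch. The probabilities and vocabulary below are invented for illustration (no real model works from a hand-written table like this), but the shape of the process is the same: the sampler sees only a distribution over next words, and there is no separate code path for true versus fabricated continuations.

```python
import random

# Hypothetical next-word distribution for one context. The numbers are
# made up; the key point is that "Paris" and "Lyon" differ only in
# probability, not in any truth flag the sampler could consult.
NEXT_WORD = {
    ("capital", "of", "France", "is"): [
        ("Paris", 0.7),   # accurate
        ("Lyon", 0.2),    # fabricated, but sampled by the same mechanism
        ("Nice", 0.1),    # fabricated, likewise
    ],
}

def sample_next(context, rng):
    """Pick the next word by probability alone.

    Accurate and made-up continuations go through the identical code:
    nothing here distinguishes them.
    """
    words, weights = zip(*NEXT_WORD[context])
    return rng.choices(words, weights=weights, k=1)[0]

rng = random.Random(0)
completions = [
    sample_next(("capital", "of", "France", "is"), rng) for _ in range(10)
]
print(completions)
```

Run it a few times with different seeds and the wrong answers simply show up at their assigned rate; no error is raised and nothing “goes wrong” in the code when they do.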
The fundamental lie occurs every time ChatGPT vouches for the accuracy of anything it says. It does not have the capacity to “know” whether anything is true or not, and so whenever it vouches for itself, it really is saying something false—not that it would know that either.
So when we say AI hallucinates, we’re already giving it more credit than it merits and projecting ourselves into it. ChatGPT’s achievement is in pulling off the simulation of knowledge so incredibly well—not in actually possessing it.
This won’t stop AI from authoring undergraduate papers. (I spoke to one Ivy undergrad last week who shrugged and admitted she’d already used it to write a paper, without consequence.) LeCun is a bit too pessimistic about the future of LLMs. They will go beyond writing assistance because there are many scenarios in which:
1. The “facts” are so consistently established in the training data (“The sky is blue,” “Shakespeare wrote Hamlet”) that ChatGPT will get them right.
2. Errors in the “facts” either don’t matter or will not be caught.
We already “robo-grade” millions of standardized tests. If we can grade them computationally, ChatGPT can certainly write them computationally. In that article, an Educational Testing Service employee actually says, “The facts are secondary.” That is a godsend to ChatGPT. And that is far from the only place where truth and accuracy don’t matter in writing.