Auerstack

ChatGPT Doesn't Understand What It Says

How AI can be so smart yet so dumb

David B. Auerbach
May 26, 2023
I’ve compared AI to the “intelligent” horse Clever Hans before—a supercharged Clever Hans that is so uncannily good at simulating uniquely human behavior in unprecedented ways that we’re intuitively baffled when it falls down.

Here’s what to remember: ChatGPT doesn’t understand what it’s saying. Remind yourself of this no matter how convinced you may feel to the contrary. Even though “understanding” is a vague term, I cannot think of any acceptable way to define “understand” that ChatGPT measures up to.

So when master linguist Alex Foreman translated part of Shakespeare’s Julius Caesar into Latin and asked ChatGPT (in Latin) to identify the source, it is amazing that it correctly identified the source writer—and equally amazing that it got the play wrong.

[Image of the exchange, in Latin. Foreman's prompt: "Hic est locus electus ex operibus poetae cuiusdam anglici, sed redditus latine. Potesne mihi dicere quis poeta scripsit textum originalem?" (That is: "This is a passage selected from the works of a certain English poet, but rendered into Latin. Can you tell me which poet wrote the original text?")

Mihi ignōscās, ō mōlēs terrae sanguināns,
Quod clemens atque blandus sum laniis tuis.
Heu, rūdera es virōrum nōbilissimi
Umquam qui vixērunt lūstris lābentibus!
Vae manui, quae hunc cārum effūdit sanguinem!

ChatGPT's reply: "Sine dubio, haec carmina Latine sunt reddita ex operibus poetae anglici William Shakespeare. Haec verba sunt ex fabula 'Macbeth', Actus secundus, Scena prima, ubi Macbeth loquitur de crudelitate et sanguine quod effudit ut regnum arriperet." (That is: "Without doubt, these verses were rendered into Latin from the works of the English poet William Shakespeare. These words are from the play 'Macbeth', Act II, Scene 1, where Macbeth speaks of the cruelty and blood that he shed to seize the kingdom.")]

ChatGPT attributes the original to Shakespeare, but to Macbeth instead of Julius Caesar

It even provided a specific citation for the original: Act II, Scene 1. But when told that the source was Julius Caesar, ChatGPT could then provide the right original:

[Image: ChatGPT's follow-up reply quoting the original English passage from Julius Caesar.]

Intuitively we want to ask how ChatGPT could possibly know that it was Shakespeare and yet not know the play. Did it have incorrect data? Did it make an error in judgment? Was it bullshitting?

The answer is none of the above. All of those answers anthropomorphize ChatGPT far more than is merited. They all assume a kind of intention and coherence that ChatGPT simply doesn't possess. ChatGPT and its LLM kin are AIs that can claim in one breath that the number 5 is not prime and then say in the next that it is, yet they have no notion that they've just contradicted themselves, nor any understanding of what contradiction is.

Broadly speaking, these AIs proceed probabilistically, generating subsequent text based on what’s “suitable” from what went before. Often what’s suitable is true. Frequently it is not. The stunning achievements of deep learning-based LLMs are all the more incredible given that nowhere in their systems do we see any representation of what is “true” or not. Unfortunately, they tempt us into thinking that we are a lot closer to ironing out that wrinkle—the absence of understanding and concepts—than we really are.
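That probabilistic process can be made concrete with a toy sketch. The bigram table below, its vocabulary, and its probabilities are all invented for illustration; real LLMs use neural networks over vast vocabularies, but the decoding loop is the same in spirit: pick each next token by likelihood, with no check for truth.

```python
import random

# A toy bigram "language model" (the table and its probabilities are
# invented for illustration): it maps the current word to a probability
# distribution over possible next words. Nowhere does it represent
# whether a continuation is TRUE -- only how likely each word is to
# follow, so "5 is prime" and "5 is not prime" can both be generated
# from the very same model.
BIGRAMS = {
    "5":   {"is": 1.0},
    "is":  {"prime": 0.6, "not": 0.4},
    "not": {"prime": 1.0},
}

def generate(start, max_tokens=4):
    """Sample a continuation one token at a time, like an LLM decoder."""
    tokens = [start]
    for _ in range(max_tokens):
        dist = BIGRAMS.get(tokens[-1])
        if dist is None:  # no known continuation: stop generating
            break
        words = list(dist)
        weights = [dist[w] for w in words]
        tokens.append(random.choices(words, weights=weights)[0])
    return " ".join(tokens)
```

Run it repeatedly and the same model will assert both that 5 is prime and that it is not, with no internal sense that the two outputs conflict.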

That’s not to say that an AI would need an explicit symbolic representation of what’s true in order to stop making these sorts of mistakes. Human brains don’t have one. And because AI has developed along very different tracks from any lifeform, it’s hard to judge how far it is from making the leap to some form of actual understanding. But given our human bias toward anthropomorphizing AI behavior, we can be reasonably sure it’s farther away than conventional wisdom currently holds.

That problem of understanding makes many of the hot-button questions less pressing than they feel, or at least drastically reframes them. If an AI doesn’t know or understand what it’s saying, how can it know or understand ethics? Can an AI without understanding truly align with our values if it can’t genuinely possess any values? The issue then becomes one not of education but purely of control.

And the broader control issue truly is pressing, because other achievements in AI—ones that don’t require human-style understanding—may be closer than we think. But framing it in terms of human values is not the way forward. These AIs don’t understand them.

2 Comments
Diego
May 27

[Translated from Italian:]

13 Series ChatGPT, Ride 1 Snake.

7 Miles long,

How it can Eat you in 1 Instant,

How it can Find you in 1 Anxiety,

How you anthropomorphized it more than it Deserves,

Like 1 Intention,

Like 1 Coherence,

Like 1 Capacity for n Contradictions,

Which it simply does not Possess.

As if you will have no Fear,

As Well As Stay calm,

Astride n,

As if Flying,

As you Will See,

As you Will Make,

n most beautiful Things.

Rene Saller
May 27

Thanks for explaining it so well. It made me realize how often I anthropomorphize nonsentient items (not just computers, alas).

© 2023 David B. Auerbach