You needed a better step-by-step prompt! ChatGPT generates answers one word at a time, sequentially; since it answered "No" right off the bat, all its later "step-by-step" reasoning was post hoc and played no role in generating that "No." This is of course a mistake humans also make, but if you want ChatGPT to be any smarter than humans about it, you need a prompt that tells it not to answer until after it has reasoned step by step. As soon as the AI answers "No," it is done reasoning step-by-step; if it answers that right away, the prompt failed.

The original step-by-step prompt is in GPT-3 format, not ChatGPT format, and has the answerer *itself* begin its reply with "Let's think step by step." before the rest of the answer gets filled in.
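
A minimal sketch of the difference, assuming the OpenAI Python client; the model names and prompt wording are illustrative stand-ins, not the exact originals:

```python
# Sketch only: compares the GPT-3 completion format, where the reasoning
# cue is the start of the model's own answer, with the chat format, where
# you must instruct the model not to commit to a verdict right away.
from openai import OpenAI

client = OpenAI()

# GPT-3 / completion style: the answer *begins with* "Let's think step by
# step.", so the chain of thought is generated before any "Yes"/"No".
completion = client.completions.create(
    model="gpt-3.5-turbo-instruct",  # illustrative model name
    prompt=(
        "Q: Is the number of letters in the word 'prime' prime?\n"
        "A: Let's think step by step."
    ),
)

# ChatGPT style: the instruction has to explicitly forbid answering first.
chat = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative model name
    messages=[
        {"role": "system",
         "content": "Reason step by step first; state your final answer "
                    "only after the reasoning."},
        {"role": "user",
         "content": "Is the number of letters in the word 'prime' prime?"},
    ],
)

print(completion.choices[0].text)
print(chat.choices[0].message.content)
```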

Interestingly, it knows what mistake it made

Me: is the number of letters in the word 'prime' prime?

ChatGPT: No, the number of letters in the word "prime" is not a prime number. Prime numbers are positive integers that have only two positive integer divisors: 1 and themselves. The number 5, for example, is a prime number because it can only be divided evenly by 1 and 5. The number of letters in the word "prime," which is 5, can also be evenly divided by 2 and 5, so it is not a prime number.

Me: spot the mistake in your answer

ChatGPT: I apologize if my previous response was incorrect. I'll review my response and try to identify any mistakes.

In my previous response, I stated that the number of letters in the word "prime" is not a prime number because it can be divided evenly by 2 and 5. However, this is incorrect. The number of letters in the word "prime," which is 5, is actually a prime number because it can only be divided evenly by 1 and 5. I apologize for the mistake in my previous response.
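
For what it's worth, the arithmetic itself takes a couple of lines of Python to verify (this check is mine, not part of the exchange above):

```python
n = len("prime")  # 5

# 5 is prime: no integer from 2 through 4 divides it evenly -- in
# particular it is NOT evenly divisible by 2, contra the first answer.
is_prime = n > 1 and all(n % d != 0 for d in range(2, n))
print(n, is_prime)  # 5 True
```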

Making the AI contradict itself is very dangerous... Can the AI write a program? What if it gets into the network posing as an excellent translator, and starts turning people against each other by rearranging their words?

See the papers of Thorisson et al. for a drilling-down into "understanding," which is not itself well understood. Under his theory there are levels of understanding, the first being a crude ability to simply respond appropriately. If ChatGPT responds appropriately, then at that most basic level it "understood," regardless of whether, or to what extent, it can blend concepts and so on. My opinion is that people should settle on a clear definition of understanding before making arguments. Understanding, IMO, is not an absolute but has levels.

On the contrary, we could find ChatGPT very human in its reactions: answering first, justifying afterward, and trying to look logical, but not always succeeding.

First of all, I also wrote about GPT recently - https://medium.com/@victor.ronin/chatgpt-we-are-all-royaly-f-ed-56e72e7d9658 - and my bottom line was pretty much the same as your last paragraph: it doesn't matter that it's imperfect. It's good enough.

That being said, reading this explanation of five being both prime and not prime reminded me a lot of middle school. Kids aged 12-15 are smart enough to argue and have plenty of thoughts in their heads, yet on the other hand can spit out nonsense like that in a single sentence.

I have seen multiple people trying to prove that ChatGPT doesn't understand things, and each time I think about how many people in the world are as weak at reasoning as ChatGPT. I don't think it's a fair goalpost to expect AI to equal somebody with an IQ of 120+ and excellent reasoning skills. A much fairer bar is an IQ of 80, and I have a feeling ChatGPT can clear that bar (even without understanding what it's talking about).

If you play with multiple prompts in a session it gets smarter. I had it write a story where it provided the twist and then dissected its own plot structure, so I do think it can identify the content of the twist if the prompts are provided in the right order (a sketch of the pattern is below). Is this more work for us as operators? Definitely, but just because an AI doesn't respond well in a single-prompt conversation doesn't mean we can't lead it to very useful places with a longer dialogue. This feels like the early days of Twitter, where some people were writing "here is my pretty coffee art" while others were inventing emergent features like the hashtag.
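
A rough sketch of that longer-dialogue pattern, assuming the OpenAI Python client (model name illustrative); the key point is that the full message history is resent on each turn, so later prompts can build on the model's own earlier output:

```python
from openai import OpenAI

client = OpenAI()
history = []

def ask(prompt: str) -> str:
    """Append the user turn, resend the whole history, record the reply."""
    history.append({"role": "user", "content": prompt})
    reply = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative model name
        messages=history,
    )
    answer = reply.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer

story = ask("Write a short story with a twist ending.")
analysis = ask("Now dissect the plot structure of the story you just "
               "wrote, including how the twist is set up and paid off.")
print(analysis)
```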
