Bots can imitate language but don’t necessarily understand it

Yes, bots can imitate language – but no, they don’t understand what they’re saying

Every technological revolution seems to bring its own hype, and the boom of generative artificial intelligence (GenAI) is no exception. There’s no shortage of it, for instance, around large language models (LLMs), the technology behind chatbots and voice assistants. This technology is impressive, and it’s true that these bots can imitate language, even if not perfectly. But some go further and claim that these bots also understand language.

Take this claim by Sam Altman, CEO of OpenAI (the company that brought us ChatGPT), who appears to believe that these bots are on the verge of achieving Artificial General Intelligence (AGI), the speculative idea of an AI that can outperform human intelligence on practically any cognitive task:

We are now confident we know how to build AGI as we have traditionally understood it. We believe that, in 2025, we may see the first AI agents “join the workforce” and materially change the output of companies. We continue to believe that iteratively putting great tools in the hands of people leads to great, broadly-distributed outcomes.

We are beginning to turn our aim beyond that, to superintelligence in the true sense of the word. We love our current products, but we are here for the glorious future. With superintelligence, we can do anything else. Superintelligent tools could massively accelerate scientific discovery and innovation well beyond what we are capable of doing on our own, and in turn massively increase abundance and prosperity.

Now that bots can imitate language, are they really becoming ‘superintelligent’?

Of course, not all experts in AI agree with Altman. Cognitive scientist Gary Marcus, a vocal critic of much hype around LLMs, didn’t hold back:

Bots can imitate language but not understand words or reason with them

Researchers like Marcus have offered robust critiques explaining why LLMs are definitely not going to be ‘superintelligent’ anytime soon, so there’s no need to reiterate those arguments here. Instead, I thought I’d show a couple of ways you can see for yourself that this technology does not necessarily understand language, and why it’s not as intelligent as much of the hype makes out.

For example, these bots struggle to spell words, and they also struggle to use those words to reason logically.

Bots struggle to spell

Ask a bot to perform an amusing linguistic task, such as reversing the order of words in a sentence or spelling a word backwards. Often, the bot won’t succeed. To illustrate, I asked ChatGPT to reverse the word order of the following sentence and to reverse the letter order of each word in it: “I eat ice cream only on Sundays.” As you can see below, the AI bot couldn’t do it.

 

ChatGPT could not spell “Sundays” backwards in this example.
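
For comparison, this kind of string manipulation is trivial for ordinary code. Here is a minimal Python sketch (my own illustration, not anything the bot runs) that reverses the word order of a sentence and then the letter order of each word:

    def reverse_words_and_letters(sentence: str) -> str:
        """Reverse the order of the words, then the letters within each word."""
        words = sentence.split()                            # ["I", "eat", "ice", ...]
        reversed_words = words[::-1]                        # reverse the word order
        flipped = [word[::-1] for word in reversed_words]   # reverse each word's letters
        return " ".join(flipped)

    print(reverse_words_and_letters("I eat ice cream only on Sundays."))
    # Output: ".syadnuS no ylno maerc eci tae I"

A few lines of deterministic code get this right every time, so the task itself isn’t hard. The bot fails for a different reason.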

The reason for this failure to spell backwards is that bots don’t represent words as linguistic units composed of letters. Rather, this technology breaks text into what are called tokens, strings of text that often correspond to whole words or parts of words. (As a rough rule of thumb, one token corresponds to about four characters of English text.)
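
You can inspect this for yourself with tiktoken, OpenAI’s open-source tokenizer library. The sketch below (assuming the cl100k_base encoding used by models such as GPT-4) shows that the model receives numbered chunks of text rather than individual letters:

    # pip install tiktoken
    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")   # encoding used by models such as GPT-4

    sentence = "I eat ice cream only on Sundays."
    token_ids = enc.encode(sentence)             # integer IDs, one per token

    # The model never sees letters -- only these IDs and the text chunks they stand for
    print(token_ids)
    print([enc.decode([t]) for t in token_ids])

Run this and the sentence comes back as a handful of multi-character chunks, which helps explain why a letter-level task like spelling a word backwards trips the bot up.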

Bots struggle to reason logically

Beyond spelling words, try asking a bot to perform a logical reasoning task with those words. For instance, I asked Microsoft Copilot: “If I have 5 apples, but then give away 3 of them, and then I get 3 new apples, but these three new apples are only half the size of the previous apples, then how many apples do I have total right now?” Surprisingly, the bot couldn’t answer that question correctly.

MS Copilot did not understand that 1 small apple is still 1 apple, even if it’s only half the size of 1 large apple.
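
For the record, the arithmetic is simple, because the size of an apple has no bearing on how many apples there are. A minimal sketch of the sum the bot should have done:

    apples = 5      # start with 5 apples
    apples -= 3     # give away 3 of them
    apples += 3     # receive 3 new apples (half the size, but still whole apples)
    print(apples)   # 5 -- the size changes nothing about the count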

In fact, this failure in logic is nothing new. It’s been known for a while that bots are frequently incapable of logical reasoning. They struggle not only with spelling words but also with using those words to do basic logic. Again, the technology is impressive in how bots can imitate language, sometimes even sounding like a person in conversation. But just remember: these bots literally have no idea what they’re saying.


Related posts

Another hype cycle: Why I think generative AI is overvalued, if not overhyped

AI chatbots: artificial general intelligence or cognitive automation? 

Jailbreaking: a feature, not a bug, of general-purpose chatbots 
