Why genius or superintelligent AI won’t come from chatbots

Will GenAI chatbots lead to genius or superintelligent AI?

Are chatbots a sign of a genius or superintelligent AI to come? [Image source: Robink23, CC BY-SA 4.0, via Wikimedia Commons]

Conversations with generative AI (GenAI) chatbots are becoming commonplace. Many of these bots sound increasingly human, and some may even seem pretty intelligent. Hence, there's speculation that this technology might lead to a kind of genius or superintelligent AI. Of course, what counts as 'genius' or 'superintelligent' is often up for debate, but we usually think of it as exceptional intellect or creativity.

Are chatbots a sign of genius or superintelligent AI to come? I remain skeptical, and here’s why.

What GenAI chatbots say comes from statistical patterns

GenAI chatbots can effectively imitate human conversation, even though the technology itself does not really understand human language. How is that possible? In short, through the statistical modeling of language. This sort of statistical modeling happens through large language models (LLMs), the innovation behind many GenAI chatbots.

  • First, an LLM identifies patterns across large datasets that serve as textual input, like online books, articles, messages, and discussions.
  • Then, the technology uses these patterns to form generalizations, such as common combinations of words, phrases, and sentence structures.
  • Finally, it uses those generalizations to generate textual output, including human-like content or text that can convincingly mimic what someone might say in conversation.

In effect, LLMs work by predicting the next most likely word, phrase, or sentence. It's a prediction based on a statistical model of language, which identifies patterns in textual input to generate plausible-sounding output. However, because LLMs work through this type of statistical modeling, both the patterns they learn from their input and the content they generate as output tend to be biased toward the most common statistical patterns.
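To make "predicting the next most likely word" concrete, here is a deliberately tiny sketch of the idea using a bigram model: count which word most often follows each word in a corpus, then predict that word. (Real LLMs use neural networks over vast datasets, not word-pair counts, but the underlying principle of picking the statistically most likely continuation is the same; the corpus and function names below are illustrative.)

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Count, for each word, how often every other word follows it."""
    counts = defaultdict(Counter)
    tokens = corpus.split()
    for current_word, next_word in zip(tokens, tokens[1:]):
        counts[current_word][next_word] += 1
    return counts

def predict_next(counts, word):
    """Return the most frequent follower of `word` -- the 'most likely next word'."""
    if word not in counts:
        return None
    return counts[word].most_common(1)[0][0]

# A toy corpus: "the" is followed by "cat" twice, "mat" and "fish" once each.
corpus = "the cat sat on the mat the cat ate the fish"
model = train_bigram(corpus)
print(predict_next(model, "the"))  # prints "cat", the most common follower
```

Note how the prediction simply mirrors the most frequent pattern in the training text, which is exactly the bias toward common patterns discussed above.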

Statistical patterns don’t necessarily lead to genius or superintelligent AI

Generating content based on common statistical patterns can produce human-like text, which may even look quite intelligent. Nevertheless, because that kind of text is based on, and likely biased toward, these patterns, it often favors frequently occurring words and phrases, standard sentence structures, and widely held viewpoints.

Granted, LLMs can sometimes produce surprising or unexpected results, especially when tweaked or prompted in specific ways. And under the right circumstances, those results could be interpreted as ‘creative’ (loosely speaking, since statistical models don’t possess imagination per se). But for the most part, groundbreaking ideas aren’t what typically come out of the statistical modeling of language.
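One common way LLMs are "tweaked" to produce more or less surprising output is temperature sampling: instead of always picking the single most likely next token, the model samples from its probability distribution, with a temperature parameter controlling how much weight rarer options get. The sketch below illustrates this mechanism in isolation (the logit values are made up for demonstration):

```python
import math
import random

def sample_with_temperature(logits, temperature=1.0):
    """Sample an index from a softmax over logits scaled by temperature.

    Low temperature sharpens the distribution (the most likely option
    dominates); high temperature flattens it (rarer options become more
    probable), which is one source of 'surprising' model output.
    """
    scaled = [logit / temperature for logit in logits]
    peak = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]

    # Draw from the resulting distribution.
    r = random.random()
    cumulative = 0.0
    for i, p in enumerate(probs):
        cumulative += p
        if r < cumulative:
            return i
    return len(probs) - 1

logits = [5.0, 1.0, 0.5]  # option 0 is by far the most likely
print(sample_with_temperature(logits, temperature=0.01))  # almost always 0
print(sample_with_temperature(logits, temperature=5.0))   # often 1 or 2
```

Even at high temperature, though, the model can only reshuffle probabilities it learned from its training data; the "surprise" is a sampling effect, not imagination.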

For this reason, the responses from LLMs can appear very generic or formulaic. In other words, what they say may feel mediocre—almost like a statistical average of language usage. To be fair, that kind of output can be helpful for several tasks—for instance, automating templates and responses for routine kinds of messages. Still, it probably won’t generate genius-level works of literature, like those of Dante Alighieri, William Shakespeare, or James Joyce.

After all, genius or superintelligent abilities, such as exceptional intellect and creativity, usually entail breaking established patterns and going beyond averages to produce avant-garde insights and novel ideas. So the speculation that LLM-powered chatbots will somehow generate content that reflects genius or superintelligent AI strikes me as extremely unlikely.
