Another hype cycle: Why I think generative AI is overvalued, if not overhyped

Is generative AI going through a hype cycle?

Technological innovation often goes through what’s called a hype cycle. Represented graphically, it begins with a technology trigger, followed by a peak of inflated expectations about some supposedly revolutionary technology and then a trough of disillusionment. Next comes a gradual slope of enlightenment, or understanding what the technology realistically can and can’t do. Finally, there’s a plateau of productivity, in which a limited but viable application of the technology takes place.


[Figure: The hype cycle of technological innovation, shown as a technology trigger, a peak of inflated expectations, a trough of disillusionment, a slope of enlightenment, and a plateau of productivity. Image source: Olga Tarkovskiy, CC BY-SA 3.0, via Wikimedia Commons]

To me, it looks very much like generative AI is going through this cycle. If it’s not overhyped, generative AI is almost certainly overvalued. I’ll explain why. But first, let’s define what we mean by generative AI.

What is generative AI?

Generative AI is artificial intelligence that can create content, such as text, images, or video. It does this by finding patterns in large sets of data (its input) and using those patterns to generate new information (its output). For example, AI chatbots look for patterns in written or spoken language online (their input) to create text or voice interactions that closely imitate human conversation (their output).

However, these input-output patterns are statistical: the model uses past correlations in its data to predict likely answers. Of course, those correlations may be spurious, which means the answers may not be accurate. (As the old saying goes, correlation does not necessarily imply causation.) For these reasons, generative AI can, and probably always will, generate bogus content, known as ‘hallucinations.’
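
To make that concrete, here’s a minimal sketch of the idea in Python. It’s a toy bigram model, nothing like the massive neural networks behind real chatbots, but it illustrates the same basic principle: count patterns in the input, then generate output by sampling from those past correlations.

```python
import random
from collections import Counter, defaultdict

# Toy bigram model: learn which word tends to follow which (the input
# patterns), then generate text by sampling from those statistics.
corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count the observed patterns: word -> how often each word followed it.
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def generate(start, length=8):
    """Extend the text by repeatedly sampling a likely next word."""
    words = [start]
    for _ in range(length):
        options = follows[words[-1]]
        if not options:
            break
        # Pick the next word in proportion to how often it followed before.
        nxt = random.choices(list(options), weights=list(options.values()))[0]
        words.append(nxt)
    return " ".join(words)

random.seed(1)
print(generate("the"))  # fluent-looking, but produced purely from correlations
```

The output looks plausible, but the model has no idea what cats or mats are. It only replays statistical patterns, which is exactly why such systems can assert falsehoods with the same confidence as facts.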

Hallucinations: a feature or bug of AI?

A hallucination is what happens when generative AI fabricates false information. AI hallucinations are kind of like human confabulations: fluent, confident, and wrong. For instance, AI chatbots (such as ChatGPT) may give completely inaccurate answers to questions, spreading misinformation and disinformation.

Can we fix generative AI so that it doesn’t hallucinate in this way? Unfortunately, probably not. As cognitive scientists like Gary Marcus have pointed out, hallucinations will likely remain a problem with generative AI. As mentioned, this technology works by guessing answers from patterns in its data, based on correlations that may be spurious. It doesn’t actually perform contextual reasoning or creative inference (even if it can imitate them in certain cases).

Furthermore, there may be a problem with the data that generative AI ends up using, including data scraped from the web. Obviously, not all data are valid, and there’s no shortage of garbage on the Internet. In fact, computer scientists have a saying: “garbage in, garbage out!” In other words, the quality of a system’s output will only ever be as good as its input. And simply adding more data won’t solve that problem. If anything, it might make the problem worse.
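
To see garbage in, garbage out at work, take the same kind of toy bigram model sketched above (again, just my own illustration) and train it on a corpus containing a falsehood. It will repeat the falsehood as fluently as any fact.

```python
import random
from collections import Counter, defaultdict

# The training data mixes a falsehood in with a fact; the model cannot
# tell them apart, because it only counts word co-occurrences.
corpus = "the moon is made of cheese . the moon orbits the earth .".split()

follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

random.seed(3)
words = ["the", "moon"]
for _ in range(4):
    options = follows[words[-1]]
    if not options:
        break
    nxt = random.choices(list(options), weights=list(options.values()))[0]
    words.append(nxt)

# Depending on the random draw, this may print "the moon is made of
# cheese": garbage in, garbage out.
print(" ".join(words))
```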

Consider the following scenario: if generative AI creates content by finding patterns in human-generated data (its input) in order to make AI-generated information (its output), then, as that output spreads across the web, it increasingly becomes the new input. In this way, generative AI may produce a self-reinforcing feedback loop of dubious information (a problem related to “data poisoning”), becoming more and more unreliable over time (a scenario referred to as AI “model collapse”).
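
A tiny simulation can illustrate the worry. In the sketch below (my own toy illustration, not a real training pipeline), the “model” is just a normal distribution fitted to data, and each new generation is trained on samples from the previous generation’s output instead of the original human data. Over many generations, the fitted model drifts and loses the diversity of the original data.

```python
import random
import statistics

def fit(data):
    """'Train' a model: estimate the data's mean and standard deviation."""
    return statistics.mean(data), statistics.stdev(data)

def generate(model, n):
    """'Generate' n synthetic data points by sampling from the model."""
    mu, sigma = model
    return [random.gauss(mu, sigma) for _ in range(n)]

random.seed(42)
human_data = [random.gauss(0.0, 1.0) for _ in range(1000)]  # original input
model = fit(human_data)

for gen in range(1, 51):
    synthetic = generate(model, 10)  # the model's output becomes the new input
    model = fit(synthetic)
    if gen % 10 == 0:
        print(f"generation {gen}: mean={model[0]:+.3f}, stdev={model[1]:.3f}")

# With each pass, the fitted distribution drifts away from the original
# data, and with small synthetic samples the estimated spread tends
# toward zero: a toy analogue of "model collapse".
```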

So, in all likelihood, hallucinations will continue to be a feature—not a bug—of generative AI.

What happens with generative AI after its hype cycle?

With hallucinations as a persistent problem, the inflated expectations around generative AI (“Look, it can create amazing, high-quality, original content!”) will likely give way to disillusionment (“Oh crap, look at all those hallucinations and bad data!”). Eventually, there will be a more enlightened understanding, and a more productive implementation, of what generative AI can realistically do.

My hunch is that, once this hype cycle runs its course, we’ll simply use generative AI as another form of automation, typically for narrow, business-related processes. Indeed, Microsoft appears to have gone in this direction with its AI chatbot, which is designed specifically to help people use Microsoft products.

In such cases, the goal would be to use generative AI for limited purposes, in which input and output are tightly controlled, thereby reducing the potential for hallucinations.
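
As a deliberately simple sketch of that control idea (my own illustration, using plain keyword matching rather than an actual generative model, and not Microsoft’s real design), imagine wrapping an assistant so that it only ever returns vetted answers and refuses everything else:

```python
# Hypothetical vetted knowledge base; the topics and answers are made up.
VETTED_ANSWERS = {
    "reset password": "Open Settings > Accounts and choose 'Reset password'.",
    "update app": "Open the app's store page and select 'Update'.",
}

def constrained_assistant(query: str) -> str:
    """Answer only from vetted content; never invent a response."""
    q = query.lower()
    for topic, answer in VETTED_ANSWERS.items():
        if topic in q:
            return answer
    # Out-of-scope queries get a refusal instead of a guess, so
    # hallucinations are ruled out by construction.
    return "Sorry, I can only help with supported product questions."

print(constrained_assistant("How do I reset password on my laptop?"))
print(constrained_assistant("Who won the 1962 World Cup?"))
```

Real deployments would still use a generative model under the hood, but the same logic applies: constrain what goes in and what can come out, and the room for hallucination shrinks.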


