
AI and the problem of induction: insight from David Hume

David Hume, one of Scotland’s most famous philosophers [Public domain image via Wikimedia Commons]
I recently traveled through Scotland, home to one of my favorite Enlightenment philosophers, David Hume. In fact, right in the capital city of Edinburgh, where Hume once lived, there’s a statue of him.

Statue of David Hume, one of my favorite Enlightenment thinkers, in Edinburgh, Scotland

Hume wrote back in the 18th century, but his philosophical thought remains relevant today, including for artificial intelligence. For example, one well-known problem he pointed out is the mistake of forming generalizations from inadequate evidence or examples, an insight commonly known as the problem of induction.

To see how this problem applies to artificial intelligence, let’s briefly define the technology along with some related terms. Then we’ll consider two of its limitations to see how the problem of induction enters the picture.

AI, ML, and GenAI chatbots

Artificial intelligence (AI) refers to technology that can perform tasks previously done by people. In effect, AI means automating human work or activities, especially in ways that imitate our intelligence and creative capabilities.

Take generative artificial intelligence (GenAI), which can create new content seemingly from scratch, such as images or text. Common examples nowadays are chatbots. These bots appear to learn and converse in human language, answering questions and responding to requests in software applications. For instance, you can ask a chatbot about something on a website or mobile app, and the bot will reply with (hopefully) accurate information.

ChatGPT, a popular example of GenAI chatbot technology

Chatbots illustrate a type of machine learning (ML) software. ML is a subcategory of AI in which software appears to learn from content, such as text on apps and websites. Here’s how:

  • ML works by finding patterns in information, like common keywords and phrases. Often, ML makes use of various algorithms and statistical models programmed to detect correlations among data.
  • Then it uses those information patterns and data correlations to form generalizations about the content, such as grammatical rules about language use.
  • Finally, it uses those generalizations to generate new content, such as AI-generated text. In this way, it can mimic what a person might say in conversation.

For instance, a chatbot may learn about various greetings and responses by finding patterns in text from online books, web articles, and short messaging services. Then, when it receives a familiar greeting (such as “Hello, how are you today”), it’ll automatically give a familiar response (like “I’m well, thanks”), as the sketch below illustrates. So, although chatbots don’t understand language like humans do, these technologies can effectively imitate our linguistic capabilities.
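To make that three-step process concrete, here’s a minimal sketch in Python. Everything in it is invented for illustration: the tiny corpus of greeting-and-response pairs and the word-counting logic are crude stand-ins for the vastly larger datasets and far more sophisticated statistical models real chatbots rely on.

    from collections import Counter

    # Tiny invented corpus of (greeting, response) pairs.
    corpus = [
        ("hello how are you today", "i'm well, thanks"),
        ("hello how are you", "i'm well, thanks"),
        ("good morning", "good morning to you"),
        ("hello there", "hi, nice to meet you"),
    ]

    # Step 1: find patterns -- count how often each greeting word
    # co-occurs with each response.
    pair_counts = Counter()
    for greeting, response in corpus:
        for word in greeting.split():
            pair_counts[(word, response)] += 1

    # Step 2: generalize -- for each word, keep the response it most
    # often appears with.
    best_response = {}
    for (word, response), count in pair_counts.items():
        if count > best_response.get(word, (None, 0))[1]:
            best_response[word] = (response, count)

    # Step 3: generate -- let the words of an incoming greeting "vote"
    # for the response they are most strongly associated with.
    def reply(greeting):
        votes = Counter()
        for word in greeting.split():
            if word in best_response:
                votes[best_response[word][0]] += 1
        return votes.most_common(1)[0][0] if votes else "sorry, i don't follow"

    print(reply("hello how are you today"))  # -> i'm well, thanks

Crude as it is, the sketch shows the shape of the idea: the program never understands the greeting; it only exploits patterns in the examples it was given.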

GenAI chatbots can effectively imitate our linguistic capabilities, even if this technology doesn’t actually understand language in the way humans do.

David Hume and AI

Now what does the insight of David Hume have to do with AI, ML, and GenAI chatbots? To see how the problem of induction applies to this technology, we need to consider two of its limitations.

Limitation 1: Inaccurate input

One limitation is that the technology may find patterns in online content that’s simply not accurate. Obviously, not all content on the Internet is of the best quality; much of it contains errors in grammar, style, or spelling. And there’s plenty of online content that’s not truthful or factual, from misinformation and disinformation to fake news.

If the AI’s input is inaccurate, then the AI’s output will likely be inaccurate too. (Computer scientists often refer to this problem as GIGO: garbage in, garbage out.)
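Here’s a toy illustration of GIGO, again with invented data: a “model” that completes a phrase with whatever ending appears most often in its input. Because the repeated ending in this made-up corpus is wrong, the output is confidently wrong too.

    from collections import Counter

    # Invented input data containing a repeated factual error (garbage in).
    documents = [
        "edinburgh is the capital of scotland",
        "edinburgh is the capital of england",  # wrong, but present in the data
        "edinburgh is the capital of england",  # misinformation often repeats
    ]

    # The "model" generalizes from frequency: the most common last word wins.
    completions = Counter(doc.rsplit(" ", 1)[1] for doc in documents)
    print(completions.most_common(1)[0][0])  # -> england (garbage out)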

Limitation 2: Insufficient information

However, even if the input is accurate, the output that follows may still not be. That’s because the input may not provide sufficient information to support valid generalizations. When the information is insufficient, the generalizations used to generate new content may be faulty or biased. This sort of bias can result from limited samples of data: inadequate evidence or examples.
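Here’s a minimal sketch of this second failure mode, using the classic example of swans (the observations are invented): a rule induced from a small, unrepresentative sample is confidently wrong about the wider world.

    from collections import Counter

    # Accurate but insufficient input: every swan in this small sample is white.
    sample = ["white", "white", "white"]

    # Generalize from frequency alone.
    prediction = Counter(sample).most_common(1)[0][0]
    print(f"All swans are {prediction}.")  # a confident but faulty generalization

    # Nothing in the limited sample hints that black swans exist,
    # so the induced rule fails as soon as the data broadens.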

The problem of induction in AI: making mistaken generalizations from inaccurate input or insufficient information

Because of these limitations, the technology can make mistakes when attempting to form generalizations from input that’s inaccurate or information that’s insufficient. That’s why we have the problem of induction with AI.

Of course, if he were alive today, David Hume would probably not be surprised.


Posts on other philosophers

Heidegger on the essence of technology: What is technology, really?

Stoic virtue in the digital age: Seneca on outrage and distraction

Would Buddha buy a smartphone or use social media? 

 
