AI Hallucinations

Artificial Intelligence is, as its name indicates, artificial. It is the product of human work. (Just how “intelligent” it is can be debated… but that’s another post!) However, it also shows emergent properties, or features that weren’t originally programmed or intended. One of those is the ability to hallucinate.

The word “hallucinate” is used to describe an artificial intelligence’s tendency to create original, non-real responses… or, in other words, to make things up. “Hallucinate” is an anthropomorphic word that makes AI seem more human-like. In reality, hallucination is an artificial process and a by-product of how AI is created and trained. Some research has indicated that this behaviour is inevitable, given the nature of current AI models.

AI hallucinations are cases in which the AI produces false or misleading information in response to a prompt or query. This can be as simple as a basic factual error or as serious as completely fabricated information. A few examples:

  • Google’s Bard incorrectly said that the James Webb Space Telescope took the first photograph of an exoplanet. (Exoplanets were first photographed at least 15 years before the JWST’s launch.)
  • A lawyer used ChatGPT to identify cases that set precedent for his lawsuit, and at least six of those cases were completely made up by the AI.
  • Meta’s Galactica AI tool for scientific research was shut down only three days after it was launched because of multiple hallucinations, including citing academic research papers that didn’t exist.

I recently had a personal experience of how ChatGPT can fabricate information. In response to the prompt, “Can you explain how to use AI in studying and class work?”, ChatGPT listed a variety of AI tools that could help students with their studying… including tools that didn’t employ AI!

In this video, ChatGPT demonstrates its hallucinations by listing “AI tools” that don’t use AI at all!

AI hallucinations are often attributed to inadequate training data. If the AI model does not have enough high-quality data on a topic, it may invent information in response to a query. This is because the AI doesn’t really understand or interpret information; it looks for patterns and predicts what will come next. If the pattern suggests that an answer should exist, the AI may generate that answer even when it doesn’t.
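To see why pattern prediction can go wrong, here is a deliberately tiny sketch in Python. It is a toy bigram model, nothing like a production LLM, and the training corpus and the predict_next helper are invented purely for illustration. The model learns only which word tends to follow which, so when asked about something it has never seen, it still completes the sentence with a fluent, confident claim:

    # Toy illustration only: a model that learns word-transition patterns
    # will confidently "complete" a sentence with whatever is statistically
    # likely, whether or not it is true.
    from collections import Counter, defaultdict

    # Tiny made-up training corpus. The model sees "first photographed by"
    # followed by telescope names, so it learns the pattern, not the facts.
    corpus = (
        "the exoplanet was first photographed by the very large telescope . "
        "the nebula was first photographed by the hubble telescope . "
        "the galaxy was first photographed by the hubble telescope ."
    ).split()

    # Build a bigram model: count which word follows which.
    transitions = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        transitions[prev][nxt] += 1

    def predict_next(word):
        """Return the statistically most likely next word."""
        followers = transitions.get(word)
        return followers.most_common(1)[0][0] if followers else "."

    # Ask about a comet, which the corpus never mentions. The model still
    # produces a fluent answer: a "hallucination" in miniature.
    sentence = ["the", "comet", "was", "first", "photographed", "by"]
    while sentence[-1] != "." and len(sentence) < 15:
        sentence.append(predict_next(sentence[-1]))
    print(" ".join(sentence))

Running this prints “the comet was first photographed by the hubble telescope .”, which is grammatical and plausible-sounding, yet supported by nothing in the training data. Real language models are vastly more sophisticated, but the failure mode is analogous: they predict plausible continuations rather than retrieve verified facts.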

Studies have shown that hallucinations happen fairly frequently, by some estimates as much as 25% of the time. All tech companies producing AI tools, and many AI tools themselves, warn that information produced by the AI should be verified. ChatGPT’s input page explicitly states, “ChatGPT can make mistakes. Consider checking important information.”

While tech companies are working to reduce or eliminate AI hallucinations, hallucinations continue to be a real issue.


For more information on artificial intelligence, check out my online course
Conquer Computer Science: AI, Automated Systems, and Databases

John Iglar

John Iglar has taught Computer Science to students aged 3 to 83. He has taught in some of the finest international schools around the world, covering the IB Diploma, IGCSE, and many more curricula! No longer tied to a school, he tutors, blogs, freelances, and creates online courses.
