What is AI Hallucination?
When an AI model confidently generates false or fabricated information that is not supported by its training data or provided context.
Definition
AI hallucination occurs when a language model produces outputs that are factually incorrect, invented, or inconsistent with the provided context — while presenting them with apparent confidence. Hallucination is an inherent property of generative models: they produce likely-sounding text rather than verified facts. It is one of the central challenges of deploying AI in high-stakes contexts like legal, medical, or financial applications.
Why it matters
Hallucination is why AI outputs cannot be trusted at face value. In professional contexts, a hallucinated legal citation, a fabricated statistic, or an invented code API can cause real harm. Understanding hallucination (how it arises, when it is more or less likely, and how to mitigate it) is essential for anyone building or using AI products professionally.
How it works
LLMs generate text one token at a time, sampling each token from probability distributions learned during training. They do not consult a lookup table of facts; they are sophisticated pattern matchers. When asked about something outside or at the edge of their training distribution, they produce plausible-sounding completions that may not be true. Hallucination is more likely for obscure facts, recent events past the training cutoff, precise numerical data, and citations or URLs.
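To make the mechanism concrete, here is a minimal Python sketch of next-token sampling. The prompt, candidate completions, and logit scores are all invented for illustration (a real model scores tens of thousands of tokens), but the core behavior is the same: the output is drawn by likelihood, and nothing in the loop checks whether the chosen completion is true.

import math
import random

# Toy sketch, not a real model: the logits below are invented numbers standing
# in for whatever scores a trained LLM might assign to candidate completions of
# a prompt like "The controlling case on this point is ...".
candidates = ["Smith v. Jones (1984)", "Doe v. Roe (1991)", "I do not know"]
logits = [2.1, 1.8, 0.3]  # hypothetical scores; fluent-looking citations rank high

def softmax(scores):
    # Convert raw scores into a probability distribution.
    top = max(scores)
    exps = [math.exp(s - top) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax(logits)
completion = random.choices(candidates, weights=probs, k=1)[0]

print({c: round(p, 2) for c, p in zip(candidates, probs)})
print("Generated:", completion)  # a confident-sounding citation, true or not

Note that no step in this process consults a source of truth: the model emits whichever completion the distribution favors, which is exactly how confident but false citations arise.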
Examples in practice
Hallucinated legal citations
A lawyer used ChatGPT to generate legal citations for a brief. The AI invented several plausible-looking but entirely fake case references, and the brief was filed with fabricated precedents.
