Hallucination
April 13, 2026 · 1 min · glossary
When an AI model produces confident, plausible-sounding output that is factually wrong or entirely fabricated.
In code generation, this takes the form of invented functions, non-existent API methods, or subtly broken logic that compiles but behaves incorrectly. In agent self-reports, it appears as claims that work was completed when it was in fact missing or broken.
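A minimal sketch of the invented-method case, using pandas as an illustration: the hallucinated `remove_duplicates` is not a real DataFrame method (it is shown deliberately, to demonstrate the failure), while `drop_duplicates` is the actual API.

```python
import pandas as pd

df = pd.DataFrame({"id": [1, 1, 2], "value": [10, 10, 20]})

# A model might confidently emit this plausible-sounding method,
# which does not exist on DataFrame and fails only at runtime:
try:
    df = df.remove_duplicates()  # hallucinated API
except AttributeError as err:
    print(f"Hallucinated call failed: {err}")

# The real pandas method:
df = df.drop_duplicates()
print(df)
```

Note that this is the benign variant: the error surfaces immediately. The subtler case, logic that runs but produces wrong results, offers no such signal.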
Hallucination rates have improved dramatically, but the failure mode remains relevant, particularly in automated pipelines where no human reviews the output.
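One possible guard for such pipelines, sketched below with hypothetical names (`references_undefined_calls`, `summarize_table`, and the allow-list are illustrative, not an established API): statically scan generated code for calls to names outside an allow-list before executing anything. This catches invented top-level functions, though not subtly wrong logic.

```python
import ast

def references_undefined_calls(source: str, allowed: set[str]) -> list[str]:
    """Flag calls in generated code whose names aren't in an allow-list.

    A crude pre-execution check: it won't catch logic that runs but
    behaves incorrectly, but it does catch hallucinated function names.
    """
    tree = ast.parse(source)
    flagged = []
    for node in ast.walk(tree):
        # Only inspect simple calls like foo(...); attribute calls
        # (obj.method(...)) would need a separate check.
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id not in allowed:
                flagged.append(node.func.id)
    return flagged

generated = "result = summarize_table(data)"  # hypothetical model output
print(references_undefined_calls(generated, allowed={"print", "len"}))
# -> ['summarize_table']
```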