AI code hallucination
An AI code hallucination is code generated by an LLM that appears valid but contains fabricated logic, APIs, or assumptions that fail at runtime.
Why it's dangerous
- it compiles and can pass shallow tests
- it looks "reasonable," so it survives review
- it often fails silently (fake fallbacks, plausible-looking return values)
Practical detection signals
- placeholder branches and "temporary" stubs in production paths
- imports that don't exist or don't match your internal APIs
- hardcoded or generic data objects returned in lieu of real calls
- missing env vars that trigger fake fallbacks
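Several of the signals above are mechanically checkable. Below is a heuristic sketch (the marker strings and signal choices are assumptions, not a standard) that uses Python's `ast` module to flag stub functions, surface imports for cross-checking against your dependency list, and catch "temporary" markers in source text.

```python
import ast

# Hypothetical marker list; tune to your codebase's conventions.
SUSPICIOUS_MARKERS = ("TODO", "FIXME", "temporary", "placeholder")

def scan_source(source: str) -> list[str]:
    """Flag simple hallucination signals in a source string (heuristics, not proof)."""
    findings = []
    tree = ast.parse(source)
    for node in ast.walk(tree):
        # Signal: a function whose entire body is `pass` -- a stub in a production path.
        if isinstance(node, ast.FunctionDef):
            if len(node.body) == 1 and isinstance(node.body[0], ast.Pass):
                findings.append(f"stub function: {node.name}")
        # Signal: every import, so you can cross-check it against real dependencies.
        if isinstance(node, ast.Import):
            for alias in node.names:
                findings.append(f"import to verify: {alias.name}")
    for lineno, line in enumerate(source.splitlines(), 1):
        if any(marker in line for marker in SUSPICIOUS_MARKERS):
            findings.append(f"marker on line {lineno}: {line.strip()}")
    return findings

sample = "import fakelib\n\ndef handler():\n    pass  # TODO: temporary stub\n"
print(scan_source(sample))
```

A scanner like this cannot prove an import is fabricated, but it produces a short list a human can verify in seconds, which is where these checks earn their keep.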
Prevention
Use a gate that checks reality boundaries: endpoint wiring, auth coverage, required env vars, and mock/stub imports in production builds.
Some teams run a guardrail of this kind automatically in CI, failing the build when these checks trip.
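Two of the gate checks named above, required env vars and test-only imports in production code, can be sketched as follows. The env-var names and the banned-module list are hypothetical placeholders for your own configuration.

```python
import ast
import os

REQUIRED_ENV = ("DATABASE_URL", "API_TOKEN")          # hypothetical required vars
BANNED_PROD_IMPORTS = {"unittest.mock", "responses", "faker"}  # test-only modules

def check_env() -> list[str]:
    """Report required environment variables that are not set."""
    return [f"missing env var: {v}" for v in REQUIRED_ENV if v not in os.environ]

def check_mock_imports(source: str) -> list[str]:
    """Report imports of test-only modules in a production source string."""
    problems = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            names = [alias.name for alias in node.names]
        elif isinstance(node, ast.ImportFrom):
            names = [node.module or ""]
        else:
            continue
        for name in names:
            if name in BANNED_PROD_IMPORTS:
                problems.append(f"test-only import: {name}")
    return problems

def gate(source_files: dict[str, str]) -> int:
    """Return a nonzero exit code when any reality-boundary check fails."""
    failures = check_env()
    for path, src in source_files.items():
        failures += [f"{path}: {p}" for p in check_mock_imports(src)]
    for failure in failures:
        print("GATE FAIL:", failure)
    return 1 if failures else 0
```

Wiring `gate()` into CI is then a one-line script step whose exit code fails the build, which keeps the policy enforceable rather than advisory.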