Comprehensive and Detailed Explanation from AWS AI Documentation:
Hallucinations occur when an AI model generates incorrect, fabricated, or misleading outputs that appear plausible but are factually wrong.
AWS generative AI guidance identifies hallucinations as:
A common limitation of generative models
A risk when models generate numerical or factual data
A key reason why validation and human review are required in critical use cases (see the sketch after this list)
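As a minimal sketch of that validation step, the snippet below calls a foundation model and routes numeric or factual-sounding outputs to human review before they are trusted. It assumes boto3 and the Amazon Bedrock Converse API; the model ID, prompt, and the simple regex-based check are illustrative assumptions, not prescribed AWS guidance.

import re
import boto3

# Assumes AWS credentials and region are configured in the environment.
bedrock = boto3.client("bedrock-runtime")

def generate(prompt: str, model_id: str = "anthropic.claude-3-haiku-20240307-v1:0") -> str:
    """Call a foundation model via the Bedrock Converse API and return its text output."""
    response = bedrock.converse(
        modelId=model_id,
        messages=[{"role": "user", "content": [{"text": prompt}]}],
    )
    return response["output"]["message"]["content"][0]["text"]

def needs_human_review(text: str) -> bool:
    """Heuristic only: numbers, dates, and citation-like phrases are common hallucination sites."""
    return bool(re.search(r"\d|according to|study|report", text, re.IGNORECASE))

if __name__ == "__main__":
    answer = generate("What was our Q3 revenue growth?")  # hypothetical critical-use prompt
    if needs_human_review(answer):
        print("Route to human review before use:\n", answer)
    else:
        print("Lower-risk output:\n", answer)

In practice the heuristic check would be replaced by grounding against trusted data or a guardrail service, but the control flow (generate, validate, then escalate to a human) is the point being illustrated.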
Why the other options are incorrect:
Safety (B) relates to preventing harmful or restricted content, not to factual accuracy.
Interpretability (C) refers to understanding how a model reaches its decisions, not to whether its outputs are true.
Cost (D) concerns operational expenses, not output quality.
AWS AI document references:
Generative AI Risks and Limitations
Responsible Use of Foundation Models
Model Validation Best Practices