Which of the following BEST addresses risk associated with hallucinations in AI systems?
A. Recursive chunking
B. Automated output validation
C. Content enrichment
D. Human oversight
The Answer Is: D
Explanation:
AAISM prescribes human-in-the-loop (HITL) controls as the primary safeguard against hallucination risk in high-impact generative AI use cases. Human oversight ensures that critical outputs are reviewed, corrected, and approved before use, with clear accountability, escalation paths, and documented decision trails. Automated output validation and content enrichment help reduce errors but are secondary controls, and recursive chunking is a data-preparation technique for retrieval pipelines, not a governance control. A minimal sketch of an approval gate combining these controls follows the references.
[References: AI Security Management™ (AAISM) Body of Knowledge: Responsible AI & Human Oversight; Generative AI Risk Controls—Approval Workflows and Human Review; AAISM Study Guide: Hallucination Risk Treatment with HITL and Approval Gates.]
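The sketch below is a hypothetical illustration of how an HITL approval gate might be structured: automated validation runs first as a secondary control, but a high-impact output is only released after a human reviewer approves it, and every decision is written to an audit record. None of the names (validate_output, request_human_review, ReviewRecord) come from the AAISM materials; they are assumptions made for illustration only.

```python
# Minimal sketch of a human-in-the-loop (HITL) approval gate for generative AI
# output. All names here (validate_output, request_human_review, ReviewRecord)
# are hypothetical illustrations, not part of any AAISM-specified API.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class ReviewRecord:
    """Documented decision-trail entry for accountability and audit."""
    output: str
    automated_checks_passed: bool
    approved: bool
    reviewer: str
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


def validate_output(output: str) -> bool:
    """Automated validation (secondary control): basic sanity checks only."""
    # Placeholder heuristic; a real validator might check citations,
    # groundedness against source documents, or policy filters.
    return bool(output.strip())


def request_human_review(output: str, reviewer: str) -> bool:
    """Primary control: a human approves, corrects, or rejects the output."""
    # In a real workflow this would route to a review queue or approval UI.
    decision = input(f"[{reviewer}] Approve this output? (y/n)\n{output}\n> ")
    return decision.strip().lower() == "y"


def release_output(output: str, high_impact: bool, reviewer: str) -> ReviewRecord:
    """Gate release of a generated output behind validation and HITL approval."""
    checks_ok = validate_output(output)
    # High-impact use cases always require human approval; automated checks
    # alone are never sufficient to release the output.
    approved = checks_ok and (not high_impact or request_human_review(output, reviewer))
    record = ReviewRecord(output, checks_ok, approved, reviewer)
    if not approved:
        print("Output withheld; escalate per the approval workflow.")
    return record


if __name__ == "__main__":
    record = release_output(
        output="Draft summary generated by the model.",
        high_impact=True,
        reviewer="analyst@example.org",
    )
    print(record)
```

The key design point the question targets is visible in release_output: the automated check can block an output, but it can never release a high-impact output on its own; only the human approval step can do that, which is why option D is the best answer.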