“Hallucinations” is a term coined to describe cases where LLMs produce what?
A. Outputs that are only similar to the input data.
B. Images generated from a prompt description.
C. Correct-sounding results that are wrong.
D. Grammatically incorrect or broken outputs.
The Answer Is:
C
Explanation:
In the context of LLMs, “hallucinations” refer to outputs that sound plausible and correct but are factually incorrect or fabricated, as emphasized in NVIDIA’s Generative AI and LLMs course. Hallucinations occur when a model generates responses from patterns in its training data without grounding in factual knowledge, producing misleading or invented information. Option A is incorrect, as hallucinations are not about similarity to the input data but about factual inaccuracy. Option B is wrong, as hallucinations refer to text generation, not image generation. Option D is inaccurate, as hallucinated outputs are typically grammatically coherent but factually wrong. The course states: “Hallucinations in LLMs occur when models produce correct-sounding but factually incorrect outputs, posing challenges for ensuring trustworthy AI.”
[References: NVIDIA Building Transformer-Based Natural Language Processing Applications course; NVIDIA Introduction to Transformer-Based Natural Language Processing course.]
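Because hallucinations stem from outputs that lack grounding in a factual source, a simple way to see the idea in practice is to compare a generated answer against a trusted reference passage. The sketch below is an illustration only, not part of the course material: it flags answer sentences with little word overlap against a reference text as possibly ungrounded. The function names (`unsupported_sentences`, `token_set`) and the 0.5 overlap threshold are assumptions made for this example; production systems use stronger techniques such as retrieval-augmented generation or entailment-based fact checking.

```python
# Minimal sketch: flag answer sentences not supported by a reference passage,
# a rough lexical proxy for detecting hallucinated (ungrounded) claims.
import re


def sentence_split(text: str) -> list[str]:
    # Naive sentence splitter on ., !, ? followed by whitespace.
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]


def token_set(text: str) -> set[str]:
    # Lowercased word tokens, punctuation ignored.
    return set(re.findall(r"[a-z0-9]+", text.lower()))


def unsupported_sentences(answer: str, reference: str,
                          min_overlap: float = 0.5) -> list[str]:
    """Return answer sentences whose word overlap with the reference is below min_overlap."""
    ref_tokens = token_set(reference)
    flagged = []
    for sentence in sentence_split(answer):
        tokens = token_set(sentence)
        if not tokens:
            continue
        overlap = len(tokens & ref_tokens) / len(tokens)
        if overlap < min_overlap:
            flagged.append(sentence)
    return flagged


if __name__ == "__main__":
    reference = "The GPU was announced in 2022 and has 80 GB of memory."
    answer = "The GPU was announced in 2022. It also won a major physics prize."
    for s in unsupported_sentences(answer, reference):
        print("Possibly hallucinated:", s)
```

In this toy run the second sentence is flagged because none of its content words appear in the reference, mirroring the definition in option C: the text reads as fluent and correct but is not supported by the facts available.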