Which of the following best explains why AI output could be inaccurate?
A. Model poisoning
B. Social engineering
C. Output handling
D. Prompt injections
The Answer Is: A
Explanation:
Model poisoning occurs when an attacker manipulates the training data or the training process of an AI model so that its predictions are deliberately inaccurate or biased. In the SecurityX CAS-005 objectives, this is part of understanding emerging technology threats, specifically AI/ML vulnerabilities. This differs from:
Social engineering, which manipulates humans rather than AI models.
Output handling, which deals with how outputs are processed but doesn’t cause inaccuracy at the model level.
Prompt injections, which manipulate the model at query time, not during training.
Because model poisoning directly corrupts the AI model itself, it is the clearest reason AI outputs could be inaccurate; a minimal sketch of such an attack follows.
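To make this concrete, here is a minimal Python sketch of one common poisoning technique, label flipping, using scikit-learn. The synthetic dataset, logistic-regression model, and 60% flip rate are illustrative assumptions, not part of the CAS-005 objectives; the point is only that corrupting the training labels changes what the model learns.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Build a toy binary-classification dataset.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0
)

# Clean baseline: train on the unmodified labels.
clean = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("clean accuracy:   ", clean.score(X_test, y_test))

# Poisoning step (illustrative): the attacker flips 60% of the
# class-1 training labels to class 0, biasing the model against
# ever predicting class 1.
rng = np.random.default_rng(0)
poisoned = y_train.copy()
ones = np.flatnonzero(poisoned == 1)
flip = rng.choice(ones, size=int(0.6 * len(ones)), replace=False)
poisoned[flip] = 0

# Retrain on the poisoned labels and score on the same clean test set.
bad = LogisticRegression(max_iter=1000).fit(X_train, poisoned)
print("poisoned accuracy:", bad.score(X_test, y_test))

Because the poisoned training set under-represents class 1, the retrained model systematically mislabels class-1 inputs on the clean test set. The corruption happens at training time and is baked into the model itself, which is exactly what distinguishes model poisoning from query-time attacks such as prompt injection.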