Which of the following controls would BEST help to prevent data poisoning in AI models?
A. Increasing the size of the training data set
B. Implementing a strict data validation mechanism
C. Establishing continuous monitoring
D. Regularly updating the foundational model
The Answer Is: B
Explanation:
The most direct preventive control against data poisoning is strict data validation at ingestion: provenance checks, schema and constraint validation, anomaly and outlier screening, label consistency tests, and source allow/deny lists applied before data reaches the training pipeline. A larger training set (A) does not inherently prevent poisoning, continuous monitoring (C) is a detective rather than preventive control, and regularly updating the foundation model (D) does not stop tainted inputs from entering the pipeline.
[References: AI Security Management™ (AAISM) Body of Knowledge — Adversarial ML Threats and Training-Time Attacks; Secure Data Ingestion and Validation Controls. AAISM Study Guide — Poisoning Prevention: Provenance, Validation, and Sanitization Gates.]
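To make the idea concrete, the following is a minimal Python sketch of such a validation gate, assuming a simple tabular dataset of (source, features, label) records. The names ALLOWED_SOURCES, LABEL_SET, FEATURE_RANGE, and validate_batch are illustrative assumptions, not part of any AAISM material or standard library.

# Minimal sketch of a pre-training data validation gate (assumed record format).
from dataclasses import dataclass
from statistics import mean, stdev

ALLOWED_SOURCES = {"internal-curated", "vendor-a"}   # provenance allowlist (assumed)
LABEL_SET = {0, 1}                                   # expected label domain (assumed)
FEATURE_RANGE = (0.0, 1.0)                           # expected normalized feature range
Z_THRESHOLD = 4.0                                    # outlier-screening cutoff

@dataclass
class Record:
    source: str
    features: list[float]
    label: int

def validate_batch(batch: list[Record]) -> list[Record]:
    """Return only records that pass provenance, schema, and outlier checks."""
    # 1. Provenance and schema/constraint validation per record.
    candidates = [
        r for r in batch
        if r.source in ALLOWED_SOURCES
        and r.label in LABEL_SET
        and all(FEATURE_RANGE[0] <= x <= FEATURE_RANGE[1] for x in r.features)
    ]
    if len(candidates) < 2:
        return candidates

    # 2. Anomaly/outlier screening: drop records whose mean feature value
    #    deviates strongly from the batch distribution (crude z-score test).
    means = [mean(r.features) for r in candidates]
    mu, sigma = mean(means), stdev(means)
    if sigma == 0:
        return candidates
    return [
        r for r, m in zip(candidates, means)
        if abs(m - mu) / sigma <= Z_THRESHOLD
    ]

if __name__ == "__main__":
    batch = [
        Record("internal-curated", [0.2, 0.3, 0.4], 1),
        Record("unknown-scrape", [0.2, 0.3, 0.4], 1),   # rejected: untrusted source
        Record("vendor-a", [0.1, 0.2, 0.3], 7),         # rejected: label outside domain
    ]
    clean = validate_batch(batch)
    print(f"{len(clean)} of {len(batch)} records admitted to the training pipeline")

In practice the same gating pattern is applied inside the data ingestion pipeline, before any record can influence training; this is what makes it a preventive control, in contrast to monitoring, which only detects poisoning after the fact.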