Which of the following BEST describes an adversarial attack on an AI model?
A. Attacking the underlying hardware of the AI system
B. Providing inputs that mislead the AI model into incorrect predictions
C. Reverse engineering the AI model using social engineering techniques
D. Conducting denial-of-service (DoS) attacks against AI APIs
The Answer Is: B
Explanation:
In AAISM, an adversarial attack is defined as the use of maliciously crafted inputs that cause erroneous model behavior (misclassification or targeted misprediction) while the model's architecture and parameters remain unchanged. Hardware compromise (A) and DoS (D) are system-level attacks, not adversarial ML per se. Social engineering to obtain model information (C) is an information-gathering or security-breach vector, distinct from input-space adversarial manipulation.
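As a concrete illustration of option B, below is a minimal sketch of an evasion attack in the style of the Fast Gradient Sign Method (FGSM), applied to a hypothetical NumPy toy logistic-regression classifier. The weights, input, and epsilon are illustrative assumptions, not drawn from AAISM material; the key point mirrors the definition above: the perturbation lives entirely in input space, and the model itself is never modified.

import numpy as np

# Hypothetical toy model: p(y=1|x) = sigmoid(w.x + b). The weights below
# are illustrative only, not taken from any real system.
w = np.array([2.0, -1.5, 0.5])
b = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    return sigmoid(w @ x + b)

# Gradient of the cross-entropy loss with respect to the INPUT (not the
# weights): for this model, dL/dx = (p - y) * w.
def input_gradient(x, y):
    return (predict(x) - y) * w

# FGSM step: nudge the input in the sign direction of the loss gradient,
# which maximally increases the loss under an L-infinity budget epsilon.
def fgsm(x, y, epsilon):
    return x + epsilon * np.sign(input_gradient(x, y))

x = np.array([0.5, 0.2, -0.3])  # benign input, true label y = 1
print("clean prediction:", predict(x))            # ~0.66 -> classified as 1
x_adv = fgsm(x, y=1, epsilon=0.4)
print("adversarial prediction:", predict(x_adv))  # ~0.28 -> flips to class 0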
References:
AI Security Management™ (AAISM) Body of Knowledge: Taxonomy of AI Threats, Adversarial Examples and Evasion; Distinction from System/Network Attacks.
AAISM Study Guide: Definitions and Threat Models in Adversarial ML; Attack–Defense Mapping for Model Predictions.