Which technique removes the effects of improperly used data from an ML system?
A. Data cleansing.
B. Model inversion.
C. Data de-duplication.
D. Model disgorgement.
The Answer Is: D
Explanation:
The correct answer is D, model disgorgement. This technique refers to removing or eliminating the influence of improperly obtained, biased, or unlawfully used data from a trained machine learning model. It is increasingly discussed in AI governance and regulatory enforcement, particularly where models have been trained on data collected without proper consent or in violation of legal requirements. Because a trained model retains learned patterns even after the underlying dataset is corrected, model disgorgement may require retraining the model from scratch on lawful data, or discarding it entirely, so that the problematic data no longer influences outputs. This aligns with accountability and compliance principles in AI governance, which require organizations to ensure lawful data use throughout the AI lifecycle. The other options fall short: data cleansing (A) and de-duplication (C) improve data quality but do not remove patterns already embedded in a trained model, while model inversion (B) is an attack that reconstructs training data from a model, not a remediation technique.
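The simplest form of disgorgement described above, retraining from scratch on only the lawfully usable records, can be sketched as follows. This is an illustrative toy, not a production implementation; the names `disgorge_and_retrain`, `is_lawful`, and the averaging "model" are assumptions chosen for clarity.

```python
# Hypothetical sketch of model disgorgement via full retraining:
# drop the improperly obtained records, then retrain so the new
# model carries no learned influence from the removed data.

def disgorge_and_retrain(records, is_lawful, train_model):
    """Filter out unlawful records and retrain from scratch."""
    clean_records = [r for r in records if is_lawful(r)]
    return train_model(clean_records)

# Toy "model": just the mean of its training values.
def train_model(data):
    return sum(data) / len(data)

# The record 100 is treated as improperly obtained and is excluded,
# so the retrained model reflects only the lawful records 1 and 2.
model = disgorge_and_retrain([1, 2, 100], lambda r: r < 50, train_model)
print(model)  # 1.5
```

The key point the sketch illustrates is that merely deleting the record from the dataset (data cleansing) would leave the old model's learned value unchanged; only retraining produces a model free of the removed data's influence.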