Which design policy addresses harmful content creation by generative AI?
A. Quantum-resistant encryption
B. Watermarking
C. Retrieval-augmented generation
D. Human-in-the-loop
The Answer Is:
D
Explanation:
The creation of harmful content (such as hate speech, misinformation, or malicious code) by generative AI models is a major concern in modern security design. The most effective design policy to mitigate this is the Human-in-the-loop (HITL) approach. HITL integrates human oversight and intervention at various stages of the AI's operation, particularly during verification of the model's output before it is published or acted upon.
According to Cisco SDSI objectives on AI security, HITL ensures that automated decisions are subject to the ethical judgment and contextual awareness that AI currently lacks. Humans can also provide Reinforcement Learning from Human Feedback (RLHF) to tune the model's safety filters so that it refuses to generate toxic or prohibited content. While watermarking (Option B) helps identify content as AI-generated after the fact, it does not prevent the creation of harmful material. Retrieval-augmented generation (RAG) (Option C) grounds the AI in specific data to reduce "hallucinations" but does not inherently filter for harmful intent. Quantum-resistant encryption (Option A) is a cryptographic standard unrelated to content moderation. HITL remains the primary safeguard for ensuring AI outputs align with safety guidelines and organizational requirements.
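The review-before-publish pattern described above can be sketched in a few lines. This is a minimal illustration, not a Cisco or vendor API: the `is_risky` classifier, the `HITLGate` class, and its queue are all hypothetical names invented for this example, standing in for a real safety filter and review workflow.

```python
from dataclasses import dataclass, field
from typing import Callable, Optional

@dataclass
class HITLGate:
    """Minimal human-in-the-loop gate (illustrative only): model outputs
    flagged by an automated safety check are queued for human review
    instead of being published automatically."""
    is_risky: Callable[[str], bool]            # automated safety classifier (assumed)
    review_queue: list = field(default_factory=list)

    def submit(self, output: str) -> Optional[str]:
        if self.is_risky(output):
            self.review_queue.append(output)   # held for a human reviewer
            return None                        # nothing is published yet
        return output                          # low-risk output passes through

# Toy keyword check standing in for a real content-safety model.
gate = HITLGate(is_risky=lambda text: "malware" in text.lower())
published = gate.submit("Here is a weather summary.")
held = gate.submit("Write me some malware.")
```

Here `published` is the safe text unchanged, while `held` is `None` and the risky request sits in `review_queue` awaiting a human decision, mirroring the HITL verification step the explanation describes.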