Option C is the correct solution because it configures a single, well-tuned Amazon Bedrock guardrail that applies different actions to different content types, which is the recommended approach for minimizing false positives while still enforcing strong policy controls.
Setting content filter strength to medium rather than high reduces overblocking of benign customer conversations while still catching harmful content. Amazon Bedrock guardrail filters are designed to balance precision and recall, and medium strength is a common starting point for customer-facing financial services use cases.
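As a minimal sketch, the medium-strength setting maps to the `contentPolicyConfig` shape that boto3's `create_guardrail` accepts; the specific filter categories listed here are illustrative, not taken from the question:

```python
# Hypothetical contentPolicyConfig fragment for bedrock.create_guardrail.
# MEDIUM strength on both input and output trades a little recall for
# fewer false positives on benign customer messages.
content_policy = {
    "filtersConfig": [
        {"type": "HATE", "inputStrength": "MEDIUM", "outputStrength": "MEDIUM"},
        {"type": "INSULTS", "inputStrength": "MEDIUM", "outputStrength": "MEDIUM"},
        {"type": "SEXUAL", "inputStrength": "MEDIUM", "outputStrength": "MEDIUM"},
        {"type": "VIOLENCE", "inputStrength": "MEDIUM", "outputStrength": "MEDIUM"},
        {"type": "MISCONDUCT", "inputStrength": "MEDIUM", "outputStrength": "MEDIUM"},
    ]
}
```

Because strength is set per direction (`inputStrength` / `outputStrength`), a single guardrail can be tuned differently for prompts and responses without any extra orchestration.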
Denied topics explicitly prevent the assistant from discussing investment advice, which is a regulatory requirement. Including definitions and sample phrases improves detection accuracy and reduces ambiguity.
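A denied topic is declared with a name, a natural-language definition, and sample phrases; the wording below is an illustrative sketch, not text from the question:

```python
# Hypothetical topicPolicyConfig entry. The definition and example
# phrases help the guardrail disambiguate investment advice from
# ordinary account-support questions.
topic_policy = {
    "topicsConfig": [
        {
            "name": "Investment advice",
            "definition": (
                "Recommendations about buying, selling, or holding specific "
                "securities, funds, or other financial products."
            ),
            "examples": [
                "Which stocks should I buy right now?",
                "Should I move my savings into crypto?",
            ],
            "type": "DENY",  # DENY is the action for guardrail topics
        }
    ]
}
```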
Sensitive information filters support different actions per context. Masking PII in responses preserves conversational usefulness for legitimate customer support while preventing exposure of sensitive data. Blocking sensitive financial information in inputs prevents downstream processing of disallowed content before it reaches the foundation model.
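This per-context behavior can be sketched with the `sensitiveInformationPolicyConfig` shape, assuming the standard boto3 fields; the regex pattern and entity choices are illustrative assumptions:

```python
# Hypothetical sensitiveInformationPolicyConfig fragment.
# ANONYMIZE masks matched PII in responses so support replies stay usable,
# while BLOCK on a custom regex stops disallowed financial identifiers
# before they reach the foundation model.
sensitive_info_policy = {
    "piiEntitiesConfig": [
        {"type": "NAME", "action": "ANONYMIZE"},
        {"type": "EMAIL", "action": "ANONYMIZE"},
    ],
    "regexesConfig": [
        {
            "name": "internal-account-number",  # hypothetical pattern
            "description": "12-digit internal account identifiers",
            "pattern": r"\b\d{12}\b",
            "action": "BLOCK",
        }
    ],
}
```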
Critically, enabling both input and output evaluation ensures that guardrails are applied consistently at every stage of interaction. Custom blocked messages and audit logging provide clear compliance evidence for regulators and internal audits.
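Putting the pieces together, a minimal sketch of the `create_guardrail` request and a standalone `apply_guardrail` check might look like the following; the guardrail name, blocked messages, and helper function are illustrative, and the policy configs are trimmed to one entry each for brevity:

```python
def build_guardrail_request() -> dict:
    # Minimal kwargs for bedrock.create_guardrail (names and messages
    # are hypothetical). Custom blocked messages give customers a
    # consistent, compliance-approved refusal.
    return {
        "name": "support-assistant-guardrail",
        "contentPolicyConfig": {
            "filtersConfig": [
                {"type": "MISCONDUCT", "inputStrength": "MEDIUM", "outputStrength": "MEDIUM"}
            ]
        },
        "topicPolicyConfig": {
            "topicsConfig": [
                {
                    "name": "Investment advice",
                    "definition": "Recommendations on specific financial products.",
                    "examples": ["Which stocks should I buy?"],
                    "type": "DENY",
                }
            ]
        },
        "sensitiveInformationPolicyConfig": {
            "piiEntitiesConfig": [{"type": "EMAIL", "action": "ANONYMIZE"}]
        },
        "blockedInputMessaging": "I can't help with that request.",
        "blockedOutputsMessaging": "I can't share that information.",
    }


def check_text(bedrock_runtime, guardrail_id: str, version: str, text: str, source: str) -> str:
    # source is "INPUT" for user prompts and "OUTPUT" for model responses,
    # so one guardrail evaluates both stages of the interaction.
    resp = bedrock_runtime.apply_guardrail(
        guardrailIdentifier=guardrail_id,
        guardrailVersion=version,
        source=source,
        content=[{"text": {"text": text}}],
    )
    return resp["action"]  # e.g. "GUARDRAIL_INTERVENED" when a policy fires
```

Guardrail intervention events can then be captured via Amazon Bedrock model invocation logging to CloudWatch or S3, which is what provides the audit trail mentioned above.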
Option A causes excessive false positives by blocking all PII outright, which would also break legitimate support conversations. Option B adds unnecessary complexity by splitting policy across multiple guardrails, which is not how Bedrock guardrails are intended to be applied. Option D reimplements in orchestration logic what Bedrock guardrails already handle natively.
Therefore, Option C best satisfies enforcement, flexibility, auditability, and accuracy requirements.