AWS recommends Amazon SageMaker Model Monitor as the native service for detecting data quality drift, model quality drift, and bias drift in deployed ML models. With data capture enabled on the endpoint, Model Monitor continuously compares incoming inference data against a baseline computed from the training dataset.
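As a rough sketch of what configuring that comparison looks like, the following builds the request body for the SageMaker `CreateMonitoringSchedule` API (callable via `boto3` as `sagemaker.create_monitoring_schedule(**request)`). The S3 paths, endpoint name, container image URI, and role ARN are all placeholder assumptions; the field names follow the boto3 API shape.

```python
# Minimal sketch of a Model Monitor data-quality schedule request.
# All argument values (endpoint name, S3 URIs, image URI, role ARN) are
# hypothetical; field names follow the boto3 CreateMonitoringSchedule API.
def build_monitoring_schedule_request(endpoint_name, baseline_s3, output_s3,
                                      image_uri, role_arn):
    """Return the request body for sagemaker.create_monitoring_schedule()."""
    return {
        "MonitoringScheduleName": f"{endpoint_name}-data-quality",
        "MonitoringScheduleConfig": {
            # Compare captured inference data against the baseline hourly.
            "ScheduleConfig": {"ScheduleExpression": "cron(0 * ? * * *)"},
            "MonitoringJobDefinition": {
                # Baseline constraints/statistics produced during training.
                "BaselineConfig": {
                    "ConstraintsResource": {"S3Uri": f"{baseline_s3}/constraints.json"},
                    "StatisticsResource": {"S3Uri": f"{baseline_s3}/statistics.json"},
                },
                "MonitoringInputs": [{
                    "EndpointInput": {
                        "EndpointName": endpoint_name,
                        "LocalPath": "/opt/ml/processing/input",
                    }
                }],
                "MonitoringOutputConfig": {"MonitoringOutputs": [{
                    "S3Output": {
                        "S3Uri": output_s3,
                        "LocalPath": "/opt/ml/processing/output",
                    }
                }]},
                "MonitoringResources": {"ClusterConfig": {
                    "InstanceCount": 1,
                    "InstanceType": "ml.m5.xlarge",
                    "VolumeSizeInGB": 20,
                }},
                "MonitoringAppSpecification": {"ImageUri": image_uri},
                "RoleArn": role_arn,
            },
        },
    }
```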
When Model Monitor detects drift beyond configured thresholds, it publishes violation metrics to Amazon CloudWatch. A CloudWatch alarm on those metrics can be routed through Amazon EventBridge to invoke an AWS Lambda function, which is a common AWS-documented pattern for orchestrating automated workflows such as model retraining.
This Lambda function can then initiate a SageMaker Pipeline execution, starting a retraining job with updated data. This architecture aligns with AWS best practices for building automated, event-driven ML pipelines.
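A minimal sketch of that Lambda function is shown below. It assumes an EventBridge rule forwards "CloudWatch Alarm State Change" events for the drift alarm; the pipeline name and event shape are illustrative assumptions, and the SageMaker client is injectable so the handler can be unit-tested without AWS credentials.

```python
# Hypothetical Lambda handler: assumes an EventBridge rule forwards
# "CloudWatch Alarm State Change" events for the Model Monitor drift alarm.
# PIPELINE_NAME is an illustrative placeholder.
import json

PIPELINE_NAME = "retraining-pipeline"  # hypothetical pipeline name

def lambda_handler(event, context, sagemaker_client=None):
    """Start the retraining pipeline when the drift alarm enters ALARM state."""
    state = event.get("detail", {}).get("state", {}).get("value")
    if state != "ALARM":
        # Ignore OK / INSUFFICIENT_DATA transitions.
        return {"statusCode": 200, "body": f"No action: alarm state is {state}"}

    if sagemaker_client is None:  # created lazily so the handler is testable
        import boto3
        sagemaker_client = boto3.client("sagemaker")

    response = sagemaker_client.start_pipeline_execution(
        PipelineName=PIPELINE_NAME,
        PipelineExecutionDisplayName="drift-triggered-retrain",
    )
    return {
        "statusCode": 200,
        "body": json.dumps({"executionArn": response.get("PipelineExecutionArn", "")}),
    }
```

Injecting the client rather than constructing it at module scope keeps the handler deterministic in tests while remaining a one-liner to deploy.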
Option A is incorrect because AWS Glue is designed for data cataloging and ETL, not for ML-specific drift detection. Option B is unnecessary and overly complex for this use case. Option D is incorrect because Amazon QuickSight anomaly detection is intended for business intelligence analytics, not ML model monitoring.
AWS documentation positions SageMaker Model Monitor with CloudWatch-driven Lambda automation as the recommended approach for continuous ML monitoring and retraining.
Therefore, Option C is the correct and AWS-verified answer.