The evaluation stage of the generative AI model lifecycle involves testing the model to assess its performance against metrics such as accuracy and coherence. This stage ensures the model meets the desired quality standards before deployment.
Exact Extract from AWS AI Documents:
From the AWS AI Practitioner Learning Path:
"The evaluation phase in the machine learning lifecycle involves testing the model against validation or test datasets to measure its performance metrics, such as accuracy, precision, recall, or task-specific metrics for generative AI models."
(Source: AWS AI Practitioner Learning Path, Module on Machine Learning Lifecycle)
Detailed Explanation:
Option A: Deployment. Deployment involves making the model available for use in production. While monitoring occurs post-deployment, accuracy testing is performed earlier, in the evaluation stage.
Option B: Data selection. Data selection involves choosing and preparing data for training, not testing the model's accuracy.
Option C: Fine-tuning. Fine-tuning adjusts a pre-trained model to improve performance on a specific task, but it is not the stage where accuracy is formally tested.
Option D: Evaluation. This is the correct answer. The evaluation stage is where tests are conducted to measure the model's accuracy and other performance metrics against a held-out validation or test dataset, ensuring it meets requirements.
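To make the evaluation stage concrete, the sketch below shows the kind of work it involves: scoring a trained model on a held-out test set to obtain metrics such as accuracy, precision, and recall. This is a minimal illustration assuming scikit-learn and a generic classifier; it is not an AWS- or SageMaker-specific API.

```python
# Minimal sketch of the evaluation stage: score a trained model on held-out data.
# Assumes scikit-learn; the dataset and model choice are illustrative only.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, precision_score, recall_score
from sklearn.model_selection import train_test_split

# Split the data so evaluation uses examples the model never saw during training.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Training belongs to an earlier lifecycle stage; shown here only to produce a model.
model = LogisticRegression(max_iter=5000)
model.fit(X_train, y_train)

# Evaluation stage: compute performance metrics on the test set.
y_pred = model.predict(X_test)
print(f"accuracy:  {accuracy_score(y_test, y_pred):.3f}")
print(f"precision: {precision_score(y_test, y_pred):.3f}")
print(f"recall:    {recall_score(y_test, y_pred):.3f}")
```

For generative AI models, the same stage applies, but the metrics are task-specific (for example, coherence or factuality scores) rather than classification metrics like those above.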
References:
AWS AI Practitioner Learning Path: Module on Machine Learning Lifecycle
Amazon SageMaker Developer Guide: Model Evaluation (https://docs.aws.amazon.com/sagemaker/latest/dg/model-evaluation.html)
AWS Documentation: Generative AI Lifecycle (https://aws.amazon.com/machine-learning/)