Comprehensive and detailed explanation, based on the AWS AI documentation:
The correct technique is fine-tuning, which is explicitly supported by Amazon Bedrock for customizing foundation models using high-quality labeled datasets.
Fine-tuning involves:
Starting with a pre-trained foundation model
Training it further using domain-specific, labeled data
Improving accuracy for specialized tasks, such as product classification, image-based understanding, and specification extraction
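As a rough illustration of how this looks in practice, the sketch below submits a Bedrock model customization job of type FINE_TUNING using boto3. The job name, model names, IAM role ARN, S3 URIs, and hyperparameter values are all hypothetical placeholders, not values drawn from the question.

```python
import boto3

bedrock = boto3.client("bedrock", region_name="us-east-1")

# Submit a fine-tuning job against a pre-trained foundation model.
# All names, ARNs, and S3 URIs below are hypothetical placeholders.
response = bedrock.create_model_customization_job(
    jobName="product-catalog-finetune-job",
    customModelName="product-catalog-classifier",
    roleArn="arn:aws:iam::123456789012:role/BedrockCustomizationRole",
    baseModelIdentifier="amazon.titan-text-express-v1",
    customizationType="FINE_TUNING",
    trainingDataConfig={"s3Uri": "s3://example-bucket/train/data.jsonl"},
    outputDataConfig={"s3Uri": "s3://example-bucket/output/"},
    hyperParameters={
        "epochCount": "3",
        "batchSize": "1",
        "learningRate": "0.00001",
    },
)
print(response["jobArn"])
```

The job trains further on the labeled data in the training S3 location and produces a custom model that can then be provisioned for inference.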
In this use case:
The company has labeled data
They want to customize model behavior
They require high accuracy and domain adaptation
These conditions match the definition of fine-tuning, not prompt-only methods.
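For context on what "labeled data" means here: Bedrock fine-tuning consumes training examples as JSON Lines records with prompt and completion fields. The sketch below writes a couple of such records; the product descriptions and category labels are invented for illustration.

```python
import json

# Hypothetical labeled records in the prompt/completion JSONL format
# that Bedrock fine-tuning expects; contents are invented examples.
examples = [
    {"prompt": "Classify: 18V cordless drill, 2Ah battery",
     "completion": "Power Tools > Drills"},
    {"prompt": "Classify: 27in 4K IPS monitor, 60Hz",
     "completion": "Electronics > Monitors"},
]

with open("data.jsonl", "w") as f:
    for record in examples:
        f.write(json.dumps(record) + "\n")
```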
Why the other options are incorrect:
A. Continued pre-training adapts a model using large volumes of unlabeled domain data; it does not exploit the high-quality labeled examples the company already has, so it is the wrong fit for this use case.
B. Creating an agent orchestrates model interactions and tools but does not customize the model’s learned parameters.
D. Prompt engineering improves responses through prompt design but does not modify the underlying model weights, making it insufficient for deep domain adaptation.
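To make the contrast with option D concrete, the sketch below applies prompt engineering through the Bedrock Converse API: all of the customization lives in the request payload, and the model's learned weights are never changed. The model ID, prompt wording, and inference settings are assumptions for illustration.

```python
import boto3

runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

# Prompt engineering: the only lever is the input text itself.
# The underlying model parameters are never modified.
response = runtime.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # hypothetical choice
    messages=[{
        "role": "user",
        "content": [{
            "text": "You are a product-catalog expert. "
                    "Classify this item: 18V cordless drill, 2Ah battery."
        }],
    }],
    inferenceConfig={"temperature": 0.2, "maxTokens": 100},
)
print(response["output"]["message"]["content"][0]["text"])
```

This can raise response quality for simple cases, but because nothing is learned from the labeled dataset, it cannot deliver the deep domain adaptation the scenario requires.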
AWS AI documentation references (for the exact extracts):
Amazon Bedrock Documentation — section on Model customization and fine-tuning
AWS Generative AI Study Guide — comparison of prompt engineering vs fine-tuning
Foundation Models on AWS — explanation of fine-tuning with labeled datasets