In which scenario is soft prompting appropriate compared to other training styles?
A. When there is a significant amount of labeled, task-specific data available
B. When the model needs to be adapted to perform well in a domain on which it was not originally trained
C. When there is a need to add learnable parameters to a Large Language Model (LLM) without task-specific training
D. When the model requires continued pretraining on unlabeled data
The Answer Is:
C
Explanation:
Soft prompting adds a small set of trainable parameters (the soft prompt vectors) to adapt an LLM while its core weights stay frozen, making it ideal for low-resource customization that does not require full task-specific training of the model itself. This makes Option C correct. Option A is the classic case for fine-tuning, which exploits large labeled datasets. Option B usually calls for more than soft prompting, such as domain-adaptive fine-tuning. Option D describes continued pretraining, not soft prompting. Because only the prompt parameters are updated, soft prompting is an efficient way to specialize a model for a given task.
The OCI 2025 Generative AI documentation likely covers soft prompting under parameter-efficient fine-tuning (PEFT) methods.
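To make the mechanism concrete, below is a minimal sketch of soft prompting in plain PyTorch. It is illustrative only, not Oracle's or any specific library's implementation: the names SoftPromptWrapper, embed_dim, and num_virtual_tokens are hypothetical, and the tiny nn.Sequential stands in for a real pretrained model. The key point it demonstrates is that the base model's weights are frozen and only the prepended prompt embeddings are trainable.

import torch
import torch.nn as nn

class SoftPromptWrapper(nn.Module):
    # Prepends trainable soft-prompt vectors to a frozen model's input
    # embeddings; only the prompt vectors receive gradient updates.
    # (Hypothetical class for illustration; not a library API.)
    def __init__(self, base_model, embed_dim, num_virtual_tokens=8):
        super().__init__()
        self.base_model = base_model
        for param in self.base_model.parameters():
            param.requires_grad = False  # core weights stay frozen
        # The only new learnable parameters: one embedding per virtual token.
        self.soft_prompt = nn.Parameter(
            torch.randn(num_virtual_tokens, embed_dim) * 0.02
        )

    def forward(self, input_embeds):
        # input_embeds: (batch, seq_len, embed_dim)
        batch_size = input_embeds.size(0)
        prompt = self.soft_prompt.unsqueeze(0).expand(batch_size, -1, -1)
        # Virtual tokens are concatenated in front of the real input.
        return self.base_model(torch.cat([prompt, input_embeds], dim=1))

# Toy stand-in for a pretrained LLM body (illustrative only).
base = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 64))
model = SoftPromptWrapper(base, embed_dim=64, num_virtual_tokens=8)
out = model(torch.randn(2, 10, 64))  # output shape: (2, 18, 64)
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(out.shape, trainable)  # only 8 * 64 = 512 trainable parameters

In practice, libraries such as Hugging Face PEFT expose the same idea through prompt-tuning configurations, so the wrapper rarely needs to be hand-rolled; the sketch simply shows why soft prompting counts as adding learnable parameters without retraining the model.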