When deploying LLMs in production, what is a common strategy for parameter-efficient fine-tuning?
A. Using external reinforcement learning to adjust the model's parameters dynamically.
B. Freezing the majority of model parameters and only updating a small subset relevant to the task.
C. Training the model from scratch on the target task to achieve optimal performance.
D. Implementing multiple independent models for each specific task instead of fine-tuning a single model.
The Answer Is: B
Explanation:
Parameter-efficient fine-tuning (PEFT) strategies such as LoRA or adapters freeze most pretrained parameters and train only lightweight modules, reducing computational cost while adapting the model to new tasks. This preserves general knowledge, prevents catastrophic forgetting, and enables quick deployment in resource-constrained settings. For LLMs it is crucial for production efficiency, allowing specialization without retraining billions of parameters. From a security standpoint, updating fewer parameters also limits the model's exposure to risks introduced by new training data. Exact extract: "A common strategy is freezing the majority of model parameters and updating only a small task-relevant subset, ensuring efficiency in fine-tuning for production deployment." (Reference: Cyber Security for AI by SISA Study Guide, Section on Efficient Fine-Tuning in SDLC, Pages 90-92.)
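To make option B concrete, here is a minimal PyTorch sketch of what "freezing the majority of parameters" looks like with a LoRA-style adapter. The `LoRALinear` wrapper, the 4096-dimensional layer, and the rank/alpha values are illustrative assumptions for this sketch, not taken from the study guide.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen linear layer with a trainable low-rank update:
    y = W x + (alpha / r) * B A x. Only A and B receive gradients."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():  # freeze the pretrained weights
            p.requires_grad = False
        # A: small random init; B: zeros, so the update starts at zero.
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scale

# Hypothetical "pretrained" projection standing in for one layer of an LLM.
layer = LoRALinear(nn.Linear(4096, 4096))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable: {trainable:,} / {total:,} ({100 * trainable / total:.2f}%)")

# The optimizer sees only the lightweight adapter parameters.
opt = torch.optim.AdamW(
    [p for p in layer.parameters() if p.requires_grad], lr=1e-4
)
```

At rank 8, the two low-rank factors account for well under 1% of the layer's parameters, which is exactly the efficiency the answer describes: the pretrained weights stay fixed, and only the small task-relevant subset is updated.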