Transfer learning is a machine learning technique that reuses knowledge gained from previous tasks to improve training efficiency and performance on new, related tasks. AWS documentation explains that transfer learning lets a model start from pretrained weights or learned representations, reducing both training time and the amount of labeled data required.
In this retail application scenario, the company wants to leverage information from prior tasks to increase learning speed, which is a defining characteristic of transfer learning. AWS emphasizes that transfer learning is especially effective when tasks are related, such as customer behavior analysis, product recommendations, or demand forecasting.
By initializing a model with learned features from an existing task, transfer learning enables faster convergence and improved accuracy compared to training from scratch. AWS frequently recommends this approach when computational efficiency and rapid iteration are important.
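For illustration, a minimal sketch of this pattern in Keras is shown below. The pretrained backbone, input shape, and number of product categories are assumptions made for the example, not details from the scenario; the key idea is freezing pretrained weights and training only a small new head.

```python
import tensorflow as tf

# Load a backbone pretrained on ImageNet, dropping its original classification head.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet"
)
base.trainable = False  # Freeze pretrained weights so prior knowledge is reused as-is.

# Add a small head that is trained from scratch for the new retail task.
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation="softmax"),  # 10 hypothetical product categories
])

model.compile(
    optimizer="adam",
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)

# model.fit(product_images, product_labels, epochs=5)  # Only the new head's weights update.
```

Because only the small head is trained while the frozen backbone supplies pretrained features, the model typically converges in far fewer epochs than an equivalent network trained from scratch, which is exactly the learning speed-up the scenario calls for.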
The other options do not satisfy the requirement. Supervised learning describes training on labeled data but does not by itself reuse knowledge from prior tasks. Hyperparameter tuning optimizes training configuration but does not transfer what was learned on previous tasks. Regularization techniques reduce overfitting but do not accelerate learning through knowledge reuse.
AWS documentation positions transfer learning as a foundational concept in modern ML workflows, particularly for retail, personalization, and natural language processing use cases. Therefore, transfer learning is the correct solution.