Comprehensive and Detailed Explanation (from UiPath Agentic Automation documentation):
The correct approach is C, as it best reflects the few-shot prompting pattern, a well-documented and recommended technique in both UiPath Autopilot™ and broader agentic AI design for improving intent-classification accuracy.
In UiPath Agentic Automation, especially in prompt engineering, few-shot examples serve to "ground" the Large Language Model (LLM) with task-specific context. Providing structured input-output pairs (as shown in option C) allows the model to learn from the context and mirror the expected output more reliably, enhancing classification precision.
For instance, UiPath recommends using clearly formatted training examples in this structure:
Input: "[Text]"
Output: "[Label]"
This aligns with UiPath’s guidance under the Prompt Engineering Framework, which highlights that using few-shot exemplars with clear task demonstrations significantly improves model performance over zero-shot or ambiguous input formats (as in options A or B). Option D also underperforms due to insufficient grounding.
UiPath emphasizes the importance of label clarity, format consistency, and explicit instruction, all of which are satisfied in option C. This method also supports prompt generalization to new inputs by modeling how categorization should happen, not just what categories exist.
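The Input/Output exemplar format above can be assembled programmatically. The sketch below is illustrative only: the function name, category labels, and example emails are assumptions for demonstration, not part of any UiPath API.

```python
# Sketch: build a few-shot intent-classification prompt from
# structured Input/Output pairs. Labels and examples are hypothetical.

def build_few_shot_prompt(examples, new_input):
    """Assemble a few-shot prompt that demonstrates the expected format."""
    lines = ['Classify each email into one of: Billing, Technical, Spam.', '']
    for text, label in examples:
        lines.append(f'Input: "{text}"')
        lines.append(f'Output: "{label}"')
        lines.append('')
    # The unlabeled input goes last; the trailing "Output:" cue
    # prompts the model to mirror the demonstrated format.
    lines.append(f'Input: "{new_input}"')
    lines.append('Output:')
    return '\n'.join(lines)

examples = [
    ("My invoice shows a duplicate charge.", "Billing"),
    ("The app crashes when I open settings.", "Technical"),
    ("You won a free cruise, click here!", "Spam"),
]
prompt = build_few_shot_prompt(examples, "I was charged twice this month.")
print(prompt)
```

Note how each exemplar pairs a concrete input with its label in a consistent format, which is exactly the grounding that distinguishes few-shot from zero-shot prompting.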
This technique is crucial in real-world agentic workflows where LLMs handle noisy, unstructured data (such as emails) and are expected to trigger appropriate downstream actions such as ticket creation, escalation, or filtering.
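To show how a predicted label can drive those downstream actions, here is a minimal dispatch sketch. The label-to-action mapping and action names are hypothetical, not UiPath activity names.

```python
# Hypothetical mapping from predicted intent labels to downstream actions.
ACTIONS = {
    "Billing": "create_ticket",
    "Technical": "escalate",
    "Spam": "filter",
}

def route(label, default="create_ticket"):
    """Return the downstream action for a predicted label.

    LLM output can be noisy (extra whitespace, surrounding quotes),
    so normalize it and fall back to a safe default for unknown labels
    rather than leaving an email unhandled.
    """
    return ACTIONS.get(label.strip().strip('"'), default)

print(route('"Spam"'))      # quoted LLM output still resolves: "filter"
print(route("Technical"))   # "escalate"
```

Normalizing the model's raw label before dispatch is a small but important step, since few-shot outputs often echo the quoted format of the exemplars.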