The core requirement is to guarantee that the chatbot only uses information from the company's official documentation and does not rely on its general knowledge base. This is crucial for ensuring factual accuracy, relevance to the company's specific products, and preventing the generation of fabricated or incorrect information (hallucinations).
The specific technique designed to address this challenge is Grounding. Grounding is the process of connecting the Large Language Model's (LLM's) responses to a trusted, verifiable source of information, such as an organization's internal documents, databases, or live data feeds. When an LLM is grounded, it is forced to base its answers only on the provided context, effectively preventing it from drawing on its broad, generalized training data. Grounding is often implemented using a method called Retrieval-Augmented Generation (RAG), particularly with tools like Google Cloud's Vertex AI Search, which indexes the official documentation and feeds the relevant snippets to the model.
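The RAG flow described above can be sketched in a few lines: retrieve the most relevant documentation snippets, then assemble a prompt that instructs the model to answer only from that context. This is a minimal illustration, not a real system; the naive keyword-overlap retriever and the prompt wording are stand-ins for a production retrieval service such as Vertex AI Search.

```python
# Minimal sketch of grounding via Retrieval-Augmented Generation (RAG).
# Assumptions: a tiny in-memory document list and naive keyword-overlap
# scoring stand in for a real vector or search index.

def retrieve(query: str, docs: list[str], top_k: int = 2) -> list[str]:
    """Rank docs by word overlap with the query; return the top_k snippets."""
    query_words = set(query.lower().split())
    scored = sorted(
        docs,
        key=lambda d: len(query_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_grounded_prompt(query: str, snippets: list[str]) -> str:
    """Build a prompt that restricts the model to the retrieved context."""
    context = "\n".join(f"- {s}" for s in snippets)
    return (
        "Answer ONLY using the context below. If the answer is not in the "
        "context, say you don't know.\n"
        f"Context:\n{context}\n"
        f"Question: {query}\n"
    )

# Hypothetical documentation snippets for illustration only.
docs = [
    "Product X supports exporting reports as CSV and PDF.",
    "Product Y requires an enterprise license for SSO.",
    "Refunds are processed within 14 business days.",
]

question = "How are reports exported in Product X?"
prompt = build_grounded_prompt(question, retrieve(question, docs))
print(prompt)
```

The key design point is the instruction in `build_grounded_prompt`: by telling the model to answer only from the supplied snippets (and to admit when the context is insufficient), the response is anchored to the official documentation rather than the model's general training data.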
Options A, B, and C address different aspects of model behavior: role prompting sets the model's persona, adjusting temperature controls the randomness (and thus creativity) of the output, and prompt chaining breaks a task into a sequence of linked prompts. None of these techniques restricts the model's source of truth to the official documentation. Therefore, Grounding is the correct and most effective technique for this requirement.
===========