The correct answer is A. Accidental loss of internal data
Unauthorized use of public LLMs creates a risk that employees may paste sensitive company information into external AI services. This can include internal documents, source code, customer data, security details, architecture diagrams, incident information, or confidential business content.
Because the LLM services are not approved by IT, the organization may not have controls for data handling, retention, monitoring, contractual protection, or data loss prevention. The broadest and best description of the risk is accidental loss of internal data.
B is incorrect because public disclosure of intellectual property is possible, but it is a narrower example of internal data loss.
C is incorrect because employee credentials could be exposed, but the question does not indicate credential theft or active exfiltration.
D is incorrect because prompt injection is an attack against LLM behavior. The scenario describes unauthorized use of public LLM services, not manipulation of an LLM through malicious prompts.
In PenTest+ terms, this falls under Information Gathering and Vulnerability Scanning, specifically identifying unauthorized services, shadow IT, data exposure risks, and AI/LLM-related security concerns.