The correct answer is C. Prompt injection
Prompt injection occurs when an attacker crafts input that causes an AI system to ignore, bypass, or override its intended instructions, safety rules, or output restrictions. In this scenario, the chatbot accepts harmful or policy-violating variations, including malicious links, encoded data leakage, and abusive content. These are indicators that user-supplied prompts can manipulate the chatbot’s behavior.
A is incorrect because container escape involves breaking out of an isolated container environment to access the host or other containers. The question is about manipulating chatbot responses, not container isolation.
B is incorrect because output fuzzing is a testing technique that varies inputs to observe how outputs change. It may be used to discover the issue, but it is not the vulnerability being described.
D is incorrect because model manipulation generally refers to changing or poisoning the model itself, such as altering training data, weights, or behavior at the model level. The scenario describes malicious user inputs affecting responses, which is prompt injection.
In PenTest+ terms, this falls under Attacks and Exploits, specifically AI/LLM-related attacks, prompt injection, and testing for unsafe chatbot behavior.