A Kubernetes Service is the abstraction that defines a logical set of Pods and the policy for accessing them, so C is correct. Pods are ephemeral: their IPs change as they are recreated, rescheduled, or scaled. A Service solves this by providing a stable endpoint (DNS name and virtual IP) and routing rules that send traffic to the current healthy Pods backing the Service.
A Service typically uses a label selector to identify which Pods belong to it. Kubernetes then maintains endpoint data (Endpoints/EndpointSlice objects) for those Pods, and the cluster dataplane (kube-proxy or an eBPF-based implementation) forwards traffic arriving at the Service IP/port to one of the backend Pod IPs. This is what the question means by “logical set of Pods” and “policy by which to access them”: the selector determines which Pods receive traffic, while settings such as session affinity, the port-to-targetPort mapping, and the dataplane’s load-distribution behavior determine how they receive it.
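As an illustration, a minimal Service manifest might look like the sketch below. The name `web`, the `app: web` label, and the port numbers are assumptions chosen for the example, not values given in the question.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web              # gives Pods a stable DNS name: web.<namespace>.svc.cluster.local
spec:
  type: ClusterIP        # default type: internal virtual IP only
  selector:
    app: web             # the "logical set of Pods": any Pod labeled app=web
  ports:
    - name: http
      port: 80           # port clients use on the Service IP / DNS name
      targetPort: 8080   # port the backing containers actually listen on
  sessionAffinity: None  # access-policy knob; ClientIP would pin a client to one Pod
```

Pods matching `app: web` are tracked in EndpointSlices automatically; clients only ever address `web:80`, regardless of how many Pods exist or what their current IPs are.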
Option A (Selector) is only the query mechanism that Services and controllers use to find Pods; it is not itself the access abstraction. Option B (Controller) is too generic: controllers reconcile desired state but do not by themselves provide a stable network endpoint or access policy. Option D (Job) manages run-to-completion workloads and is unrelated to network access.
Services can be exposed in different ways: ClusterIP (internal), NodePort, LoadBalancer, and ExternalName. Regardless of type, the core Service concept remains: stable access to a dynamic set of Pods. This is foundational to Kubernetes networking and microservice communication, and it is why Service discovery via DNS works effectively across rolling updates and scaling events.
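For comparison, exposing the same backend outside the cluster only changes the `type` field; the selector and port mapping stay the same. The names below are again illustrative assumptions.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-public
spec:
  type: LoadBalancer   # or NodePort; ExternalName instead aliases an external DNS name
  selector:
    app: web           # same logical set of Pods; only the exposure method differs
  ports:
    - port: 80
      targetPort: 8080
```

Inside the cluster, either variant remains reachable at a DNS name of the form `<name>.<namespace>.svc.cluster.local`, which is what keeps discovery stable across rolling updates and scaling events.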
Thus, the correct answer is Service (C).
=========