To retrieve CPU and memory consumption for nodes or Pods, you use kubectl top, so C is correct. kubectl top nodes shows per-node resource usage, and kubectl top pods shows per-Pod (and optionally per-container) usage. This data comes from the Kubernetes resource metrics pipeline, most commonly metrics-server, which scrapes kubelet/cAdvisor stats and exposes them via the metrics.k8s.io API.
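A few typical invocations (the namespace names and the per-container flag shown here are illustrative; output columns will look roughly like NAME, CPU(cores), MEMORY(bytes)):

    kubectl top nodes                        # CPU and memory usage for every node
    kubectl top pods -n kube-system          # per-Pod usage within one namespace
    kubectl top pods --containers -n my-app  # break usage down per container (my-app is a placeholder namespace)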
It’s important to recognize that kubectl top provides current resource usage snapshots, not long-term historical trending. For long-term metrics and alerting, clusters typically use Prometheus and related tooling. But for quick operational checks—“Is this Pod CPU-bound?” “Are nodes near memory saturation?”—kubectl top is the built-in day-to-day tool.
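For quick triage it helps to sort the snapshot by consumption; the --sort-by flag is supported by reasonably recent kubectl releases:

    kubectl top pods -A --sort-by=cpu       # busiest Pods across all namespaces
    kubectl top pods -A --sort-by=memory    # largest memory consumers
    kubectl top nodes --sort-by=memory      # nodes closest to memory saturation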
Option A (kubectl cluster-info) prints the addresses of the control plane and core cluster services such as CoreDNS, not resource usage. Option B (kubectl version) prints client and server version information. Option D (kubectl api-resources) lists the resource types available in the cluster. None of these report CPU or memory usage.
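For comparison, this is roughly what each of those commands reports (output abbreviated and cluster-specific):

    kubectl cluster-info     # e.g. "Kubernetes control plane is running at https://..."
    kubectl version          # client and server version strings
    kubectl api-resources    # table of kinds, API groups, short names, and whether each is namespaced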
In observability practice, kubectl top is often used during incidents to correlate symptoms with resource pressure. For example, if a node is high on memory, you might see Pods being OOMKilled or the kubelet evicting Pods under pressure. Similarly, sustained high CPU utilization might explain latency spikes or throttling if limits are set. Note that kubectl top requires metrics-server (or an equivalent provider) to be installed and functioning; otherwise it may return errors like “metrics not available.”
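If kubectl top fails, it is worth confirming that the metrics API is actually being served; the deployment name and namespace below match a default metrics-server install but may differ in managed clusters, and <pod-name> is a placeholder:

    kubectl get apiservice v1beta1.metrics.k8s.io          # should show AVAILABLE as True
    kubectl get deployment metrics-server -n kube-system   # default install location; may vary
    kubectl describe pod <pod-name> | grep -i oomkilled    # confirm a suspected OOMKill during an incident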
So, the correct command for retrieving node/Pod CPU and memory usage is kubectl top.
=========