The CrashLoopBackOff state in Kubernetes indicates that a container inside a Pod is repeatedly starting, crashing, and then being restarted by the kubelet with increasing backoff delays. This is typically caused by application-level issues such as misconfiguration, missing environment variables, failed startup commands, application crashes, or incorrect container images. Proper troubleshooting focuses on identifying why the container is failing shortly after startup.
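As a quick illustration, the condition usually shows up first in the Pod listing. Assuming a hypothetical Pod named web-app-7d9f6c5b8-x2k4q, the output might look something like this:

    kubectl get pods
    NAME                      READY   STATUS             RESTARTS   AGE
    web-app-7d9f6c5b8-x2k4q   0/1     CrashLoopBackOff   5          3m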
The most effective and recommended first step is to run kubectl describe pod <pod-name>. This command provides detailed information about the Pod, including its current state, restart count, container statuses, and, most importantly, the Events section. Events often reveal critical clues such as image pull errors, failed health checks, permission issues, or failed command executions. These messages are generated by Kubernetes components and are essential for understanding the failure context.
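For example, using the same hypothetical Pod name, an abbreviated describe output for a crashing container might resemble the following (exact fields vary by Kubernetes version):

    kubectl describe pod web-app-7d9f6c5b8-x2k4q

    Last State:   Terminated
      Reason:     Error
      Exit Code:  1
    Events:
      Warning  BackOff  kubelet  Back-off restarting failed container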
After reviewing the events, the next step is to inspect the container's logs using kubectl logs <pod-name>. Container logs capture application output written to standard output and standard error. For a crashing container, these logs often show stack traces, configuration errors, or explicit failure messages that explain why the process exited. If the container restarts too quickly to be inspected live, logs from the previous run can be retrieved with the --previous flag.
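For instance, with the same hypothetical Pod name, the logs from the current and previous container runs could be pulled as shown below (adding -c <container-name> if the Pod runs more than one container):

    kubectl logs web-app-7d9f6c5b8-x2k4q
    kubectl logs web-app-7d9f6c5b8-x2k4q --previous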
Option A is incorrect because kubectl exec usually fails when a container is repeatedly crashing, and /var/log/kubelet.log is a node-level log that is not accessible from inside the container. Option C is incorrect because reapplying the Pod manifest does not address the underlying cause of the crash. Option D focuses on resource usage and scaling, which does not resolve application startup failures.
Therefore, the correct and verified answer is Option B, which aligns with Kubernetes documentation and best practices for diagnosing CrashLoopBackOff conditions.