Kubernetes & Helm apps debugging 101
We’ve recently ported our application from an Ansible & Docker Compose deployment to Kubernetes & Helm. Although the change brings almost only positives (at least from my point of view), I have to admit that debugging is a little more challenging for someone with zero Kubernetes experience. Since I don’t believe it’s necessarily more complicated, I’ve decided to compile a list of useful commands and workflows for debugging the most common issues.
helm template chart-name chart-path -f values.yaml --debug
Helm is usually (unlike Ansible) excellent at telling you what’s wrong with your syntax. But suppose not all components are deployed correctly (e.g., some Kubernetes resources are missing because of a wrongly evaluated condition, or you get a YAML syntax error). In that case, it’s helpful to render the Helm templates locally.
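A minimal sketch of local rendering; the release name, chart path, and template file below are placeholders for your own:

```shell
# Render the whole chart locally without touching the cluster;
# --debug also prints the computed values and any rendering errors.
helm template my-release ./my-chart -f values.yaml --debug

# Narrow a YAML error down to a single template file
helm template my-release ./my-chart -f values.yaml \
  --show-only templates/deployment.yaml
```

Because nothing is installed, you can iterate on templates and values quickly and diff the output between runs.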
helm get manifest release-name
helm template will help you if you are deploying the Helm chart manually. When deploying with an orchestration tool like Terraform, using that command directly might not be convenient, since some of your Helm values might be defined by Terraform variables.
helm get manifest shows you the manifest of an already deployed release (note that it takes the release name, not the chart name).
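A quick sketch, with placeholder release and namespace names; listing releases first helps when you don’t remember the exact name:

```shell
# Find the release name and its namespace
helm list --all-namespaces

# Dump the manifest exactly as it was deployed
helm get manifest my-release -n my-namespace
```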
helm get hooks release-name
The same as the previous command, but for hooks, which are not part of the get manifest output.
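For completeness, a sketch with a placeholder release name; helm get all combines both outputs:

```shell
# Only the hook resources (pre-install jobs, etc.)
helm get hooks my-release

# Manifest, hooks, values, and notes in one output
helm get all my-release
```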
kubectl get events / kubectl describe resource-type/resource-name
Let’s say your chart has no syntax error, and everything looks OK when you check your rendered template, but your pod is still unable to start (it remains Pending). This issue can occur for many reasons (off the top of my head: insufficient permissions to pull the Docker image, issues while mounting volumes, …). You can list all namespace events with the command
kubectl get events, or show information about a single resource (e.g., a pod) with the command
kubectl describe pod/pod-name, which tells you the reason for that Pending state.
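A short sketch of both approaches; the pod name is a placeholder, and filtering on Warning events is a handy way to surface only the failures:

```shell
# All events in the current namespace, oldest first
kubectl get events --sort-by=.metadata.creationTimestamp

# Only warnings: failed image pulls, scheduling problems, ...
kubectl get events --field-selector type=Warning

# Everything Kubernetes knows about one pod, events included at the bottom
kubectl describe pod/my-pod
```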
kubectl logs pod-name
Your pod finally starts but crashes shortly after. This command is the equivalent of
docker logs container-name, which shows container logs. In the case of multiple containers in a pod, a non-default one can be selected with the -c parameter.
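A sketch of the variants I reach for most often (pod and container names are placeholders); --previous is especially useful for crash loops, since the current container may not have logged anything yet:

```shell
kubectl logs my-pod                # default container
kubectl logs my-pod -c sidecar     # a specific container in the pod
kubectl logs my-pod --previous     # logs of the previous, crashed instance
kubectl logs my-pod -f --tail=100  # follow live, starting from the last 100 lines
```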
kubectl exec -it pod-name -- bash
Your pod starts, but the containers are not behaving correctly, making it necessary to debug the container at runtime. This command executes an interactive bash shell inside a pod. In the case of multiple containers in a pod, a non-default one can be selected with the -c parameter. It is the equivalent of
docker exec -it container-name bash. There are some limitations compared to the Docker variant. E.g., it is impossible to run a command as a particular user (there is no --user equivalent), which makes debugging more complicated for containers running as non-admin users.
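A brief sketch with placeholder names; minimal images often ship only sh, so it is worth trying that when bash is missing:

```shell
kubectl exec -it my-pod -- bash            # interactive shell, default container
kubectl exec -it my-pod -c sidecar -- sh   # specific container, minimal image
kubectl exec my-pod -- env                 # one-off command, no shell needed
```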
kubectl debug -it pod-name --image=busybox --target=container-name
Sometimes you need to debug a container at runtime, but it doesn’t contain any shell. Or you need to be root, but your container is running as a non-admin account. kubectl debug allows you to start an ephemeral container attached to the target container’s process namespace. The additional benefit of this approach is that you can use your own debugging image containing all the tools you need. Thanks to the shared process namespace, you can also access the target container’s filesystem, e.g., under /proc/1/root/.
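A minimal sketch, assuming a pod named my-pod with a shell-less container named app:

```shell
# Attach a busybox ephemeral container to the app container's process namespace
kubectl debug -it my-pod --image=busybox --target=app

# Inside the ephemeral container: the target's processes are visible,
# and its filesystem is reachable through /proc/<pid>/root
ps aux
ls /proc/1/root/
```

Note that ephemeral containers require them to be enabled in your cluster (they are stable as of Kubernetes 1.25).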
Workflows for debugging Kubernetes apps and Helm charts are similar to those used with configuration management tools and Docker Compose. Kubernetes brings added complexity in the number of tools that need to be adopted. On the other hand, it also brings advanced functionality that makes debugging more powerful, which counts when the shit hits the fan.