Introduction
Docker is a powerful container deployment and management tool, but its abstractions can sometimes cause headaches when you need to debug issues with Docker applications. I wrote this blog post to alleviate some of mine, structured as a sort-of checklist.
When working with Docker, it is useful to think in terms of three aspects of dockerisation: the build, the container, and the Docker runtime (or daemon) itself.
Debugging the Build
This is the most straightforward, but I include it for the sake of completeness.
Docker images are built using the docker build command:
docker build .
This picks up a Dockerfile from the current working directory (the . argument is the build context) and executes the instructions in that file to build a Docker image.
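In practice the build command usually carries a few more flags. As a sketch with hypothetical names, -t tags the resulting image and -f points at a Dockerfile that is not in the working directory:
# tag the image as myapp:debug, using docker/Dockerfile as the build file
docker build -t myapp:debug -f docker/Dockerfile .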
Some build scripts run this command with a -q option, which suppresses the build logs from stdout, like so:
docker build -q .
In this case, simply remove the -q from the command, perhaps only temporarily. This stops Docker from suppressing the build logs, which should quickly pinpoint where the build error lies.
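If your Docker installation builds with BuildKit, it can also help to request plain, uncached output so every build step's logs are printed in full; something along these lines should do it:
# print full, uncollapsed logs for every step, ignoring the build cache
docker build --progress=plain --no-cache .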
Debugging the Container
Docker images that have been built need to be run. For those unfamiliar with the concept, think of a docker image as a blueprint. It defines the behavior of an application, but that application still needs to be “spun up”. This is done with the docker run command, which spins up the application as a container.
Run without detached mode
The general way to run a Docker image is with the following command:
docker run <image>
Run this way, the container's output is streamed straight to your terminal, unless it is run in detached mode, like so:
docker run -d <image>
So if a container fails to run, it can be helpful to check whether it is running in detached mode first, and then switch detached mode off. This lets you see what logs are thrown up when the container starts.
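As a minimal sketch, assuming a hypothetical image called myapp, the change is simply dropping the -d flag:
# instead of running detached...
docker run -d myapp:latest
# ...run in the foreground to see the startup logs
docker run myapp:latest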
Inspect the container logs
There is also another way to view these container logs. This can be useful if:
- The docker container is being spun up by a script we do not want to touch.
- The docker container is spun up by a docker compose file.
- The docker container is actually a long-running process throwing up errors, but these errors aren’t killing the container, or we just want to see the application logs directly.
First we get the container id of the application we want to inspect:
docker ps
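Note that docker ps only lists running containers. If the container has already exited, which is often the case when something has gone wrong, add the -a flag to include stopped containers as well:
# list all containers, including stopped ones
docker ps -a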
Then we use the container id obtained and run the following command:
docker logs <container_id>
This will allow us to inspect the stdout and stderr of the container’s running process. Note that this only works if the application itself prints to these stdout and stderr streams. If it doesn’t, try the next approach.
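A couple of variations on docker logs are also worth knowing before moving on: -f follows the log output live, and --tail limits it to the most recent lines. If the container was started via docker compose, the same logs can be addressed by service name; assuming a hypothetical service called web:
# follow the last 100 lines of a container's logs
docker logs -f --tail 100 <container_id>
# the docker compose equivalent, by service name
docker compose logs -f web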
Inspect the application log files in the container itself
Sometimes applications do not print to either stdout or stderr, but instead write their logs to a file inside the container. In this case it can be helpful to read the contents of the logfile directly. The general command to do this would look like the following:
docker exec -it <container_id> <read_command> /path/to/logfile.log
For example, if we want to inspect the logs of a running Apache Airflow container, with the image based off an Apache Airflow Python image, the logs live under /opt/airflow/logs. Since that path is a directory, we can list it first and then read the specific logfile we are after:
docker exec -it <airflow_container_id> ls /opt/airflow/logs
docker exec -it <airflow_container_id> cat /opt/airflow/logs/<path_to_logfile>.log
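If you are not sure where the logfile lives, it can be easier to open a shell inside the container and look around. Assuming the image ships with sh (most do; use bash instead if it is available):
# open an interactive shell inside the container
docker exec -it <container_id> sh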
Debugging the Docker runtime itself
The docker runtime or daemon is the brain behind the whole operation. It is also the program that runs each docker container. Debugging this layer can be useful for identifying higher-level concerns not related to the application in question. A good tell that this is where your problem lies is when docker logs <container_id> generates logs that suddenly cut off without any error message being thrown. To access the docker runtime / daemon logs, use the following commands.
Linux-based operating systems:
journalctl -xu docker.service
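On systemd-based distributions, journalctl can also narrow the time window and follow the daemon logs live, which is handy when trying to reproduce a failure; for example:
# follow docker daemon logs from the last hour onwards
journalctl -u docker.service --since "1 hour ago" -f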
macOS:
cat ~/Library/Containers/com.docker.docker/Data/log/vm/dockerd.log
cat ~/Library/Containers/com.docker.docker/Data/log/vm/containerd.log
In the case of macOS, dockerd.log and containerd.log can be treated simply as the logs of two different layers. For practical purposes, and at the risk of oversimplification, we can consider dockerd as wrapping containerd. In my experience, inspecting the dockerd.log file is generally enough, but it doesn’t hurt to look at containerd.log if all else fails.
For a comprehensive list of updated commands, or in the event that you use neither Linux nor macOS, you may look up the docs here.
For an example of how looking at the docker runtime / daemon has helped solve a real-life problem, check out this stackoverflow post. In that case, the docker user was able to use the daemon logs to identify the time at which the container in question was going down, and the context in which it was happening (a cronjob gone rogue).
Tip – Check your allocated docker memory
Sometimes error messages from the docker daemon logs can be cryptic. For example, in a recent situation, the only hint I got for a container failing to start was this message:
time="2022-08-31T09:25:24.837393700Z" level=info msg="ignoring event" container=4e4c8f232e2ef2bd0e4f6146ae91b20da919912bf6e552568f2f467a57f1094a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
That was it.
It turns out that the above was caused by a lack of allocated docker memory. All I did was increase it from the default 4 GB of RAM (find out how here), and that solved the problem for me.
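To sanity-check what Docker is actually working with, docker info reports the total memory available to the daemon, and docker stats shows live per-container memory usage against its limit:
# total memory the docker daemon can use
docker info | grep -i "total memory"
# live CPU / memory usage per running container
docker stats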
Hope this helps!