Prepare your containers for the worst: a DFIR case

Luis Toro (aka @LobuhiSec)
4 min read · Jun 14, 2024


Recently, a colleague told me he was facing a forensic case involving a service hosted in a container. The client, as is often the case, panicked, stopped the container, created a disk image, and sent it off for analysis.

Searching the disk for evidence turned up no application logs and no trace of the container. There were plenty of theories about the container’s whereabouts, but even the most meticulous find commands failed to locate any string related to it; the only thing he found was the docker compose file.

First lesson: Save the logs

Containers are ephemeral by nature. Because they are built on Linux namespaces, they get their own filesystem, separate from the host that runs them, and that leads to a well-known problem: data persistence.

You have probably seen or experienced how easy it is to bring up any type of database with Docker and how in a few seconds, with just a few environment variables, you can have any version of Mongo, Redis, or MySQL running and ready to receive data. But what happens if the container is deleted? You lose the database and all its information.
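As a minimal sketch of that workflow (the image tag, names, and credentials here are made up for illustration), notice that without a volume the data lives only inside the container:

```sh
# Bring up a database in seconds with a couple of environment variables.
docker run -d --name db \
  -e MYSQL_ROOT_PASSWORD=changeme \
  -e MYSQL_DATABASE=shop \
  mysql:8.0

# Remove the container and the database disappears with it: there is no volume.
docker rm -f db
```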

To address this issue, we need a solution that allows us to persistently store the database information so that it doesn’t disappear every time we delete the container. This solution is called volumes.

Volumes are essentially filesystems mounted inside the container that are actually part of the host. In other words, it’s a folder or path shared by the host and the container, so even if the container is deleted, the information will remain persistent on the host and can be mounted repeatedly in subsequent container runs.
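Continuing the same sketch, and assuming a hypothetical host directory such as /srv/dbdata bind-mounted onto MySQL’s data path, the data now outlives the container:

```sh
# Mount a host directory onto the path where MySQL keeps its data files.
docker run -d --name db \
  -e MYSQL_ROOT_PASSWORD=changeme \
  -v /srv/dbdata:/var/lib/mysql \
  mysql:8.0

# The container can be deleted...
docker rm -f db

# ...and a new one mounting the same host path picks the data right back up.
docker run -d --name db2 \
  -e MYSQL_ROOT_PASSWORD=changeme \
  -v /srv/dbdata:/var/lib/mysql \
  mysql:8.0
```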

This company already understood the importance of volumes and used them to ensure data persistence for the application and to exchange configuration files. However, there is an essential path, overlooked in this case, that deserves a volume of its own when creating a container: /var/log (or wherever your app stores its logs).
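In a compose file it is just one more entry in the volumes list. A rough sketch, with hypothetical service, image, and path names:

```yaml
services:
  app:
    image: myorg/myapp:latest       # hypothetical application image
    volumes:
      - ./data:/var/www/data        # application data (already covered)
      - ./config:/etc/myapp:ro      # configuration exchange (already covered)
      - ./logs:/var/log             # the overlooked one: keep the logs on the host
```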

Of course, it’s not enough to simply create a volume for /var/log; you need to manage these logs, ingest them into a SIEM or monitoring tool, and periodically remove them. If a container is compromised, its logs will be too.

It’s not strictly necessary to create a volume in order to ingest logs into a SIEM, but it is the best approach: the alternative is to add the SIEM or monitoring agent inside the container, which increases the image size and affects its performance. It’s also easier to manage these agents at the node level, extracting data from /var/log, than to run as many agents as there are containers, which quickly becomes chaotic.
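One common pattern, sketched here with Fluent Bit purely as an example (any agent your SIEM supports would do, and its input/output configuration is omitted and would still need to be provided), is a single node-level shipper that reads the host paths instead of an agent baked into every image:

```yaml
services:
  log-shipper:
    image: fluent/fluent-bit:latest   # example agent; pick whatever your SIEM supports
    volumes:
      - ./logs:/app-logs:ro                                        # application logs persisted via the volume above
      - /var/lib/docker/containers:/var/lib/docker/containers:ro   # stdout/stderr captured by Docker
```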

The first lesson is this: If you care about the security of your containerized applications and want visibility into everything that’s happening, safeguarding the logs should be one of your priorities.

Second lesson: Do NOT delete your container, just stop it

In this scenario, another issue was the complete disappearance of the container. A container’s lifecycle is not a simple matter of existing or not existing; there are other states. However, it’s common, especially in environments where we don’t want dozens of finished or stopped containers eating up storage, to launch them with the --rm parameter, which deletes the container as soon as its execution ends. If we don’t use this parameter, the container remains as Stopped or Terminated when it finishes, allowing us to relaunch it or even recover its information under /var/lib/docker/containers in the case of Docker.
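A quick way to see the difference, assuming a default Docker installation (container and name are throwaway examples):

```sh
# With --rm the container vanishes the moment it exits: nothing left to examine.
docker run --rm --name job alpine echo "done"
docker ps -a --filter name=job        # no results

# Without --rm it remains as Exited, and its data stays on disk.
docker run --name job alpine echo "done"
docker ps -a --filter name=job        # STATUS: Exited (0)
sudo ls /var/lib/docker/containers/   # one directory per container, holding its metadata and logs
```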

But in this case we were not using Docker directly, but Docker Compose, and its operation is slightly different. With Compose, the everyday command is not run but up, which brings up all the services defined in the YAML file. And what’s the opposite of up? Down? Indeed, the go-to command in Compose to stop execution is docker compose down, but this not only stops the containers, it also deletes them, and their information is no longer available even at the disk level, since it disappears from /var/lib/docker/containers.
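In other words, on a typical Compose project:

```sh
docker compose up -d    # create and start every service defined in the YAML file
docker compose down     # stop AND remove the containers (and the default network)
docker ps -a            # nothing left to inspect; their directories under
                        # /var/lib/docker/containers are gone as well
```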

Lesson learned: During the stress of a DFIR involving containers, we shouldn’t rush to use the first command that comes to mind without considering the consequences. In this case, docker compose down deleted the container and all the logs generated by the application. With plain Docker, stop won’t save us if we started the container with the --rm option, but in Compose, stop gracefully halts the containers and leaves their trace as stopped containers in /var/lib/docker/containers, from which we can recover some logs.
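If the goal is containment without destroying evidence, a safer sequence looks roughly like this (container IDs are placeholders):

```sh
docker compose stop               # halt the services; the containers remain on disk as Exited

docker ps -a                      # identify the stopped containers and their IDs
docker inspect <container_id>     # configuration, mounts, network settings
docker logs <container_id>        # stdout/stderr still readable, because the container still exists
sudo ls /var/lib/docker/containers/<container_id>/
                                  # config.v2.json, hostconfig.json, <id>-json.log (default json-file driver)
```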

Conclusion

Containers are not inherently prepared to persist evidence for a DFIR case. It is the responsibility of administrators and the security team to establish minimum standards that guarantee log persistence, and to make sure that, when the time comes to halt an attack and produce a usable disk image, containers are stopped rather than deleted outright.
