Using Docker Volume Mounts Without Overwriting the Image Contents

06 Jul 2022 · 3 min read

Docker volume mounts are commonly used to allow specific files and directories to be persisted outside of the container lifecycle. In other words, volume mounts provide a way of letting data persist even after a container has been deleted. The other convenient effect of this behaviour is that it can allow data to be shared between multiple containers.

This is often leveraged in web application configurations where one container may be running software responsible for generating static assets, while another container may be responsible for serving those assets (e.g. Nginx or Caddy).

version: "3.7"

services:
  caddy:
    image: caddy:2.5.1-alpine
    restart: unless-stopped
    ports:
      - "80:80"
    volumes:
      - ./public:/usr/share/caddy:ro
      - ./caddy_data:/data
  app:
    build:
      context: .
    restart: unless-stopped
    volumes:
      - ./public:/app/public

In the above docker-compose.yml example, Caddy and another service share a directory which is also bind-mounted on the host machine at ./public. If either of these containers stops or is removed, the files remain on the host and are accessible to any replacement containers.
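
Because ./public is a bind mount, this persistence is easy to verify from the host. A minimal sketch, assuming the docker-compose.yml above is in the current directory and the app service writes its generated assets into /app/public:

docker compose up -d   # start caddy and app
ls ./public            # generated assets appear on the host
docker compose down    # stop and remove both containers
ls ./public            # the files are still there
docker compose up -d   # replacement containers see the same files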

Another common pattern with Docker Images is to generate static assets at build time, so that the assets are baked into the resulting Docker Image. The downside of this approach is that it is incompatible with the volume mounting strategy described above: if your Docker Image contained assets at /app/public and you bind-mounted a host directory over that same path, the image's contents would be obscured by the mount, effectively making all the assets disappear from the running container.
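
To see the problem in isolation, here is a minimal sketch using the stock nginx:alpine image, which ships default pages under /usr/share/nginx/html (./empty is just an illustrative empty host directory):

# The image contains default pages...
docker run --rm nginx:alpine ls /usr/share/nginx/html

# ...but bind-mounting an empty host directory over that path hides them
mkdir -p ./empty
docker run --rm -v "$PWD/empty:/usr/share/nginx/html" nginx:alpine ls /usr/share/nginx/html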

The good news is that there's a simple way of allowing files within a Docker Image to be made available to volume mounts!

The Dockerfile VOLUME instruction can be used to explicitly declare directories as volume mount points. When a newly created (empty) volume is mounted at such a directory, the directory's contents from the image are copied into the volume, rather than being hidden as they would be by a bind mount.

FROM alpine

RUN mkdir -p /app/public
RUN echo "Hello, world!" > /app/public/welcome.txt
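# declare /app/public as a volume mount point; its contents will seed any new volume mounted here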
VOLUME /app/public

ENTRYPOINT sleep infinity

This sample Dockerfile creates a small text file in a directory declared as a volume mount point.
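
Even before wiring up a second container, you can check that starting this image creates an anonymous volume seeded with the file. A quick sketch (the welcome-check container name is arbitrary):

docker build -t welcome .
docker run -d --name welcome-check welcome
docker inspect -f '{{ json .Mounts }}' welcome-check   # an anonymous volume is mounted at /app/public
docker rm -fv welcome-check                            # -v also removes the anonymous volume

There are two ways we can leverage this from other containers: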

Mounting All Volumes from Another Container

It is possible to automatically mount all volumes present on another container. That is, if the source container has five different VOLUME instructions throughout its Dockerfile, you'll end up with five different directories shared. This can be achieved using --volumes-from with docker run (or volumes_from in Docker Compose). Using the above Dockerfile, the following two examples are functionally equivalent:

docker build -t welcome . # run from directory containing the example Dockerfile
docker run --rm -d --name=welcome welcome
docker run --rm --volumes-from=welcome alpine cat /app/public/welcome.txt

version: "2.4" # volumes_from was removed from the v3 compose file format

services:
  test:
    image: alpine
    command: "cat /app/public/welcome.txt"
    volumes_from:
      - app
  app:
    build:
      context: . # ensure the Dockerfile is present in the same directory
    restart: unless-stopped

# and then run docker compose up

There are two potential caveats of this approach:

  1. Directories must match across containers: there's no way to have /app/public on one container and /something/else on another. For a static file server like Nginx or Caddy, this means either configuring the server to serve from a non-default directory, or customising your asset pipeline to output directly to the directory the server expects (see the sketch after this list).
  2. All volumes are mounted: a container may declare many volumes, and you might only want a subset of them mounted into another container (perhaps you have several running containers, each mounting subsets of one another's volumes). --volumes-from does not grant this level of granularity.
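
As an illustration of the first caveat: the official Caddy image serves from /usr/share/caddy by default, so a Caddy container sharing the app service's volumes would need its document root pointed at /app/public instead. A minimal sketch of such a service, assuming the app service from the Compose example above (overriding the container command is just one way to change the root):

  caddy:
    image: caddy:2.5.1-alpine
    command: caddy file-server --root /app/public --listen :80
    ports:
      - "80:80"
    volumes_from:
      - app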

Explicitly Creating Volumes to Share Between Containers

When a VOLUME instruction is used on a particular directory, the directory's contents from the image are automatically copied into any new, empty volume mounted there at docker run time. This means we can use ordinary named-volume mounting between containers:

docker build -t welcome . # run from directory containing the example Dockerfile
docker run --rm --name welcome -v welcome:/app/public -d welcome
docker run --rm -v welcome:/greeting alpine cat /greeting/welcome.txt

version: "3.7"

services:
  test:
    image: alpine
    command: "cat /greeting/welcome.txt"
    volumes:
      - public:/greeting
  app:
    build:
      context: . # ensure the Dockerfile is present in the same directory
    restart: unless-stopped
    volumes:
      - public:/app/public
volumes:
  public:

# and then run docker compose up

The two examples above are again functionally equivalent, using the Docker CLI and Docker Compose respectively.

The two caveats of the --volumes-from approach are not an issue here: we can selectively mount exactly what we want, where we want.

The Three Different Methods of Volume Mounting

To summarise, there's a choice of three methods when it comes to volume mounting:

  1. Host + Container mounts: allow you to share files/directories with the host machine, and then to share those directories with other containers. The mounted paths inside the containers are replaced with the host's contents.
  2. Inter-container mounts via the VOLUME instruction and --volumes-from: no binding to the host filesystem, but a relatively simple way of sharing entire directories between containers, preserving image contents along the way.
  3. Inter-container mounts via the VOLUME instruction and explicit Docker Volumes: no binding to the host filesystem, but fine-grained control over which containers mount which volumes (and where), while still preserving image contents.