Docker volume mounts are commonly used to allow specific files and directories to be persisted outside of the container lifecycle. In other words, volume mounts provide a way of letting data persist even after a container has been deleted. The other convenient effect of this behaviour is that it can allow data to be shared between multiple containers.
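As a minimal sketch of this persistence behaviour (the service and volume names here are illustrative, not from any particular project), a named volume declared in Docker Compose outlives the containers that use it — recreating the `db` service reattaches the same volume:

```yaml
version: "3.7"

services:
  db:
    image: postgres:14-alpine
    volumes:
      # data written here survives container removal and recreation,
      # because it lives in the named volume rather than the container layer
      - db_data:/var/lib/postgresql/data

volumes:
  db_data:
```

Note that `docker compose down` leaves named volumes in place by default; they are only removed if you explicitly pass `-v`.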
This is often leveraged in web application configurations where one container runs the software responsible for generating static assets, while another container is responsible for serving those assets (e.g. Nginx or Caddy).
```yaml
version: "3.7"

services:
  caddy:
    image: caddy:2.5.1-alpine
    restart: unless-stopped
    ports:
      - "80:80"
    volumes:
      - ./public:/usr/share/caddy:ro
      - ./caddy_data:/data

  app:
    build:
      context: .
    restart: unless-stopped
    volumes:
      - ./public:/app/public
```
In the above `docker-compose.yml` example, Caddy and another service share a directory which is also mounted on the host machine at `./public`. If either of these containers stops, the files will remain, and will be accessible to any replacement containers.
Another common pattern with Docker Images is to generate static assets at build time, such that the assets are a part of the resulting Docker Image. The downside of this approach is that it is incompatible with the volume mounting strategy described above. If your Docker Image contained assets at `/app/public` and you tried to create a volume mount targeting that same directory, the container's directory would get obscured by the mount, effectively making all the assets disappear.
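For instance, a bind mount like the following sketch (the host directory name is hypothetical) replaces whatever the image placed at `/app/public` with the contents of the host directory — if the host directory is empty, the container sees an empty directory:

```yaml
version: "3.7"

services:
  app:
    build:
      context: .
    volumes:
      # the host directory is mounted OVER /app/public, hiding any
      # assets that were baked into the image at that path
      - ./empty-host-dir:/app/public
```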
The good news is that there's a simple way of allowing files within a Docker Image to be made available to volume mounts!
The `VOLUME` instruction can be used to explicitly declare directories which, when targeted for a volume mount, will have their contents copied into the newly created volume.
```dockerfile
FROM alpine

RUN mkdir -p /app/public
RUN echo "Hello, world!" > /app/public/welcome.txt

VOLUME /app/public

ENTRYPOINT sleep infinity
```
This sample Dockerfile creates a small text file in a directory targeted as a mount point. There are two ways we can leverage this:
It is possible to automatically mount all volumes present on another container. That is, if the source container had five different `VOLUME` instructions throughout its Dockerfile, you'd end up with five different directories shared. This can be achieved using the `--volumes-from` flag of `docker run` (or `volumes_from` in Docker Compose). Using the above Dockerfile as an example, the following two examples are functionally equivalent:
```shell
docker build -t welcome . # run from directory containing the example Dockerfile
docker run --rm -d --name=welcome welcome
docker run --rm --volumes-from=welcome alpine cat /app/public/welcome.txt
```
```yaml
version: "3.7"

services:
  test:
    image: alpine
    command: "cat /app/public/welcome.txt"
    volumes_from:
      - app

  app:
    build:
      context: . # ensure the Dockerfile is present in the same directory
    restart: unless-stopped

# and then run docker compose up
```
There are two potential caveats of this approach:
- Directories must match across containers: there's no ability to have `/app/public` on one container, and `/something/else` on another. In the case of something like Nginx or Caddy being used for static file serving, this would mean you'd have to introduce additional configuration to ensure that the static file server serves from a non-default directory, or you'd have to customise your asset pipeline to explicitly output to a directory suitable for the static file server.
- All volumes are mounted: sometimes a container may have many volumes, and you might only want a subset of them mounted into another container (perhaps you have several running containers, each mounting subsets of one another's volumes). `--volumes-from` does not grant this level of granularity.
When the `VOLUME` instruction is used on a particular directory, the contents of that directory within the image automatically get copied into a volume during `docker run`. This means we can use ordinary volume mounting between containers:
```shell
docker build -t welcome . # run from directory containing the example Dockerfile
docker run --rm --name welcome -v welcome:/app/public -d welcome
docker run --rm -v welcome:/greeting alpine cat /greeting/welcome.txt
```
```yaml
version: "3.7"

services:
  test:
    image: alpine
    command: "cat /greeting/welcome.txt"
    volumes:
      - public:/greeting

  app:
    build:
      context: . # ensure the Dockerfile is present in the same directory
    restart: unless-stopped
    volumes:
      - public:/app/public

volumes:
  public:

# and then run docker compose up
```
The above two examples are again functionally equivalent, using Docker CLI and Docker Compose, respectively.
The two caveats mentioned in the `--volumes-from` approach are not an issue here: we can selectively mount exactly what we want, where we want.
To summarise, there's a choice of three methods when it comes to volume mounting:
- Host + Container mounts: allows you to share files/directories with a host machine, and then to share those directories with other containers. Replaces the files/directories within the containers with those of the host.
- Inter-container mounts via the `--volumes-from` flag: no binding to the host filesystem, but provides a relatively simple way of sharing entire files/directories between containers, preserving image contents along the way.
- Inter-container mounts via the `VOLUME` instruction and explicit Docker Volumes: no binding to the host filesystem, but allows fine-grained control over which containers have access to other containers' volumes, while additionally allowing image contents to be preserved.