Thank you for joining for another segment of Docker Compose skill building! In our last segment, we saw how to share a volume between two containers. We demonstrated the shared volume by showing that the shared volume directory in both containers was empty; we then added a file (“dog.png”) to one of the containers, and showed that the same file was now present in the second container!
But there’s still one more step beyond this. Sometimes, you want to build one or more Docker containers to process data you’ve already collected outside of the Docker environment. In this case, you want to transfer the data within a local directory to the volume shared between the containers. This is exactly what we’ll tackle in this tutorial.
The setup: Directories
Since we’re adding a new capability to our shared volume, we’ll need to set things up a bit differently in our files. Let’s start by walking through each file. Initially, our folder structure will look like this. We have a folder containing three items: (1) a data directory named “smee”, containing a file called “hyper.txt”, (2) a Docker file named “dockerfile”, and (3) a Docker Compose file named “docker-compose.yml”.
The setup: Docker file
For this demonstration, we’ll boil the Docker file down quite a bit; all we need for this tutorial is an image.
```dockerfile
FROM alpine:3.14 AS build
```
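It may seem odd that the Docker file never copies our data in, but that’s the point of this technique: the data arrives at runtime through the bind mount, not at build time. For contrast, here’s a hedged sketch of what the *alternative* approach might look like if we instead baked the data into the image itself (the `COPY` path here is an illustrative assumption, not part of this tutorial’s setup):

```dockerfile
FROM alpine:3.14 AS build

# Alternative (NOT what we do here): copy the local data folder
# into the image at build time. This freezes the data at build
# time, so changes on the host would not be visible in the
# container without rebuilding.
COPY smee/ /data/
```

With the bind-mount approach we use in this tutorial, the image stays generic and the host folder stays the single source of truth.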
The setup: Docker Compose file
For the most part, our Docker Compose file looks very similar. The first line defines the version number, and we have two services, container1 and container2 — each of these will be defined as independent (albeit identical) Docker containers after the Docker Compose file runs. Furthermore, each container is defined by our Docker file above, which resides in our current directory.
```yaml
version: '3.9'

# Define services
services:
  container1:
    container_name: first_container
    build:
      context: .
      dockerfile: dockerfile
    volumes:
      - type: bind
        source: "/C/Users/apung/Desktop/docker_intro/smee"
        target: /data
    command: ["sleep", "inf"]

  container2:
    container_name: second_container
    build:
      context: .
      dockerfile: dockerfile
    volumes:
      - type: bind
        source: "/C/Users/apung/Desktop/docker_intro/smee"
        target: /data
    command: ["sleep", "inf"]
```
In the previous tutorial, we explicitly defined a volume shared between two containers. In this case, however, our shared volume will be the data folder the containers share. That is, any changes made within the containers will also be stored in the “smee” data folder in real time. So there’s a clear paradigm shift using this technique, which first becomes evident in our Compose file. You’ll notice we’ve removed the top-level `volumes` key, and the `volumes` definition within each container also looks different.
Now, our volume definition consists of an array (denoted by the hyphen) of properties. The `type: bind` property tells Docker that we plan on binding a `source` folder (our external folder) to a `target` folder. The `source` folder will be our data directory, and the `target` folder will be the folder within each container where the shared data will be stored. Since we are on Windows and building for Linux, we will explicitly state the path of our data folder.
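As a side note, hard-coding an absolute Windows path makes the Compose file non-portable. Docker Compose also accepts relative paths for bind mounts, resolved against the directory containing `docker-compose.yml`, so a more portable (but otherwise equivalent) service definition might look like the sketch below:

```yaml
services:
  container1:
    container_name: first_container
    build:
      context: .
      dockerfile: dockerfile
    volumes:
      # Long syntax with a relative path, resolved against the
      # folder that holds docker-compose.yml:
      - type: bind
        source: ./smee
        target: /data
      # The short syntax would be equivalent:
      #   - "./smee:/data"
    command: ["sleep", "inf"]
```

Either form produces the same bind mount; the relative path simply lets the project run from any location without editing the file.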
Demonstrating the shared volume
To make sure we start from a clean slate, let’s go ahead and clean up anything we may have lying around.
```shell
# Make sure Docker Compose is spun down
vscode ➜ /comdocker $ docker-compose down -v
Removing network comdocker_default
WARNING: Network comdocker_default not found.
Removing volume comdocker_prometheus-data
WARNING: Volume comdocker_prometheus-data not found.

# Remove stopped containers, unused networks, etc.
vscode ➜ /com.docker $ docker system prune
WARNING! This will remove:
  - all stopped containers
  - all networks not used by at least one container
  - all dangling images
  - all dangling build cache
Are you sure you want to continue? [y/N] y
Total reclaimed space: 0B

# Double checking to make sure no old volumes persist
vscode ➜ /com.docker $ docker volume ls
DRIVER    VOLUME NAME
local     vsCodeServerVolume-docker_intro-condescending_hofstadter

# Make sure there are no containers lying around
vscode ➜ /com.docker.devenvironments.code $ docker-compose ps
Name   Command   State   Ports
------------------------------
```
It certainly appears we have a clean slate, so let’s go ahead and run the Docker Compose script with the `--detach` (`-d`) flag activated.
```shell
vscode ➜ /com.docker $ docker-compose up --build -d
Building container
[+] Building 1.3s (5/5) FINISHED
 => [internal] load build definition from dockerfile                  0.0s
 => => transferring dockerfile: 107B                                  0.0s
 => [internal] load .dockerignore                                     0.0s
 => => transferring context: 2B                                       0.0s
 => [internal] load metadata for docker.io/library/alpine:3.14        1.2s
 => [1/1] FROM docker.io/library/alpine:3.14@sha256:06b5d462c92fc39303e6363c65e074559f8d6b1363250027ed5053557e3398c5  0.0s
 => => resolve docker.io/library/alpine:3.14@sha256:06b5d462c92fc39303e6363c65e074559f8d6b1363250027ed5053557e3398c5  0.0s
 => => sha256:06b5d462c92fc39303e6363c65e074559f8d6b1363250027ed5053557e3398c5 1.64kB / 1.64kB  0.0s
 => => sha256:a6bb9fc72d69d4ae3bc76875a7614c271db2ee25cfbdd8b7ed36ffa3e1c903be 528B / 528B      0.0s
 => => sha256:e04c818066afe78a0c9379f62ec65aece28566024fd348242de92760293454b8 1.47kB / 1.47kB  0.0s
 => exporting to image                                                0.0s
 => => exporting layers                                               0.0s
 => => writing image sha256:9c2e2769fd2b591617febce4a519e7614d282960b3c9a1fc78b4df3c6dc1e53c    0.0s
 => => naming to docker.io/library/comdocker_container                0.0s

Use 'docker scan' to run Snyk tests against images to find vulnerabilities and learn how to fix them

Building container2
[+] Building 0.3s (5/5) FINISHED
 => [internal] load build definition from dockerfile                  0.0s
 => => transferring dockerfile: 37B                                   0.0s
 => [internal] load .dockerignore                                     0.0s
 => => transferring context: 2B                                       0.0s
 => [internal] load metadata for docker.io/library/alpine:3.14        0.2s
 => CACHED [1/1] FROM docker.io/library/alpine:3.14@sha256:06b5d462c92fc39303e6363c65e074559f8d6b1363250027ed5053557e3398c5  0.0s
 => exporting to image                                                0.0s
 => => exporting layers                                               0.0s
 => => writing image sha256:9c2e2769fd2b591617febce4a519e7614d282960b3c9a1fc78b4df3c6dc1e53c    0.0s
 => => naming to docker.io/library/comdocker_container2               0.0s

Use 'docker scan' to run Snyk tests against images to find vulnerabilities and learn how to fix them

Creating first_container  ... done
Creating second_container ... done
```
At this point, both of our containers have been built and both are still running due to the `command: ["sleep","inf"]` Compose command. We could confirm the existence of both containers with `docker-compose ps`, but we’re more interested in the contents of the `/data` directory of both containers.
Let’s go ahead and poke around in container1 by navigating to its shared volume directory and looking at any existing files. Remember, we expect to find a directory titled “data” containing a single file (“hyper.txt”).
NOTE: In the rest of the commands, an octothorpe (#) will usually appear between the current directory and the command (e.g., `/data # exit`). The # is important, because it tells us that we are executing these commands with root privileges! However, when rewriting the code here for demonstration, the code interpreter (Bash/Shell) interprets “#” as “okay, a comment begins here”. For that reason, the “#” has been replaced with a “>”, so that the previous syntax would read as `/data > exit`.
```shell
# Step inside container1
vscode ➜ /com.docker $ docker-compose run container1 sh
Creating comdocker_container1_run ... done

# Change to data directory and list files
/ > cd data; ls
hyper.txt

# Exit container
/data > exit
```
This matches what we see in our Windows Explorer window for the same directory:
Now, let’s do the same thing with container2. Except this time, we’ll add another file, “dog.png”, after we validate the existence of “hyper.txt” within the shared `/data` directory:
```shell
# Step inside container2
vscode ➜ /com.docker $ docker-compose run container2 sh
Creating comdocker_container2_run ... done

# Change to the shared volume directory
/ > cd data; ls
hyper.txt

# Add another file
/data > touch dog.png

# List files in data folder
/data > ls
dog.png    hyper.txt

# Exit container
/data > exit
```
If the shared volume is truly shared, then the newly added file should also be visible in container1 and it should be visible in the Windows Explorer window:
```shell
# Step inside container1
vscode ➜ /com.docker $ docker-compose run container1 sh
Creating comdocker_container_run ... done

# Change to data directory and list files
/ > cd data; ls
dog.png    hyper.txt
```
…and here is a screenshot of the Windows Explorer:
Great! So here we can see that the file has been added to the shared directory of `container1`, indicating that the volumes are truly linked. Additionally, we can see the new file in the Windows Explorer window, allowing us to run other non-containerized processes on the data, or to move or back up the data as we’d like.
In this tutorial, we’ve shown how to define a shared volume between two containers where the storage volume is initially populated by a local data folder. Similar to the previous tutorial on volume sharing, we also demonstrated that both containers were able to see the contents and changes within the volume.
The capabilities we’ve started to acquire are quite powerful. In the next tutorial, we’ll discuss how processes can be automatically run within a container as part of the Docker Compose build-and-startup process. Until then, thanks again for stopping by. I hope you find my content helpful — if so, please feel free to leave a comment or subscribe!
(Header image: Business background by BiZkettE1)