In this guide, you will learn everything you need to know about Docker Compose and how you can use it to run multi-container applications.
As applications grow larger over time, managing them in a simple and reliable way becomes harder. That is where Docker Compose comes into play. Docker Compose lets developers describe an application's services in a YAML configuration file, which can then be started with a single command.
This guide will show you all the important Docker Compose commands and the structure of the configuration file. It can also be used to look up the available commands and options later on.
Why care about Docker Compose
Before we get into the technical details, let's discuss why a programmer should care about Docker Compose in the first place. Here are some reasons why developers should consider adopting it in their work.
Docker Compose lets you bring up a complete development environment with only one command, docker-compose up, and tear it down just as easily using docker-compose down. This lets developers keep the development environment in one central place and makes applications easy to deploy.
Another great feature of Compose is its support for running unit and E2E tests in a quick and repeatable fashion by putting them in their own isolated environments. Instead of testing the application on your local/host OS, you can run an environment that closely resembles the production circumstances.
Multiple isolated environments on a single host:
Compose uses project names to isolate environments from each other which brings the following benefits:
You can run multiple copies of the same environment on one machine
It prevents different projects and services from interfering with each other
Common use cases
Now that you know why Compose is useful and where it can improve the workflow of us developers let's take a look at some common use cases.
Single host deployments:
Compose was traditionally focused on development and testing, but it can now also be used to deploy and manage a whole set of containers on a single host system.
Compose provides the ability to run your applications in an isolated environment that can run on any machine with Docker installed. This makes it very easy to test your application and provides a way to work as close to the production environment as possible.
The Compose file manages all the dependencies (databases, queues, caches, etc) of the application and can create every container using a single command.
Automated testing environments:
An important part of continuous integration, and of the development process as a whole, is the automated testing suite, which requires an environment in which the tests can be executed. Compose provides a convenient way to create and destroy isolated testing environments that are close to your production environment.
Installing Docker Compose
Compose can be run on nearly any operating system and is very easy to install, so let's get into it.
Windows and Mac:
Compose is included in the Docker Desktop installations for Windows and Mac and doesn't have to be installed separately. The installation instructions can be found here:
On Linux, first download the docker-compose binary from the project's GitHub releases page into /usr/local/bin. Then you just need to apply the executable permissions to it:
sudo chmod +x /usr/local/bin/docker-compose
After that, you can check your installation with the following command:

docker-compose --version
Structure of the Compose file
Compose allows developers to easily handle multiple Docker containers at once by applying rules which are declared in a docker-compose.yml file.
It consists of multiple layers that are nested using indentation (YAML requires spaces; tabs are not allowed) instead of the braces we know from most programming languages. There are four main top-level keywords almost every Compose file uses: version, services, volumes, and networks.
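As a concrete example, a Compose file for a WordPress site backed by MySQL might look like the following sketch (the image tags, passwords, and port mapping are illustrative assumptions):

```yaml
version: '3'

services:
  db:
    image: mysql:5.7
    volumes:
      - db_data:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: somewordpress
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: wordpress

  wordpress:
    depends_on:
      - db
    image: wordpress:latest
    ports:
      - "8000:80"
    environment:
      WORDPRESS_DB_HOST: db:3306
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD: wordpress

volumes:
  db_data:
```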
As you can see, this file contains a whole WordPress application including the MySQL database. Each of these services is treated as a separate container that can be swapped in and out when you need it.
Now that we know the basic structure of a Compose file let's continue by looking at the important concepts.
Concepts / Keywords
The core aspects of the Compose file are its concepts which allow it to manage and create a network of containers. In this section, we will explore these concepts in detail and take a look at how we can use them to customize our Compose configuration.
The services tag contains all the containers defined in the Compose file and acts as their parent tag.
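For example, a minimal service definition with an explicit start command might look like this (the service name, image, and command are assumptions):

```yaml
services:
  website:
    image: node:lts-alpine
    ports:
      - "8080:8080"
    command: npm run start
```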
Here we create a service for a website and add the starting command using the command tag. This command will be executed after the container has started and will then start the website.
For more information about CMD, RUN, and Entrypoint you can read this article which discusses the details and compares their functionality.
Volumes are Docker's preferred way of persisting data that is generated and used by Docker containers. They are completely managed by Docker and can be used to share data between containers and the host system.
They do not increase the size of the containers using them, and their contents exist independently of the lifecycle of the given container.
There are multiple types of volumes you can use in Docker. They can all be defined using the volumes keyword but have some minor differences which we will talk about now.
The normal way to use volumes is to just specify a container path and let the Engine create an anonymous volume for it. This can be done like this (the path is an example):

volumes:
  # Just specify a path and let the Engine create a volume
  - /var/lib/mysql
You can also define absolute path mappings for your volumes by specifying the path on the host system and mapping it to a container path using the : operator.
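For example (the host and container paths are illustrative):

```yaml
volumes:
  - /opt/data:/var/lib/mysql
```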
Here you define the path of the host system followed by the path of the container.
Another type of volume is the named volume, which is similar to the other volumes but has its own specific name that makes it easier to reuse across multiple containers. That's why named volumes are often used to share data between multiple containers and services.
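Named volumes are declared under the top-level volumes key and referenced by name inside the services that use them (the names and image here are illustrative):

```yaml
services:
  db:
    image: mysql:5.7
    volumes:
      - db_data:/var/lib/mysql

volumes:
  db_data:
```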
Dependencies in Docker Compose are used to make sure that a specific service is available before the dependent container starts. This is often used when one service can't work without another, e.g. a CMS (Content Management System) without its database.
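A sketch of such a setup with Ghost and MySQL (the image tags, port, and password are illustrative assumptions):

```yaml
services:
  ghost:
    image: ghost:latest
    ports:
      - "2368:2368"
    depends_on:
      - mysql

  mysql:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: example
```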
Here we have a simple example of a Ghost CMS which depends on the MySQL database to work and therefore uses the depends_on keyword. The depends_on keyword takes an array of strings defining the names of the services the container depends on.
Environment variables are used to bring configuration data into your applications. This is often the case if you have some configurations that are dependent on the host operating system or some other variable things that can change.
There are multiple ways of passing environment variables into our Compose file, which we will explore here:
Setting an environment variable:
You can set environment variables in a container using the environment keyword, just like with the --env flag of docker container run in the shell.
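For example (the service, image, and variable value are illustrative):

```yaml
services:
  web:
    image: node:lts-alpine
    environment:
      - NODE_ENV=production
```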
In this example, we set an environment variable by providing a key and the value for that key.
Passing an environment variable:
You can pass environment variables from your shell straight to a container by just defining an environment key in your Compose file and not giving it a value.
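For example, listing the variable without a value forwards it from the shell (the service and image are illustrative):

```yaml
services:
  web:
    image: node:lts-alpine
    environment:
      - NODE_ENV
```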
Here the value of NODE_ENV is taken from the value from the same variable in the shell which runs the Compose file.
Using an .env file:
Sometimes a few environment variables aren't enough and managing them in the Compose file can get pretty messy. That is what .env files are for. They contain all the environment variables for your container and can be added using one line in your Compose file.
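The file is then wired in with the env_file keyword; a minimal sketch (the service, image, and the .env contents shown in the comment are illustrative):

```yaml
# contents of the .env file next to docker-compose.yml:
#   NODE_ENV=production
#   API_KEY=example

services:
  web:
    image: node:lts-alpine
    env_file:
      - .env
```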
Networks define the communication rules between containers, and between containers and the host system. They can be configured to provide complete isolation for containers, which enables building applications that work together securely.
By default, Compose sets up a single network for your application. Each container automatically joins this default network, which makes it reachable by the other containers on the network and discoverable under a hostname identical to its service name in the Compose file.
Specify custom networks:
Instead of only using the default network, you can also specify your own networks within the top-level networks key, allowing you to create more complex topologies and to specify network drivers and options.
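For example, two custom networks can separate a proxy from the database (the service names, images, and driver choice are illustrative):

```yaml
services:
  proxy:
    image: nginx:alpine
    networks:
      - frontend
  app:
    image: node:lts-alpine
    networks:
      - frontend
      - backend
  db:
    image: mysql:5.7
    networks:
      - backend

networks:
  frontend:
  backend:
    driver: bridge
```

Here proxy and db cannot reach each other directly because they share no network, while app can talk to both.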
You may also define extra aliases for your containers that services can use to communicate with each other. Services on the same network can already reach one another; links only define additional names under which a container can be reached.
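For example (the service names and images are illustrative):

```yaml
services:
  web:
    image: node:lts-alpine
    links:
      - "db:database"
  db:
    image: mysql:5.7
```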
In this example, the web container can reach the database using one of the two hostnames (db or database).
All the functionality of Docker Compose is exposed through its built-in CLI, which offers a set of commands very similar to those of Docker.
build Build or rebuild services
help Get help on a command
kill Kill containers
logs View output from containers
port Print the public port for a port binding
ps List containers
pull Pull service images
rm Remove stopped containers
run Run a one-off command
scale Set number of containers for a service
start Start services
stop Stop services
restart Restart services
up Create and start containers
down Stop and remove containers
They are not only similar but also behave like their Docker counterparts. The only difference is that they affect the entire multi-container architecture which is defined in the docker-compose.yml file instead of a single container.
Some Docker commands are no longer available and have been replaced by commands that make more sense in the context of a completely multi-container setup.
The most important new commands include the following:
Using Multiple Docker Compose Files
The use of multiple Docker Compose files allows you to change your application for different environments (e.g. staging, dev, and production) and helps you run admin tasks or tests against your application.
Docker Compose reads two files by default: a docker-compose.yml file and an optional docker-compose.override.yml file. The docker-compose.override.yml file can be used to override settings of existing services or to define new services.
To use multiple override files, or an override file with a different name, you pass each file to the docker-compose command with its own -f option. The base Compose file has to be passed first:

docker-compose -f docker-compose.yml -f override.yml -f override2.yml up
When you use multiple configuration files, you need to make sure that all paths are relative to the base Compose file, which is the first file specified with the -f flag.
Now let's look at an example of what can be done using this technique.
# original service (e.g. in docker-compose.yml; "web" is an example name)
web:
  command: npm run dev

# overriding service (e.g. in docker-compose.override.yml)
web:
  command: npm run start
Here you override the old run command with the new one which starts your website in production instead of dev mode.
When you use multiple values on options like ports, expose, dns, and tmpfs, Docker Compose concatenates the values instead of overriding them, which is shown in the following example.
# original service ("web" and the ports are example values)
web:
  ports:
    - "5000:5000"

# overriding service
web:
  ports:
    - "8080:8080"

# result: the container publishes both 5000:5000 and 8080:8080
Compose in production
Docker Compose allows for easy deployment because you can deploy your whole configuration on a single server. If you want to scale your app, you can run it on a Swarm cluster.
There are still things you probably need to change before deploying your app configuration to production. These changes include:
Binding different ports to the host
Specifying a restart policy like restart: always to avoid downtime of your container
Adding extra services such as a logger
Removing any unneeded volume bindings for application code
After you have taken these steps, you can deploy your changes using the following commands:

docker-compose build
docker-compose up --no-deps -d

This first rebuilds the images of the services defined in the Compose file and then recreates the services.
Now that we have gone through the theory of Compose let's see some of the magic we just talked about in action. For that, we are going to build a simple Node.js application with a Vue.js frontend which we will deploy using the tools we learned about earlier.
Let's get started by cloning the repository with the finished Todo list application so we can directly jump into the Docker part.
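The repository's backend Dockerfile is walked through step by step below; a sketch consistent with that walkthrough looks like this (the base-image tag and the copying of the application source are assumptions):

```dockerfile
# Define the base image
FROM node:lts-alpine

# Set the working directory and copy the local package.json into the container
WORKDIR /usr/src/app
COPY package*.json ./

# Install the needed dependencies and copy the application source
RUN npm install
COPY . .

# Document the port the server listens on
EXPOSE 3000

# Start the express server after container startup
CMD [ "node", "server.js" ]
```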
All right, let's understand what's going on here by walking through the code:
First, we define the base image using the FROM keyword
Then we set the directory we are going to work in and copy our local package.json file into the container
After that, we install the needed dependencies from the package.json file and expose port 3000 (note that EXPOSE only documents the port; it is actually published to the host via the Compose file or the -p flag)
The CMD keyword lets you define the command which will be executed after the container startup. In this case, we use it to start our express server using the node server.js command.
Now that we have finished the Dockerfile of the backend, let's complete the same process for the frontend.
# Base image and working directory (restored from context; the image tag is an assumption)
FROM node:lts-alpine
WORKDIR /app

RUN npm install -g http-server
COPY package*.json ./
COPY .env ./
RUN npm install
COPY . .
RUN npm run build
CMD [ "http-server", "dist" ]
This file is similar to the last one but installs an HTTP server which serves the static site we get when building a Vue.js application. I will not go into further detail about this script because it isn't in the scope of this tutorial.
With the Dockerfiles in place, we can go ahead and write the docker-compose.yml file we learned so much about.
First, we define the version of our Compose file (in this case version 3)
After that, we start defining the services we need for the project to work.
The nodejs service uses the backend Dockerfile we created above and publishes port 3000 to the host machine. The service also depends on the mongo service, which means the database is started before the service itself.
Next, we define a basic MongoDB service which uses the default image provided on DockerHub.
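Based on the walkthrough above, the project's docker-compose.yml might look like this sketch (the directory layout and the frontend port are assumptions):

```yaml
version: '3'

services:
  nodejs:
    build: ./backend        # backend Dockerfile from above
    ports:
      - "3000:3000"
    depends_on:
      - mongo               # start the database first

  frontend:
    build: ./frontend       # frontend Dockerfile serving the Vue build
    ports:
      - "8080:8080"

  mongo:
    image: mongo            # default image from DockerHub
```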