Docker swarm with compose files

In a previous post we showed how you can create compose files to launch multiple dockerized web services at once. We even created a little set of services that were "load balanced" behind a dockerized instance of nginx. In that post, though, everything ran on a single machine, so our "load balanced" services were still constrained by the resources of that one machine, and if it died, all of our services would die with it. Not very useful in production.

Today we will show you how to use Docker swarm in combination with a docker-compose file to ensure that your services are deployed across multiple machines at once.

Let's start by looking at the two web services that we will create in this post. The first one is the 'echo service', a simple Java web app written with Spark that returns whatever string you send it in a POST request.

The code for this service is very simple thanks to Spark.

import static spark.Spark.*;

import java.util.UUID;

public class EchoService {

    public static void main(String[] args) {
        // Generated once at startup, so every running instance gets its own id
        final String randomUUID = UUID.randomUUID().toString();
        port(4568);

        post("/echo", (req, res) -> {
            res.header("Instance-UUID", randomUUID);
            return req.body();
        });
    }
}

We generate a random UUID when the app starts and place it in a response header on every echo response the service returns. This will allow us to see via curl that our repeated requests are actually handled by different instances of our service running within the swarm.

We also set the port to something other than Spark's default, since we will be running two different Spark services and each needs its own host port to bind to. The meat of the service simply sets the response header and then returns the request body.
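As a quick sanity check, assuming an instance of the echo service is reachable on localhost, a curl like this should print the response headers (including the Instance-UUID) and echo the body back:

# -i prints the response headers so we can see the Instance-UUID
curl -i -X POST -d "hello swarm" http://localhost:4568/echo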

The second service will be the 'ping service'. It will respond to POST requests with a 204 as long as it is healthy.


import static spark.Spark.*;

import java.util.UUID;

import spark.Request;
import spark.Response;

public class PingService {

    public static void main(String[] args) {
        // Unique per instance, just like in the echo service
        final String randomUUID = UUID.randomUUID().toString();
        port(4567);

        post("/ping", (Request req, Response res) -> {
            res.header("Instance-UUID", randomUUID);
            res.status(204);
            return "";
        });
    }
}

We generate a random UUID again, for the same purpose as in the echo service, then set the status to 204 and return an empty string. We will build each of these services with maven and use the maven jar and assembly plugins to produce a jar file that we can run with all of its dependencies bundled inside. I won't dive into the details of maven or its plugins in this post, but the code for the project is located here.
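Once the ping service is running, again assuming localhost, you can verify the 204 and the instance header the same way:

# expect "HTTP/1.1 204 No Content" plus an Instance-UUID header and no body
curl -i -X POST http://localhost:4567/ping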

Now that we have some services built, let's dockerize them. Our Dockerfile will be very simple:


# Start from the official OpenJDK 8 base image
FROM openjdk:8

# Copy the assembled jar into the image
COPY pingservice.jar /home/pingservice.jar

# Run the service when the container starts
CMD ["java", "-jar", "/home/pingservice.jar"]

Our base image is simply the official openjdk:8 image. We then copy our jar into the home directory and execute it. Once we have dockerized the app we will need to build the image and then push it to a repository so we can reference it in our docker-compose file.
This is important because when we deploy to our swarm, each machine will have to pull the image from a central repository. The good news is that Docker Hub will let us host free public repositories and even gives us one free private repository.
If having your images available publicly isn't an option for you, then you can sign up for a paid plan on Docker Hub or one of the other major cloud providers. Personally, I am using Google Cloud's container registry.
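One note if you go the Google Cloud route: the docker CLI needs credentials for the registry before it can push. Assuming you have the gcloud CLI installed, this sets up a docker credential helper for the *.gcr.io registries (see the registry links at the end of this post for the details):

# wire docker up to gcloud's credentials for *.gcr.io registries
gcloud auth configure-docker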

I usually write a bash script that builds the service, creates a docker image, and pushes that image to the repository. Here is what that looks like for our ping service:

#!/usr/bin/env bash

# Build the jar from the project one directory up
mvn -f ../ package

# Copy the freshly built jar into the docker build context
cp -fr ../target/pingservice.jar pingservice.jar

# Build the image and tag it for both Docker Hub and Google Cloud's registry
docker build -t abnormallydriven/pingservice:latest \
    -t us.gcr.io/$GCLOUD_DOCKER_REGISTRY/abnormallydriven/pingservice:latest .

# Push the image so every swarm node can pull it
docker push us.gcr.io/$GCLOUD_DOCKER_REGISTRY/abnormallydriven/pingservice

Once we have pushed our docker image to the repository, we can move on to creating our docker-compose file. We will be using the version 3 compose file format, since version 3 is required for working with docker swarm.

version: '3.0'

services:
  ping-service:
    image: us.gcr.io/${GCLOUD_DOCKER_REGISTRY}/abnormallydriven/pingservice:latest
    ports:
      - "0.0.0.0:4567:4567"
    networks:
      - swarm-services
    deploy:
      replicas: 2
      update_config:
        parallelism: 1
        delay: 2s
        monitor: 30s
      restart_policy:
        condition: on-failure
        max_attempts: 3

  echo-service:
    image: us.gcr.io/${GCLOUD_DOCKER_REGISTRY}/abnormallydriven/echoservice:latest
    ports:
      - "0.0.0.0:4568:4568"
    networks:
      - swarm-services
    deploy:
      replicas: 3
      update_config:
        parallelism: 1
        delay: 2s
        monitor: 30s
      restart_policy:
        condition: on-failure
        max_attempts: 3

networks:
  swarm-services:
    driver: overlay

Let's go through this docker-compose file and explain what each piece does to help us launch our services in a swarm.

The first line of the file specifies that this is a version 3 docker-compose file. We need to use version 3 in order to take advantage of the swarm capabilities.

Next, we declare our services. You can see that we have a 'ping-service' and an 'echo-service'. Each declares the location of the image we want to use and the ports it wants to expose to the outside world.
We will be running these services in our own docker network, named 'swarm-services'.
This network is declared at the bottom of the file with the 'overlay' driver. We need to give it the overlay driver so that the network spans every node in our swarm; the default driver would not work in a multi-host swarm setup.
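Later, once the stack is deployed, you can confirm the overlay network exists by running this on any node in the swarm:

docker network ls --filter driver=overlay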

The next part of the yml file is where we get to the heart of how we want our services deployed within the swarm. Here we can state how many instances of the service we want deployed with the replicas property. We give the ping service a replica count of 2, which means that within our swarm there will be two instances of the ping service running, spread across the swarm's nodes.

The update_config property is where we declare how we want the update process to be handled.
First we set parallelism, which tells docker how many replicas to update at a time. We set it to 1 because we want to update just one instance at a time during the update process, but you could set it as high as the replica count for a given service if you wanted. We set a delay of 2 seconds, which tells docker how long to wait before it moves on to updating the next instance (or batch of instances).
Finally we set a monitor time of 30 seconds to tell docker that it should watch each updated container for 30 seconds after the update completes to decide whether or not it failed.

The last property within the deploy section is the restart_policy. This is where we set the condition for restarts as well as how many times to attempt a restart before giving up.
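These deploy settings map onto flags of the docker service command that we will meet later. Just as a rough, hypothetical illustration, the ping service's policy expressed as flags would look something like this:

# the same update and restart policy as docker service update flags
docker service update \
    --update-parallelism 1 \
    --update-delay 2s \
    --update-monitor 30s \
    --restart-condition on-failure \
    --restart-max-attempts 3 \
    <service name>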

Now that we have our docker-compose.yml file in order, let's look at how to set up the docker swarm itself.
First, you'll need to find yourself some machines that you can install docker on. The swarm I set up while writing this post consisted of 4 small linux servers; you can spin a few up easily with a service like Linode, AWS, or Google Cloud. Once the machines are set up, install docker on each one and then choose at least one of them to be a manager node.
Docker swarms consist of manager nodes and worker nodes. Manager nodes have special management tasks that they carry out for the swarm, and they are where you will issue most of your commands from. The swarm I set up had 1 manager node and 3 worker nodes. Initializing the docker swarm is as easy as running this command on the machine you wish to make the first manager node:

docker swarm init --advertise-addr <ip address of this machine>

That command will start up the swarm and then output a second command that you can copy down and execute on each of the other machines you wish to have in your swarm.
It will look something like this:

docker swarm join \
    --token SWMTKN-1-49nj1cmql0jkz5s954yi3oex3nedyz0fb0xx14ie39trti4wxv-8vxv8rssmk743ojnwacrr2e7c \
    192.168.99.100:2377

Don't worry about losing the token that this command produces; you can always get it back by logging into a manager node and running this:

docker swarm join-token worker
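The same command with manager in place of worker prints the token for adding additional manager nodes:

docker swarm join-token manager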

Once you've initialized your swarm and all of the machines you wish to have in it have joined, you are ready to start the services you've defined inside of your docker-compose.yml file.
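As a quick check before deploying, running this on a manager will list every node in the swarm along with its status:

docker node ls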

Log back into your manager node and run this command:

 docker stack deploy --compose-file <path to your compose file> <name for this 'stack' of services>

This will automatically deploy your services to the swarm according to the rules you have laid out in your compose file.
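For example, with the compose file from above in the current directory and a (hypothetical) stack name of swarm-services-demo:

docker stack deploy --compose-file docker-compose.yml swarm-services-demo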
Once it is deployed you can view a list of all the "stacks" you have deployed in your swarm with docker stack ls. You can then inspect a particular stack with docker stack ps <stack name>, which will show you a list of all the service instances running under that stack.
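Continuing with the hypothetical stack name from above:

docker stack ls
docker stack ps swarm-services-demo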

The 'docker stack deploy' command is really just a convenience that launches multiple services for us all at once. We could also create them ourselves manually using the docker service command. You can see that the stack deploy really did just create ordinary services by running this command:

docker service ls

It will list all of the services that are running in your swarm. This will include services you launched independently as well as those deployed as members of a "stack".

You can inspect the details of an individual service with

docker service inspect <service name>
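By default that prints raw JSON; adding the --pretty flag gives a human-readable summary instead:

docker service inspect --pretty <service name>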

Now that we have our services running in the swarm, you might be wondering how to perform an update.
Luckily for us, docker has made that really easy. We have already defined an update policy in our compose file, so all we need to do is change our service's code, run our build script so the changes land in a new docker image that gets pushed to our repository, and then execute the same docker stack deploy command we used to launch our services initially.
Docker will see that these services are already running, notice that the repository has a new image available for them, and proceed to update according to the policy we defined in our compose file.
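In other words, a full rolling update is just the build script from earlier followed by a redeploy (script and stack names hypothetical):

# rebuild and push the new image, then let the swarm roll it out
./build_and_push_pingservice.sh
docker stack deploy --compose-file docker-compose.yml swarm-services-demo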

I want to close with a quick note about swarm's request routing. The swarm has its own internal routing mesh by default, so even if you send a request to a particular node in the swarm, it may not be serviced by that node; the routing mesh may forward it to another node running the same service, and your request will be handled by that instance instead.
In the small swarm I set up for this blog post, none of my services were ever running on the manager node, but I could still point all of my requests at the manager node and have them serviced by instances running elsewhere in the swarm.
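You can see this in action by hitting a single node repeatedly and watching the Instance-UUID header change as different echo replicas answer (replace <manager ip> with one of your node's addresses):

# five requests to the same node, answered by different replicas
for i in 1 2 3 4 5; do
  curl -s -i -X POST -d "hello" http://<manager ip>:4568/echo | grep Instance-UUID
done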

As always, here are a few links that I found helpful when setting up my first set of swarm services.

Pushing and Pulling To Google Cloud Container Registry

Advanced Authentication Methods For Google Cloud Container Registry

Compose file reference

Docker Stack reference

Docker Service reference

Using Compose with Swarm