
Part 4: Docker—An Introduction to Container Orchestration

This tutorial will focus on Docker’s swarm because it comes installed with Docker and uses the same standard Docker files

In this installment, we are going to look at “container orchestration” for Docker. In the previous installment, we looked at how to run an individual container. However, most applications are a combination of services that are orchestrated together to form a complete application.

While in theory all the pieces of an application could be built into a single container, it is better to split an application into its relevant services and run a separate container for each service. There are several reasons for this, but the biggest one is scalability.

Remember, the containers don’t care whether they all run on the same physical machine or on different machines. By splitting the services into different containers, we can tell them all to run together on the same machine (for instance, our development machine if we want to test the whole app on our local computer). Or we can run each component on its own machine (as we might do when we deploy our app for the world to use). Further, there might be parts of our app that can scale across multiple machines. For instance, we might want only one database but a whole slew of web servers out front hitting that database. By splitting up the app into different containers for each service, we get to choose how our app scales, and we can even scale different parts of our application by different amounts. Docker refers to a group of services that are orchestrated together as a “stack.”

If you are not familiar with building applications by coordinating different basic infrastructure services (databases, caches, etc.), you may also want to see my book Building Scalable PHP Web Applications Using the Cloud. The basic architecture described in this article is comparatively simple; the book provides additional considerations and architectures that can be helpful when building large-scale applications.

Getting Started with Swarm

There are many different tools that we can use for container orchestration. This tutorial will focus on Docker’s swarm because it comes installed with Docker and uses the same standard Docker files that other tools use as well.

To start the swarm, type the command

docker swarm init

This creates a “swarm” with a single host: the machine you are currently using. Note that the output of this command tells you how to add more hosts (i.e., machines) to the current swarm if you want to scale it to multiple machines. Within the swarm you will deploy your orchestration of services as a “stack.”
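
For example, the output of docker swarm init includes a ready-made join command that you can run on another machine to add it to the swarm as a worker. The token and address below are placeholders; use the actual values from your own output:

docker swarm join --token SWMTKN-1-xxxxxxxx 192.168.1.10:2377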

Orchestrating a Single Container

First, we will orchestrate a single container. Although orchestration is not necessary for a single container, this demonstration will introduce you to the tools and files you will need to perform more complex orchestrations. Here, we will deploy the simplified web server from the previous tutorial.

The backbone of any orchestration is the docker-compose.yml file, which uses YAML syntax (if you are unfamiliar with YAML syntax in general, see the official YAML documentation). This file contains the information that Docker will need to build and deploy your orchestration.

Below is an extremely simple docker-compose.yml file:

version: "3"

services:
  my-service:
    image: johnnyb61820/simple-web-server
    ports:
      - "8080:8070"
    deploy:
      mode: replicated
      replicas: 6

This file says that the orchestration consists of a single service, which we have named “my-service”, and that it uses the Docker image johnnyb61820/simple-web-server. It then says that port 8070 on the container should be proxied to port 8080 on the host server. To run this orchestration, go to the directory that contains this file and run

docker stack deploy -c docker-compose.yml mystack

This will create a stack named mystack on your swarm that runs your service. Docker will first create a network for this orchestration to run on. Then, it will go through each of the services (just one, my-service, in this example) and start them up, using the options specified. Additionally, the name of a service can be used to identify that service on the swarm network when we have multiple services to coordinate.
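
When you run the deploy command, Docker prints a line for each piece it creates. The exact wording may vary by Docker version, but the output looks roughly like this (note how the names are derived from the stack name):

Creating network mystack_default
Creating service mystack_my-service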

Note that under “deploy” it says “mode: replicated” and “replicas: 6”. That means Docker will start 6 copies of our web service, and each request to port 8080 on the host will be forwarded to port 8070 on one of the replicas.
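
If you later decide you want a different number of replicas, you do not have to edit the file and redeploy; you can scale a running service directly. The service name combines the stack name and the service name from the file, so, for example:

docker service scale mystack_my-service=3

This would reduce the service to three replica containers.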

If you had more than one host in your swarm, it would spread the replicas across the hosts and forward each incoming request to one of the replica containers, no matter which host that container is on.

So, to see everything in action, you can use the following commands:

docker stack ls

This will list all of the deployed stacks.

docker stack services mystack

This will list all of the services we defined for the mystack stack.

docker service ls

This will list all of the services for all stacks.

docker service inspect NAMEOFSERVICE

This will show the detailed configuration of a given service (use a service name listed by docker service ls).

docker service ps NAMEOFSERVICE

This will show all of the containers running within the given service.

docker network ls

This will list all of the networks that Docker has created to manage both the stack and the general Docker infrastructure.
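
docker service logs NAMEOFSERVICE

This will show the combined log output of all of the containers within the given service (available in recent versions of Docker).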

If you make modifications to your stack, you can apply them by re-running the docker stack deploy command above. Finally, to remove your stack, run docker stack rm mystack.

Coordinating Services

We have done a lot with our simple web server. In the next example, we are going to kick the complexity up a few notches. We will orchestrate an application that has both a database and a PHP app that connects to the database. This will require multiple, coordinated containers. We will have a single database container and a number of replica PHP containers.

Note that each container can reference the other services by service name. Internally, they have what amounts to a “fake” DNS record that converts the service names to internal IP addresses within Docker’s internal container network. They can’t reference the containers specifically, because, for instance, there are multiple PHP app containers. However, referring to the service will give an IP address that connects to the whole cluster of them.
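
Once the stack described below is running, you can see this name resolution in action by opening a shell in one of the PHP containers. Find a container ID with docker ps; the lookup below assumes a Debian-based image (the official PHP images are), which includes the getent utility:

docker exec -it CONTAINERID getent hosts mydbserver

This should print the internal IP address that Docker has assigned to the mydbserver service.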

Therefore, in the docker-compose.yml file, we will name the services, and use those names to link them together.

The docker-compose.yml file is below:

version: "3"
services:
  myphpserver:
    image: myphpimage
    ports:
      - "8080:80"
    deploy:
      mode: replicated
      replicas: 6
  mydbserver:
    image: postgres
    environment:
      POSTGRES_DB: mydata
      POSTGRES_PASSWORD: abc123

This uses two Docker images: a standard image that runs the PostgreSQL database service (the postgres image), and a custom image based on a standard PHP image, which we will build and call myphpimage. Docker maintains a large number of pre-built standard images; you can browse them on Docker Hub.

So, naming the database service mydbserver means that we can reference the database by that name in our PHP code. Additionally, the postgres image can be customized through environment variables, which are set using the environment keyword in the docker-compose.yml file. Here, POSTGRES_DB sets the name of the database on the server, which we will call mydata, and POSTGRES_PASSWORD sets the password for the database. Note that you can also load environment variables from external files by using the env_file key to specify the filename to read them from. More documentation about this image is available on Docker Hub.
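
As a brief sketch of the env_file approach (the filename db.env here is an arbitrary choice for this example), the environment section above could be replaced with:

env_file:
  - ./db.env

where db.env contains:

POSTGRES_DB=mydata
POSTGRES_PASSWORD=abc123

Be aware that support for env_file varies; docker-compose honors it, but some versions of docker stack deploy ignore it, so verify that it works in your setup before relying on it.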

Then, our PHP code will be as follows. We will first have a PHP file to create the database, as shown below (we will call this file create.php):

<?php
 // Connect to the database service, using its Docker service name as the host
 $dbh = new PDO("pgsql:host=mydbserver;port=5432;dbname=mydata;user=postgres;password=abc123");
 // Create the table and insert two sample records
 $dbh->exec("CREATE TABLE mytable (id serial, name text)");
 $dbh->exec("INSERT INTO mytable (name) values ('Bob')");
 $dbh->exec("INSERT INTO mytable (name) values ('Jim')");
 echo "Complete";
?>

This will create the table and then add two records to it. Next, we need to write our actual application code (we will call this file index.php):

<?php
 // Connect to the database service, using its Docker service name as the host
 $dbh = new PDO("pgsql:host=mydbserver;port=5432;dbname=mydata;user=postgres;password=abc123");
 // Look up the record whose id was passed in the query string
 $sth = $dbh->prepare("SELECT name FROM mytable WHERE id = ?");
 $sth->execute(array($_GET["id"]));
 $result = $sth->fetch();
 echo "The name is " . $result["name"];
?>

Note that in both of these, we are using mydbserver as the host to connect to. Again, this hostname will be set up by Docker when we deploy our stack, because that is the name of the database service in the file.

Now, we need a Dockerfile that builds an image which pulls in the standard PHP image, adds the database plugins, and then adds our code. That looks like this:

FROM php:5.6-apache
RUN apt-get update && apt-get install -y libpq-dev && docker-php-ext-install pdo pdo_pgsql
COPY create.php index.php /var/www/html/

The first line is the base Docker image we are using, which is set up for serving PHP files through the Apache web server. The second line installs the software we will need to connect PHP to our database. The third line copies our application code to the correct directory to be picked up by the web server.

The fact that you should copy your PHP files to /var/www/html/ can be found by reading the documentation about the PHP image.

Now, if all these files are in the same directory, then, to deploy this application, you need to run the following commands:

docker build -t myphpimage .

(builds the PHP image from the Dockerfile)

docker stack deploy -c docker-compose.yml mystack

(deploys the stack to the swarm)

Now, if you point your browser to http://localhost:8080/create.php, it will create the database table and sample records for you. Then, you can go to http://localhost:8080/index.php?id=1 and it will show you the name in record #1.
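
You can check the same thing from the command line with curl. Based on the records that create.php inserts, record #1 should be Bob:

curl "http://localhost:8080/index.php?id=1"

This should print: The name is Bob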

Managing Data with Volumes

One problem with the above application is that, because Docker containers are ephemeral, if we were to redeploy the stack (or even stop it and start it), it would potentially wipe out the database and replace it with a brand new one.

To address this problem, Docker also provides a way to permanently retain storage outside of its containers, using “volumes.” A volume is essentially a file folder that exists outside of the container. A volume can be used by a single container or shared among multiple containers.

To create a volume named myvolume, run docker volume create myvolume. Then, to attach it to a new container, use the --mount flag when creating the container. For instance, docker run -it --mount source=myvolume,destination=/a/b/c ubuntu will create an Ubuntu container and then attach (i.e., mount) the volume myvolume to the path /a/b/c on the new container.

Note that if the volume doesn’t exist (i.e., you haven’t created it yet), Docker will create it for you when you run the container, and will copy any data already at that path in the container’s image into the volume before using it.
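
You can confirm which volumes exist, and where Docker stores them on the host, with the following commands:

docker volume ls

docker volume inspect myvolume

The inspect command shows, among other details, the volume’s mount point on the host’s filesystem.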

Volumes can be specified in a docker-compose.yml file, too. To add an external volume to our previous stack, you can use the following file:

version: "3"

services:
  myphpserver:
    image: myphpimage
    ports:
      - "8080:80"
    deploy:
      mode: replicated
      replicas: 6
  mydbserver:
    image: postgres
    volumes:
      - mydbdata:/var/lib/postgresql/data
    environment:
      POSTGRES_DB: mydata
      POSTGRES_PASSWORD: abc123

volumes:
  mydbdata:

At the bottom of the file, it lists the volumes that need to be created. Then, in the definition of mydbserver, we specify that it should use that volume and where in the filesystem it should be mounted (this location comes from the documentation for the image itself). This keeps the data outside of the containers so that, even if we remove and re-add the stack, the data is kept in the mydbdata volume. So, for this application, if you re-deploy it, you will not have to re-run the create.php script.
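
You can verify this persistence for yourself. Remove the stack and redeploy it, then query the application without re-running create.php:

docker stack rm mystack
docker stack deploy -c docker-compose.yml mystack

Once the containers are back up, http://localhost:8080/index.php?id=1 should still return the name from record #1, because the data survived in the mydbdata volume.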

In this tutorial, we took our knowledge of Docker and containers and learned to orchestrate them into complete technology stacks. These stacks can be as simple as the two-tier system shown here, or they can be as involved and complicated as you want to make them. Docker stacks offer many additional features as well, but, hopefully, this introduction gives you a feel for how a stack can be orchestrated and what it looks like.

In the next installment, we will finish by showing how to deploy a stack to third-party infrastructure services so that you can focus on your application and not your server room infrastructure.


Here are the three previous installments in this series:

Part 1: How the Docker revolution will change your programming. Since 2013, Docker (an operating system inside your current operating system) has grown rapidly in popularity. Docker is a “container” system that wraps the application and the operating system into a single bundle that can be easily deployed anywhere. In this series, we are looking under the hood at Docker, an infrastructure management tool that has rapidly grown in popularity over the last decade.

Part 2: A peek under the covers at the new Docker technology. The many advances that enable Docker significantly reduce a system’s overhead. Docker, over and above the basic container technology, also provides a well-defined system of container management.

Part 3: Working with Docker: An Interactive Tutorial. Docker gives development teams more reliable, repeatable, and testable systems, deployed at massive scale with the click of a button. In this installment, we look at the commands needed to start and run Docker, beginning with containers.


Jonathan Bartlett

Senior Fellow, Walter Bradley Center for Natural & Artificial Intelligence
Jonathan Bartlett is a senior software R&D engineer at Specialized Bicycle Components, where he focuses on solving problems that span multiple software teams. Previously he was a senior developer at ITX, where he developed applications for companies across the US. He also offers his time as the Director of The Blyth Institute, focusing on the interplay between mathematics, philosophy, engineering, and science. Jonathan is the author of several textbooks and edited volumes which have been used by universities as diverse as Princeton and DeVry.
