A Docker deployment workflow

This post explains the Docker-image-based deployment workflow used for this site. Everything is triggered by a push to the master branch of this site’s repository, and the result is a stateless, runnable image.

Some background

The site is created by hugo, a very fast static site generator. The generated files are built into a Docker image which uses nginx to serve them to the web.

To make the containers from these images available to the general web browsing public, the production server is running nginx as a reverse proxy alongside docker-gen. docker-gen listens to Docker events and writes a configuration file for nginx before reloading it. Each time a container is started (or stopped), the reverse proxy is updated with the IP address and port to the container. You can read more about this here, and here if you are using a Docker Swarm cluster.
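That pairing can be sketched with docker-compose. The image names, template path and notify target below are assumptions for illustration, not the exact configuration running on this server:

```
nginx:
  image: nginx
  container_name: nginx
  ports:
    - 80:80
  volumes:
    - /etc/nginx/conf.d
dockergen:
  image: jwilder/docker-gen
  volumes_from:
    - nginx
  volumes:
    - /var/run/docker.sock:/tmp/docker.sock:ro
    - ./nginx.tmpl:/etc/docker-gen/templates/nginx.tmpl:ro
  command: -notify-sighup nginx -watch /etc/docker-gen/templates/nginx.tmpl /etc/nginx/conf.d/default.conf
```

docker-gen watches the Docker event stream through the mounted socket, rewrites default.conf from the template whenever a container starts or stops, and sends nginx a SIGHUP so it reloads the configuration without dropping connections.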

The CI/CD service being used is Shippable, but any other CI/CD service could be used instead. Shippable has built-in support for building Docker images and pushing them to the Docker hub, but this post describes building the image on the server where it is used.

Stack overview

  • Docker
  • hugo (for site generation)
  • docker-gen (as Docker image)
  • nginx reverse proxy (as Docker image)
  • nginx with HTML files from hugo (as Docker image)
  • Shippable (CI/CD)
  • docker-compose (orchestration)
  • bash script (image/container management)

Development

The repository contains everything required to do a deployment. It has two directories at its root: one to separate out build information & scripts (build) and the other for the actual site code (hugo). The hugo directory contains a standard hugo site setup.

before.no
├── build
│   ├── dev
│   │   └── docker-compose.yml
│   └── prod
│       ├── deploy.sh
│       ├── docker
│       │   ├── default
│       │   ├── Dockerfile
│       │   └── html
│       └── docker-compose.yml
├── hugo
└── shippable.yml

build/dev/

Ideally I would want my development and production images to be as identical as possible, but in this case it isn’t necessary. It is quick and easy to spin up a production image in a development environment if the need ever arises.

hugo comes with a built-in webserver which will watch the filesystem for changes and auto-update my web browser after every file save. docker-compose is used to spin up a development container.

build/dev $ cat docker-compose.yml 
hugo:
  image: justadam/hugo:0.13
  volumes:
    - ../../hugo:/content
  ports:
    - 1313:1313
  command: /usr/bin/hugo server --watch --baseUrl=http://before.dev --source=/content --destination=/hugo --buildDrafts 
build/dev $ docker-compose up -d

And then I can start work.

build/dev $ cd ../../hugo/content/post
hugo/content/post $ vim post-name.md

Once the work is finished, committed and merged into the master branch, it will be in production within a few minutes.

Deployment

A commit into the repository triggers a hook at Shippable which starts my CI/CD process:

  • Build HTML files from markdown (hugo)
  • Send files (in build/prod) to the Docker image build server (image building and pushing could instead be done on your CI server)
  • Run deploy.sh script (build image, start container, stop container, cleanup)

CI/CD

shippable.yml contains the following instructions;

install:
  - sudo curl -Ls https://github.com/spf13/hugo/releases/download/v0.13/hugo_0.13_linux_amd64.tar.gz | tar xzf -
  - sudo mv hugo_0.13_linux_amd64/hugo_0.13_linux_amd64 /usr/local/bin/hugo
  - sudo chmod +x /usr/local/bin/hugo 
  - sudo rm -rf hugo_0.13_linux_amd64

script:
  - hugo -s hugo/ -d html/
  - rm -rf build/prod/docker/html && mv hugo/html build/prod/docker

after_success:
  - rsync -az --delete build/prod/* build@before.no:~/before.no
  - ssh build@before.no 'bash -s' < build/prod/deploy.sh

build/prod/docker/ contains a Dockerfile which expects the folder html/ to contain all the web content to serve. Otherwise nginx is running the default configuration, but this can of course be customized for your image. It is the nginx reverse proxy and docker-gen which allow access to the container from the outside world.
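Under the hood, docker-gen renders that nginx configuration from a Go template over the running containers. A rough sketch of such a template, grouping containers by the VHOST variable used here (illustrative only, not the actual template in use):

```
{{ range $host, $containers := groupBy $ "Env.VHOST" }}
upstream {{ $host }} {
{{ range $container := $containers }}
{{ $address := index $container.Addresses 0 }}
    server {{ $address.IP }}:{{ $address.Port }};
{{ end }}
}
server {
    listen 80;
    server_name {{ $host }};
    location / {
        proxy_pass http://{{ $host }};
    }
}
{{ end }}
```

Every container started with a VHOST environment variable gets an upstream and a server block, which is how the nginx container in the diagram below becomes reachable as www.before.no.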

High-level it looks something like this:

                    Incoming requests           
                   XXXXXXXXXXXXXXXXXXX          
                              X  X  X           
                              X  X  X           
                              X  X  X           
       +----------------------X--X--X----------+
+------+ Docker host          X  X  X          |
|      |                      X  X  X          |
|      |  +-----------+     +-X--X--X---+      |
|      |  |           |     | nginx     |      |
|      |  |docker-gen +-----+ reverse   |      |
+---------+           |     | proxy     |      |
       |  +-----------+     +-----+-----+      |
       |                          |            |
       |                          |            |
       |                          |            |
       |                          |            |
       |        +-----------------+-------+    |
       |        |  nginx                  |    |
       |        |  -e VHOST=www.before.no |    |
       |        +-------------------------+    |
       |                                       |
       +---------------------------------------+

Image building

This job could be completed on the CI server, but as not all CI services support this, I will demonstrate the concept by building the image on the production server instead. Docker’s image layer caching makes this task go very quickly.

  • All relevant files are transferred to the server (rsync)
  • Build image
  • Tag image with branch/build/commit ID
  • Push image to Docker hub or private repository
  • Start container from new image
  • Stop and remove old container
  • Remove old image
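The tagging step could look something like this. The image name and the branch/build/commit values are assumptions for illustration; on a CI service such as Shippable they would come from the build environment rather than being hard-coded:

```shell
#!/usr/bin/env bash
# Hypothetical build metadata; a CI service would supply real values.
BRANCH="master"
BUILD_NUMBER="42"
COMMIT="a1b2c3d4e5f6"

# Compose a unique, traceable tag from branch, build number and short commit ID.
TAG="justadam/beforeno:${BRANCH}-${BUILD_NUMBER}-${COMMIT:0:7}"
echo "$TAG"

# With a Docker daemon available, the build and push steps would then be:
#   docker build -t "$TAG" ./docker
#   docker push "$TAG"
```

A tag like this makes rollbacks a matter of starting a container from the previous tag, since every deployed build remains addressable in the registry.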

Some of these steps will differ depending on your particular workflow, how quickly you want to be able to roll back updates, and how much history you want to keep.

I am running a tiny site on a single Docker host, so I have decided not to push my images to a registry or keep any old images or containers on the server. If I were running a much bigger and more important site, perhaps on a Docker cluster, then I would push my images to a registry and keep an old image and container or two available for quick rollbacks.

deploy.sh

I use docker-compose for orchestration, but it doesn’t have “start the new container, then stop the old one” functionality built in, so I have improvised a little with the help of a bash script.

  • Find the ID of the current image being used
  • Find the ID of the current container in use
  • Build new image
  • Start new container from the image
  • Stop old container and remove the old container
  • Remove the old image
#!/usr/bin/env bash
OLD_IMAGE=$(docker images | grep beforeno | awk '{print $3}')
CURRENT_CONTAINER=$(docker ps | grep beforeno | awk '{print $1}')
cd ~/before.no
docker-compose build
docker-compose run -d beforeno
docker stop "$CURRENT_CONTAINER" && docker rm "$CURRENT_CONTAINER"
docker rmi "$OLD_IMAGE"

docker-compose.yml looks like this:

beforeno:
  build: ./docker
  restart: always
  hostname: before.no
  domainname: www.before.no
  environment:
   - VHOST=www.before.no
   - DLE_TOKEN=3ea8a1ae-2d0c-4e27-8034-9c97e13ee0f0

And the Dockerfile used to build the image to serve this site:

FROM ubuntu:14.04

MAINTAINER JustAdam <adambell7@gmail.com>

RUN apt-get update && \
    apt-get upgrade -y && \
    apt-get clean && \
    DEBIAN_FRONTEND=noninteractive apt-get install -y nginx

# Log to stdout/stderr
RUN sed -i -e "/access_log/i log_format site_combined '\$host> \$http_x_forwarded_for \$remote_addr - \$remote_user [\$time_local] \"\$request\" \$status \$body_bytes_sent \"\$http_referer\" \"\$http_user_agent\"';" /etc/nginx/nginx.conf
RUN sed -i -e "s/\/var\/log\/nginx\/access.log/\/dev\/stdout site_combined/g" /etc/nginx/nginx.conf
RUN sed -i -e "s/\/var\/log\/nginx\/error.log/\/dev\/stderr/g" /etc/nginx/nginx.conf
RUN echo "\ndaemon off;" >> /etc/nginx/nginx.conf

ADD html /usr/share/nginx/html 
RUN chown -R www-data:www-data /usr/share/nginx/html

EXPOSE 80

CMD ["nginx"]
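The sed lines above rewrite the stock nginx.conf so that logging goes to the container’s stdout/stderr and nginx stays in the foreground. Reconstructed from those commands (not copied from a running container), the relevant parts of the resulting nginx.conf look roughly like this:

```
log_format site_combined '$host> $http_x_forwarded_for $remote_addr - $remote_user [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent"';
access_log /dev/stdout site_combined;
error_log /dev/stderr;
daemon off;
```

Logging to stdout/stderr lets docker logs (or any log collector attached to the Docker daemon) capture the output, and daemon off; keeps nginx as PID 1 so the container stays alive.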

And it all works, as evidenced by the fact that this post is available for reading.

Adam Bell-Hanssen

maybe, someday .. just another code and ops guy

Oslo, Norway