Docker in 2020: Did Someone Say Isolation? (Part II)

Categories: Docker

In Part I, we ended with a basic WordPress stack composed of a WordPress web server container and a database container. In Part II, we’ll move this towards a more production-ready setup. Specifically, we’ll be focusing on proxy configuration, HTTPS support, and internal/external networks. For reference, here is the compose file we left off with:

version: '3.1'

services:

  wordpress:
    image: wordpress
    restart: always
    ports:
      - 8080:80
    environment:
      WORDPRESS_DB_HOST: db
      WORDPRESS_DB_USER: exampleuser
      WORDPRESS_DB_PASSWORD: examplepass
      WORDPRESS_DB_NAME: exampledb
    volumes:
      - wordpress:/var/www/html

  db:
    image: mysql:5.7
    restart: always
    environment:
      MYSQL_DATABASE: exampledb
      MYSQL_USER: exampleuser
      MYSQL_PASSWORD: examplepass
      MYSQL_RANDOM_ROOT_PASSWORD: '1'
    volumes:
      - db:/var/lib/mysql

volumes:
  wordpress:
  db:

If we look at the compose file, specifically the ports configuration on the WordPress container, we’ll see 8080:80. This maps port 8080 on the host machine to port 80 inside that particular container. We can choose any host port we like, so long as it isn’t already in use. If we wanted to deploy another WordPress stack, we would have to expose it on a different port, something like 8081:80. This is not what we’re looking for, and this is where a reverse proxy comes in. At a high level, the reverse proxy listens externally on port 80 and proxies requests to the different web containers based on the domain name. This way, we can host as many sites as we’d like, and no ports are directly exposed on those containers.
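
To make the port clash concrete, a hypothetical second WordPress service on the same host could not reuse 8080; its ports section would have to look something like this (the service name here is just a placeholder):

  wordpress2:
    image: wordpress
    ports:
      - 8081:80   # host port 8081 -> container port 80 (8080 is already taken by the first stack)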

A first thought might be to run a reverse proxy on the Docker host itself. For example, we could set up Apache or NGINX on the Docker host and then configure reverse proxy entries pointing to each web server container. Although completely doable, an easier approach is to run the reverse proxy inside a Docker container! There are numerous images available on Docker Hub designed specifically for this use case. The one we’ll look at today is nginx-proxy.

From the official documentation, “nginx-proxy sets up a container running nginx and docker-gen. docker-gen generates reverse proxy configs for nginx and reloads nginx when containers are started and stopped.” In a nutshell, nginx-proxy handles all of the proxy configuration. When a new container comes online, the nginx-proxy container is alerted and automatically creates a route to the new stack based on a VIRTUAL_HOST environment variable set on the new container. Here’s an example showing how to link our example WordPress stack to the nginx-proxy container. To keep it simple, I have included the nginx-proxy container in the WordPress stack. In a production environment, nginx-proxy should be configured in its own stack, which we’ll see a little later on.

version: '3.1'

services:

  nginx-proxy:
    image: jwilder/nginx-proxy
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro

  wordpress:
    image: wordpress
    restart: always
    environment:
      WORDPRESS_DB_HOST: db
      WORDPRESS_DB_USER: exampleuser
      WORDPRESS_DB_PASSWORD: examplepass
      WORDPRESS_DB_NAME: exampledb
      VIRTUAL_HOST: wordpress.example.com
    volumes:
      - wordpress:/var/www/html

  db:
    image: mysql:5.7
    restart: always
    environment:
      MYSQL_DATABASE: exampledb
      MYSQL_USER: exampleuser
      MYSQL_PASSWORD: examplepass
      MYSQL_RANDOM_ROOT_PASSWORD: '1'
    volumes:
      - db:/var/lib/mysql

volumes:
  wordpress:
  db:
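
Before adding anything else, it’s worth a quick sanity check that the routing works. nginx-proxy routes on the requested host name, so even before DNS points at this machine we can test from the Docker host itself by faking the Host header (wordpress.example.com is just the placeholder value from the compose file):

# Bring up the stack
docker-compose up -d

# Ask nginx-proxy for the WordPress site by name, without relying on DNS
curl -H "Host: wordpress.example.com" http://localhost/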

Comparing this to the compose file we began with, the only differences are moving the exposed port to the nginx-proxy container and defining a VIRTUAL_HOST environment variable on the WordPress container. When the nginx-proxy container starts up, it looks for all VIRTUAL_HOST environment variables and creates an nginx server entry for each one, routing incoming traffic based on the requested host name. This is really all there is to it for a basic reverse proxy configuration. If we wanted to proxy a service that listens on a nonstandard port (say a Node.js app on port 8080), we would just need to set a VIRTUAL_PORT environment variable of 8080 in addition to the VIRTUAL_HOST variable.
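
As a sketch, such a service definition might look like the following (the image name and host name are placeholders, not part of our stack):

  nodeapp:
    image: example/node-app      # placeholder image listening on port 8080
    environment:
      VIRTUAL_HOST: app.example.com
      VIRTUAL_PORT: 8080         # tells nginx-proxy which container port to forward to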

The last element we’re missing is support for HTTPS. This is where the letsencrypt-nginx-proxy-companion container comes into play. This container integrates directly with the nginx-proxy container and automatically handles SSL certificates through Let’s Encrypt. Here is a full-fledged nginx-proxy configuration with SSL support. The biggest thing to note here is the use of a dedicated web-servers network. By default, each Docker stack gets its own isolated network, unable to communicate with outside containers. Here, we need to define an external Docker network called web-servers that isn’t managed by any particular stack. This is done by running docker network create web-servers (shown after the compose file below) and then attaching the different stacks to this network.

version: '3.1'

services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    container_name: nginx-proxy
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - conf:/etc/nginx/conf.d
      - vhost:/etc/nginx/vhost.d
      - html:/usr/share/nginx/html
      - dhparam:/etc/nginx/dhparam
      - certs:/etc/nginx/certs:ro
      - /var/run/docker.sock:/tmp/docker.sock:ro
    networks:
      - web-servers

  letsencrypt:
    image: jrcs/letsencrypt-nginx-proxy-companion
    container_name: nginx-proxy-le
    environment:
      # Compose file format 3.x has no volumes_from, so the shared volumes are listed
      # explicitly and the companion is pointed at the proxy container by name.
      NGINX_PROXY_CONTAINER: nginx-proxy
    volumes:
      - certs:/etc/nginx/certs:rw
      - vhost:/etc/nginx/vhost.d
      - html:/usr/share/nginx/html
      - acme:/etc/acme.sh
      - /var/run/docker.sock:/var/run/docker.sock:ro
    networks:
      - web-servers

networks:
  web-servers:
    external: true

volumes:
  conf:
  vhost:
  html:
  dhparam:
  certs:
  acme:
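
Since the web-servers network is external, it has to exist before this stack is started. Bringing everything up looks roughly like this (run from the directory containing the compose file above):

# Create the shared external network (one-time step)
docker network create web-servers

# Start the reverse proxy stack
docker-compose up -d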

After the nginx-proxy stack is up and running, we can bring up additional stacks serving sites. Below is our example WordPress stack. Not much has changed from before. There is a new environment variable on the wp container called LETSENCRYPT_HOST. This is the domain name that the letsencrypt container uses for the certificate.

Additionally, the db container has been given its own private network shared with the wp container. This adds an additional layer of security, as the db container now has no direct route to the internet. Another benefit is that additional wp containers (in other stacks) will resolve ‘db’ to the correct container IP address. Since the web-servers network is external, all containers on that network share a DNS namespace; if all the db containers sat on the same network, the wp containers would almost certainly try connecting to the wrong ones since they all share the same name. Note that we can get away with multiple WordPress containers having the same name, ‘wp’, because nginx-proxy routes directly to the IP addresses of the WordPress containers. In other words, no name resolution is done when routing web traffic to and from the WordPress containers.

version: '3.1'

services:

  wp:
    image: wordpress
    environment:
      WORDPRESS_DB_HOST: db
      WORDPRESS_DB_USER: exampleuser
      WORDPRESS_DB_PASSWORD: examplepass
      WORDPRESS_DB_NAME: exampledb
      VIRTUAL_HOST: wordpress.example.com
      LETSENCRYPT_HOST: wordpress.example.com
    volumes:
      - wordpress:/var/www/html
    networks:
      - web-servers
      - default

  db:
    image: mysql:5.7
    environment:
      MYSQL_RANDOM_ROOT_PASSWORD: '1'
      MYSQL_DATABASE: exampledb
      MYSQL_USER: exampleuser
      MYSQL_PASSWORD: examplepass
    volumes:
      - db_data:/var/lib/mysql
    networks:
      - default

networks:
  default:
    internal: true
  web-servers:
    external: true

volumes:
  db_data:
  wordpress:
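
Bringing a site online is now just another docker-compose up, and hosting more sites is a matter of copying this file with different VIRTUAL_HOST/LETSENCRYPT_HOST values and running it as a separate project. A rough sketch (the project names and the second site’s directory are placeholders, and the generated proxy config normally lands in /etc/nginx/conf.d/default.conf):

# First site
docker-compose -p site1 up -d

# A second site: same layout, its own host names, run as its own project
docker-compose -p site2 -f site2/docker-compose.yml up -d

# Optional: inspect the reverse proxy config that docker-gen generated
docker exec nginx-proxy cat /etc/nginx/conf.d/default.conf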

To summarize, we have an nginx-proxy stack that is responsible for routing all traffic to and from the different stacks. When a new container is launched (whether solo or in a stack), the nginx-proxy stack looks for two environment variables, VIRTUAL_HOST and LETSENCRYPT_HOST. If these are found, the nginx-proxy stack automatically registers a reverse proxy entry for the new stack and attempts to obtain an SSL certificate for it. In Part III, we’ll take a look at Docker Swarm and show how to add fault tolerance.
