31.1.2024 | Technology

5 ways you can improve your Docker Compose deployments

Docker Compose is a powerful tool for running your containers. Here are a few tips & tricks on how you can improve your productivity.

Some background

Here at Kvanttori we prefer to self-host the majority of our internal services. We settled on Docker Compose for quick and painless deployment and migration.

Wait, what about Kubernetes?

While Kubernetes is great for public-facing services that require zero downtime and need to scale, it is often overkill for smaller projects, for which a single node may well be more than enough.

Thinking about our use case, we just want to run a handful of services with at most 100 people using them at the same time. A single 10€ compute node at Hetzner, for example, would be more than enough for our needs, while also being simple to manage and migrate.

As for uptime, we really only use our internal services during standard working hours. This gives us the freedom to perform maintenance during weekends and evenings, which is why we don't really need high availability to ensure 99.999% uptime.

Tip #1: Backups, backups, backups

Backups are the heart of every good deployment: you should always be prepared for data corruption and node failures. The 3-2-1 backup principle is a good rule of thumb for ensuring you are backing up data correctly. It states that:

  • You should always keep at least 3 copies of your data
  • You should store your copies on at least 2 different types of storage media
  • At least 1 of your copies should be off-site

How we do it

One simple and reliable way to tick off all three of these is to use an external service provider, such as AWS S3, to store your backups. You could also use any other object storage provider such as Wasabi, or even self-host one yourself with something like MinIO.

Our preferred, automated solution for this is the open-source docker-volume-backup container, which automatically backs up Docker volumes to our object storage, while also ensuring data consistency by shutting down containers while they are being backed up.

[Image: GitHub contributors of the offen/docker-volume-backup repository]

Using docker-volume-backup

Using docker-volume-backup is very straightforward: you just add it to your compose file and fill in the credentials for your storage provider. Here is an example of how you could back up a service that uses two separate volumes to store data:

services:
    ### Rest of your docker-compose.yml ###
    backup:
        # In production, it is advised to pin your image tag to a proper
        # release version instead of using `latest`.
        image: offen/docker-volume-backup:[current-version]
        restart: always
        environment:
            # The number of days you want to keep your backups;
            # you may omit this if you want to keep your backups
            # indefinitely
            BACKUP_RETENTION_DAYS: 30
            AWS_S3_BUCKET_NAME: [aws-bucket-name]
            AWS_ACCESS_KEY_ID: [aws-user-key-id]
            AWS_SECRET_ACCESS_KEY: [aws-user-secret-key]
        volumes:
            # Here we bind our volumes into the /backup directory of the
            # docker-volume-backup container, with read-only access to
            # ensure no harm can be done to the original files
            - [docker-volume-1]:/backup/[docker-volume-1]:ro
            - [docker-volume-2]:/backup/[docker-volume-2]:ro
            # Mounting the Docker socket allows the tool to stop and
            # restart containers during backup. You can omit this if you
            # don't want containers to be stopped.
            - /var/run/docker.sock:/var/run/docker.sock:ro
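
Note that the container shutdown mentioned above is opt-in per service. At the time of writing, docker-volume-backup stops only the containers that carry its stop-during-backup label, so a sketch of a service whose volume should be backed up consistently could look like this (the service name and volume are placeholders):

services:
    database:
        image: mongo:latest
        volumes:
            - [docker-volume-1]:/data/db
        labels:
            # Tell docker-volume-backup to stop this container while
            # its volume is being backed up
            - docker-volume-backup.stop-during-backup=true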

Tip #2: Docker-based networking

One of the superpowers of Docker is the ability to automatically create internal networks between containers and connect them simply by using service names. Here is an example of how one could connect a service to a Mongo instance purely through Docker networks, without opening ports on the host machine:

version: '3'
services:
    your-service:
        image: your-service-image:latest
        depends_on:
            - mongo-db
    mongo-db:
        restart: always
        image: mongo:latest

After this, you can reach the mongo-db container from the your-service container with the connection string mongodb://mongo-db:27017.
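
A common follow-up is to pass that connection string to the application through the environment instead of hard-coding it. Here is a minimal sketch; the variable name MONGO_URL and the database name mydb are assumptions for illustration, so use whatever your application actually reads:

version: '3'
services:
    your-service:
        image: your-service-image:latest
        depends_on:
            - mongo-db
        environment:
            # Hypothetical variable name; your app decides what it reads
            MONGO_URL: mongodb://mongo-db:27017/mydb
    mongo-db:
        restart: always
        image: mongo:latest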

Why this works

Assuming your app is in a directory called "my-app", when you run docker compose up, the following happens:

  • A network called my-app_default is created.
  • A container is created using your-service’s configuration. It joins the network my-app_default under the name your-service.
  • A container is created using mongo-db’s configuration. It joins the network my-app_default under the name mongo-db.

If you are interested in reading more about networking in Docker Compose, you can refer to their documentation here for a deeper understanding of how it works.
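
If you want more control than the auto-created my-app_default network gives you, you can also declare networks explicitly. A minimal sketch, using a made-up network name backend:

version: '3'
services:
    your-service:
        image: your-service-image:latest
        networks:
            - backend
    mongo-db:
        image: mongo:latest
        networks:
            - backend
networks:
    # Compose will create this network as my-app_backend
    backend: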

Tip #3: Mounting network volumes directly in Docker

Often when you run containers, especially ones processing data, you might want to access data from a network resource. Your first idea might be to mount the volume on your host and then bind-mount the mounted directory into your container. There is, however, a simpler and more elegant way to do this: mounting the network volumes directly in Docker.

There are two major network file protocols Docker supports for volume mounting: NFS and CIFS/Samba.

NFS

Here's how one could host a Nextcloud instance that stores its files on a separate NAS running at 192.168.0.2 with an exported directory at /nextcloud/nextcloud_nextcloud:

services:
    app:
        ports:
            - 8085:80
        image: nextcloud:apache
        restart: always
        volumes:
            - nextcloud:/var/www/html
volumes:
    nextcloud:
        driver: local
        driver_opts:
            type: nfs
            o: nfsvers=4,addr=192.168.0.2,rw
            device: ':/nextcloud/nextcloud_nextcloud'

Samba/CIFS

Here's the same scenario, but with a Samba/CIFS share instead:

services:
    app:
        ports:
            - 8085:80
        image: nextcloud:apache
        restart: always
        volumes:
            - nextcloud:/var/www/html
volumes:
    nextcloud:
        driver: local
        driver_opts:
            type: cifs
            o: addr=192.168.0.2,username=xxxxx,password=xxxxx,file_mode=0777,dir_mode=0777
            device: '://192.168.0.2/nextcloud/nextcloud_nextcloud'
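
Also worth noting: you probably don't want plain-text share credentials in a compose file you commit to version control. Compose can interpolate ${VARIABLE} values from the shell environment or an .env file; in the sketch below, SMB_USERNAME and SMB_PASSWORD are variable names we made up for the example:

volumes:
    nextcloud:
        driver: local
        driver_opts:
            type: cifs
            # Credentials are read from the environment or an .env file
            o: addr=192.168.0.2,username=${SMB_USERNAME},password=${SMB_PASSWORD},file_mode=0777,dir_mode=0777
            device: '://192.168.0.2/nextcloud/nextcloud_nextcloud'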

Tip #4: Automatically updating your images with Watchtower

[Image: GitHub contributors of the containrrr/watchtower repository]

One problem many people come across when developing their applications is how to deploy them automatically. Docker containers are a good way to run your applications, but you would have to set up a webhook or pipeline process that first copies the content you want onto the server and then restarts the containers. Setting all of this up by yourself can be daunting and time-consuming. Luckily, there is a better way.

Enter Watchtower

At its core, Watchtower works in quite a simple manner: it checks whether an updated version of an image is available, and if so, it recreates the container with the new image. While this isn't really the best idea for external images, it is great for images you build yourself:

version: '3'
services:
    watchtower:
        image: containrrr/watchtower
        environment:
            # Makes Watchtower update only the containers we explicitly
            # opt in via the enable label
            WATCHTOWER_LABEL_ENABLE: true
            # Checks for new updates every half an hour
            WATCHTOWER_POLL_INTERVAL: 1800
        volumes:
            - /var/run/docker.sock:/var/run/docker.sock

Next we configure our service, remembering to add the label that enables Watchtower updates:

version: '3'
services:
    yourservice:
        image: yourservice:latest
        labels:
            - 'com.centurylinklabs.watchtower.enable=true'

That's it! From here on out, Watchtower will periodically check for new versions of your image and recreate the container when one is detected.

Git setup

Now that we have Watchtower listening for changes to our image, we want to build and push it automatically when the code updates. GitHub has a great tutorial on how to do just that using their actions, you can find it here. Similar solutions exist for pretty much all other CI/CD providers, since building and pushing images to registries is so commonplace.
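
To give an idea of the moving parts, here is a minimal sketch of such a GitHub Actions workflow. The secret names (DOCKERHUB_USERNAME, DOCKERHUB_TOKEN) and the image tag are placeholders for your own registry setup:

name: Build and push image
on:
    push:
        branches: [main]
jobs:
    build:
        runs-on: ubuntu-latest
        steps:
            - uses: actions/checkout@v4
            # Log in to the registry your server pulls from
            - uses: docker/login-action@v3
              with:
                  username: ${{ secrets.DOCKERHUB_USERNAME }}
                  password: ${{ secrets.DOCKERHUB_TOKEN }}
            # Build the image and push it under the tag Watchtower watches
            - uses: docker/build-push-action@v5
              with:
                  push: true
                  tags: ${{ secrets.DOCKERHUB_USERNAME }}/yourservice:latest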

Tip #5: Manage all routing through Traefik

[Image: GitHub contributors of the traefik/traefik repository]

Traefik is a container-first reverse proxy, which makes it easy to route traffic to your containers securely. With Traefik you can define the routing configuration for each service straight in its docker compose file. This, combined with automated certificate management and renewal, makes Traefik an excellent addition to many deployments.

Setting up the Traefik container

The first step in setting up Traefik is to spin up the root container. Below is an example, taken from the Traefik getting started docs, of how to set up your root container. Please note that this is just a minimal Traefik setup. In production you would probably want to secure the web UI, add HTTPS redirection, and add automated SSL certificate generation.

version: '3'

services:
    reverse-proxy:
        # The official v2 Traefik docker image
        image: traefik:v2.10
        # Enables the web UI and tells Traefik to listen to docker
        command: --api.insecure=true --providers.docker
        ports:
            # The HTTP port
            - '80:80'
            # The Web UI (enabled by --api.insecure=true)
            - '8080:8080'
        volumes:
            # So that Traefik can listen to the Docker events
            - /var/run/docker.sock:/var/run/docker.sock
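
To illustrate what those production additions could look like, here is a sketch that extends the command section with Traefik's built-in Let's Encrypt (ACME) support; the resolver name le, the e-mail address, and the storage path are values you would replace with your own:

version: '3'

services:
    reverse-proxy:
        image: traefik:v2.10
        command:
            - --providers.docker
            # Listen for HTTPS traffic on port 443
            - --entrypoints.websecure.address=:443
            # Issue certificates via Let's Encrypt's TLS challenge
            - --certificatesresolvers.le.acme.tlschallenge=true
            - --certificatesresolvers.le.acme.email=you@example.com
            - --certificatesresolvers.le.acme.storage=/letsencrypt/acme.json
        ports:
            - '443:443'
        volumes:
            # Persist issued certificates across restarts
            - ./letsencrypt:/letsencrypt
            - /var/run/docker.sock:/var/run/docker.sock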

Setting up a service container

After we have our Traefik container running, our next step is to add services it can route traffic to. Continuing with the getting started example given by Traefik, we can use the whoami container as a simple example of how one might configure an HTTP service:

version: '3'

services:
    whoami:
        # A container that exposes an API to show its IP address
        image: traefik/whoami
        labels:
            - 'traefik.http.routers.whoami.rule=Host(`whoami.docker.localhost`)'
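
With both containers running, you can verify the route by sending a request whose Host header matches the rule, for example curl -H 'Host: whoami.docker.localhost' http://127.0.0.1; if everything works, Traefik forwards the request and the whoami container answers with its details.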

Wrapping it up

By implementing these five strategies, you can take your Docker Compose deployments to the next level and build robust, scalable, and efficient containerized applications with ease.

Written by Patrik Larsen
