Run Multi-Container Applications with Docker Compose 🐙

Let's talk about Docker Compose, a powerful tool that simplifies managing and orchestrating multiple Docker containers. Docker Compose allows us to define a multi-container application, with all of its dependencies, in a single file, then spin the whole application up with a single command instead of running every step by hand on the command line.

In this brief tutorial, we'll walk through the process of creating a Docker Compose file, explaining the role of each section and how they work together to define your application. We'll then demonstrate how to use this file to run your containers, making your application accessible on specified ports. By the end of this tutorial, you'll have a solid understanding of Docker Compose and how to use it to simplify the management of your Docker applications.

It's particularly useful in development environments for testing multi-container applications. Docker Compose can also be used in production, but it's typically not the best tool for the job. Docker Compose lacks some of the advanced features needed for running applications at scale, such as rolling updates, service discovery, and orchestration. For production environments, more robust tools like Kubernetes are often used.

I have talked before, in my post Run .NET Application on Docker, about how to build a .NET application as a Docker image, and that image (hardcode-sample) is the one I will be using in this Docker Compose introduction. So if you haven't built an image for an application yet, please do so before implementing the next steps, or just follow the steps in my previous post.

Docker Compose File

The first and most important step is preparing our Docker Compose file. It's a YAML (stands for YAML Ain't Markup Language) file used to specify the services (containers) of an application and their configurations. With a single command, docker-compose up, you can create and start all the services from your configuration. This makes it easy to manage complex applications with multiple interdependent containers.

Now let's see a simple example of a Docker Compose file and understand what it means.

version: '3'

services:
  hardcode-api:
    image: hardcode-sample
    ports:
      - "12334:8080"

Here's a breakdown of what each part of the file does:

version: '3': This specifies the version of the Docker Compose file format. Version 3 is the latest major version of the format and is recommended for most use cases (newer releases of Docker Compose treat this field as optional).

services:: This section defines the services that make up your application. Services are essentially containers that are run in separate environments but can communicate with each other.

hardcode-api:: This is the name of the service. You can choose any name you like, but it should be descriptive of the service's role.

image: hardcode-sample: This specifies the Docker image to use for this service. In this case, it's using an image named hardcode-sample. Docker will use a local image with this name if one exists (like the one we built in the previous post); otherwise it will try to pull it from your configured Docker registries.

ports:: This section maps ports between the host (your machine or VM) and the container. In this case, it's mapping port 12334 on the host to port 8080 in the container.

So, when you run docker-compose up with this file, Docker Compose will start a single service named hardcode-api, using the hardcode-sample image, and it will make the application in the container accessible on port 12334 of the host machine.
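A service block can carry more than just image and ports. As a hedged sketch of a few other commonly used Compose keys (container_name, restart, and environment are standard options; the values shown here are purely illustrative):

```yaml
services:
  hardcode-api:
    image: hardcode-sample
    container_name: hardcode-api    # fixed container name instead of a generated one
    restart: unless-stopped         # restart the container if it crashes
    environment:
      - ASPNETCORE_ENVIRONMENT=Development   # example environment variable
    ports:
      - "12334:8080"
```

None of these extras are required for the walkthrough below; the minimal file above is all we need.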

So, what if I want to run multiple containers from this image (or other images), each listening on a different port? Easy: we just repeat the first example, adding more services and giving each a different name and host port, as follows.

version: '3'

services:
  hardcode-api-1:
    image: hardcode-sample
    ports:
      - "12334:8080"

  hardcode-api-2:
    image: hardcode-sample
    ports:
      - "12335:8080"

  hardcode-api-3:
    image: hardcode-sample
    ports:
      - "12336:8080"

Okay, that looks good! Just save this configuration to a file named docker-compose.yml in the root directory of your solution, beside the Dockerfile we added in the post linked at the beginning of this post. Now let's try to run these containers and see what happens.
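If you prefer scripting this step, the file above can be written and sanity-checked from the shell. A minimal sketch (the grep loop only confirms the expected port mappings landed in the file; once Docker is installed, `docker-compose config` is the proper validator):

```shell
# Write the three-service Compose file from the example above.
cat > docker-compose.yml <<'EOF'
version: '3'

services:
  hardcode-api-1:
    image: hardcode-sample
    ports:
      - "12334:8080"

  hardcode-api-2:
    image: hardcode-sample
    ports:
      - "12335:8080"

  hardcode-api-3:
    image: hardcode-sample
    ports:
      - "12336:8080"
EOF

# Quick sanity check: all three host ports should appear in the file.
for port in 12334 12335 12336; do
  grep -q "\"${port}:8080\"" docker-compose.yml && echo "port ${port} mapped"
done
```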

Running the Containers

Just like I said above, you only need one simple command to get your containers running:

docker-compose up
Simply open your terminal, navigate to the directory where the docker-compose.yml file lives, and run this command. Compose will create the three containers and start streaming their startup logs.

This means your containers are up and running, and you will keep receiving logs from the application instances — which instance logs depends on the port you use when communicating with them. So let's try to communicate with one.
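For reference, the day-to-day lifecycle around `docker-compose up` looks roughly like this (a sketch; the guard makes the script a harmless no-op on machines where Docker isn't available):

```shell
# Run the stack in the background, inspect it, and tear it down when done.
if command -v docker-compose >/dev/null 2>&1 && docker info >/dev/null 2>&1; then
  docker_ready=yes
  docker-compose up -d                 # start all services detached
  docker-compose ps                    # list services and their port mappings
  docker-compose logs hardcode-api-1   # view one service's logs
  docker-compose down                  # stop and remove the containers
else
  docker_ready=no
  echo "Docker is not available here; run these commands where Docker is installed."
fi
```

Running detached with `-d` keeps your terminal free; `docker-compose logs` brings the output back whenever you want it.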

Communicating With the Containers

Okay, so in the previous post I showed you that I have a simple API in my application that I use to check whether the application is functioning well inside the container. I will just curl a request to that same API on the three different containers and see what I get.
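The check can be scripted as a small loop. A sketch — `/api/status` is a placeholder route; substitute whichever endpoint your application from the previous post actually exposes:

```shell
# Hit each container in turn through its mapped host port.
checked=0
for port in 12334 12335 12336; do
  reply=$(curl -s --max-time 2 "http://localhost:${port}/api/status") || reply="no response"
  echo "port ${port}: ${reply}"
  checked=$((checked + 1))
done
```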


Perfect! It seems that once again my application managed to stay alive inside its containers; hopefully yours did too. As I said, this is a very brief introduction to Docker Compose, and there are other features worth exploring, but hopefully it gets you on the right track to start experimenting with containers on your local machine.

So, what about a more complex tool for orchestration? I kept mentioning that Docker Compose is typically used for development environments, so what if we want to play with the big guns? This is where Kubernetes (k8s) shines, and I will definitely show you in later posts how to experiment with that great tool. Until then, make sure your head is wrapped around the container concept and that you've played around enough with Docker Compose.
