Run Multi-Container Applications with Kubernetes ☸️

After we explored a simpler approach to running multiple containers in Run Multi-Container Applications with Docker Compose, let's try a more advanced approach with wider options and definitely more steps to implement.

Kubernetes is an open-source platform designed to automate deploying, scaling, and managing containerized applications. It groups containers into logical units for easy management and discovery, making it a key player in the world of DevOps, CI/CD, and cloud-native applications.

It's very important to note that this simple example is aimed at software engineers who aren't responsible for DevOps or CI/CD. Rather, it's for the engineers who develop the applications, so they get a glimpse of how their applications are generally handled after they get containerized and understand a bit about how their work gets deployed.

I would also like to clarify that in order for us to run Kubernetes (k8s for short, because there are 8 letters between the K and the S) we need two things. The first is to install kubectl, which will allow us to run k8s commands from the command line, and the second, and most important, is a k8s cluster.
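Once kubectl is installed, a quick sanity check is to print the client version from your terminal (this assumes kubectl is already on your PATH):

kubectl version --client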

Kubernetes Cluster

A Kubernetes cluster is a set of node machines for running containerized applications. A cluster is composed of at least one cluster master and multiple worker machines called nodes. These master and node machines run the Kubernetes cluster orchestration system.

Here's a brief description of the components:

Master: The master coordinates the cluster. It's responsible for maintaining the desired state (like which applications are running and which container images they use), scheduling applications, and implementing changes to the cluster. For high availability, a cluster can have more than one master.

Nodes: Nodes are the worker machines that run applications. Each node runs at least a Kubelet, which is the agent responsible for communication between the master and the node. It also runs the container runtime, like Docker, to handle the actual container operations.

Pods: On the nodes, Kubernetes runs your applications inside units called pods. Each pod encapsulates an application container (or a group of tightly coupled containers), storage resources, a unique network IP, and options that govern how the container(s) should run.

A Kubernetes cluster provides a platform for deploying and managing containers at scale, with features for service discovery, load balancing, secret and configuration management, and rolling updates.

It's important to note that k8s involves a lot of terminology that you must be familiar with before attempting a hands-on example, and I won't be going into any more detail, as that's beyond the scope of this post.

Having said that, let's start with the following steps, where we will be using Docker Desktop as our single-node cluster. So let's see how to enable k8s in Docker Desktop.

Simply open Settings in Docker Desktop, click on the Kubernetes tab on the left, check the Enable Kubernetes checkbox, and finally click Apply & Restart.

After doing that, it should take a couple of minutes to download the assets and images needed to run the k8s cluster.
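To confirm the cluster is actually up, you can point kubectl at Docker Desktop's context and list the nodes; you should see a single node reporting a Ready status:

kubectl config use-context docker-desktop
kubectl get nodes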

Now that our cluster is set up, let's see how to prepare our YAML files which will help us run our containers.

Kubernetes Deployment

To run your application on k8s, you would need to create a Kubernetes Deployment or a Pod.

Here's an example of a Kubernetes Deployment configuration that would run one replica of our container:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hardcode-api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hardcode-api
  template:
    metadata:
      labels:
        app: hardcode-api
    spec:
      containers:
      - name: hardcode-api
        image: hardcode-sample
        imagePullPolicy: Never
        ports:
        - containerPort: 8080

In this file:

apiVersion: apps/v1 specifies the version of the Kubernetes API to use.
kind: Deployment specifies that we're creating a Deployment.
metadata: name: hardcode-api sets the name of the Deployment.
spec: replicas: 1 specifies that we want 1 replica of our container.
spec: selector: matchLabels: app: hardcode-api specifies the label that Kubernetes should use to select Pods for this Deployment.
spec: template: specifies the template for the Pods in this Deployment.
spec: template: spec: containers: specifies the containers to run in each Pod. In this case, we're running one container from the hardcode-sample image, it's listening on port 8080, and imagePullPolicy is set to Never to prevent the image from being pulled from an online registry, because it's a local image.

To apply this configuration, save it to a file (for example, k8s-deployment.yml), then use the kubectl apply command:

kubectl apply -f k8s-deployment.yml

This command will create the Deployment on your Kubernetes cluster.
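To verify that the rollout went through, you can check the Deployment's status and list it along with its ready replica count:

kubectl rollout status deployment/hardcode-api
kubectl get deployments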

Please note that this configuration doesn't map the container port to a port on your host machine. In Kubernetes, you would typically use a Service to make your application accessible from outside the cluster. The type of Service you need (for example, NodePort or LoadBalancer, possibly combined with an Ingress) depends on your specific requirements and the environment where your cluster is running.

For local development, we can create a Service file, which we can also apply through kubectl, so let's create one so we can reach our container.

Kubernetes Service

In the Service file, we're creating a NodePort Service that exposes port 8080 of the Pods on port 30001 of the single node in your cluster. Since targetPort isn't specified, it defaults to the same value as port (8080), which matches our containerPort. The Service selects Pods based on the app: hardcode-api label, which matches the label on the Pods created by the Deployment.

apiVersion: v1
kind: Service
metadata:
  name: hardcode-api-service
spec:
  type: NodePort
  ports:
    - port: 8080
      nodePort: 30001
      protocol: TCP
  selector:
    app: hardcode-api

That should do the trick. Now let's apply our Service using this simple command, after saving the file under the name k8s-service.yml in the same directory as our k8s-deployment.yml.

kubectl apply -f k8s-service.yml
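You can confirm the Service was created, and see the ports it exposes, with:

kubectl get services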

Perfect! Now that you have applied your Service and Deployment, let's test all of this and see if our Pod is running fine.

Communicating With the Container

First of all, we need to know if our Pod is up and running, and also check its status, so we use the following command:

kubectl get pods

and the response should look something like this.
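Illustrative output (the generated name suffix and the age will differ on your machine):

NAME                            READY   STATUS    RESTARTS   AGE
hardcode-api-6d4cf56db6-x7k2p   1/1     Running   0          2m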


Great! It seems like our Pod is running. Let's try to hit an API that exists in my application.
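With the NodePort Service above, the application is reachable on localhost at port 30001. The endpoint path below is just a hypothetical placeholder; substitute whatever route your API actually exposes:

curl http://localhost:30001/api/hello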


Perfect! Seems like my application can survive in any container it gets crammed in.

An important note to highlight: if you open Docker Desktop, you might see multiple containers running even though you specified only one in your Deployment file. That's okay, because k8s runs an infrastructure (pause) container alongside your Pod to hold its network namespace. As long as you see the expected number of Pods when you use the get pods command, you're in the clear and nothing is out of the ordinary.

You might also notice that when you try to stop the container from Docker Desktop, it will keep coming back from the dead. That's because your Deployment is still applied and keeps recreating the Pods it needs. So, if you want to terminate all running Pods, simply run the following command:

kubectl scale deployment hardcode-api --replicas=0
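Alternatively, if you want to remove the Deployment and the Service entirely rather than just scaling down to zero replicas, you can delete them using the same files you applied:

kubectl delete -f k8s-deployment.yml -f k8s-service.yml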
In conclusion, using k8s is pretty straightforward when it's just about one service or one Pod, and Docker Desktop is there to provide many of the dependencies. It gets more challenging when you're managing multiple containers with different ports, and because k8s allows you to really fine-tune your deployment, you can easily get lost in the details. But for a software engineer whose main focus is development, not DevOps or CI/CD, you don't need to dig that deep; you just need to understand the basics and how things generally work, and get your hands dirty every once in a while.
