Run Multi-Container Applications with Kubernetes ☸️
After exploring a simpler approach to running multiple containers in Run Multi-Container Applications with Docker Compose, let's try a more advanced approach that offers wider options, though it definitely takes more steps to implement.
Kubernetes is an open-source platform designed to automate deploying, scaling, and managing containerized applications. It groups containers into logical units for easy management and discovery, making it a key player in the world of DevOps, CI/CD, and cloud-native applications.
It's very important to note that this simple example is aimed only at software engineers who aren't responsible for DevOps or CI/CD. Rather, it's for the engineers who develop the applications, so that they get a glimpse of how their applications are generally handled after being containerized and understand a bit about how their work gets deployed.
I'd also like to clarify that in order to run Kubernetes (k8s for short, because there are 8 letters between the K and the S) we need two things. The first is to install kubectl, which lets us run k8s commands from the command line, and the second, and most important, is a k8s cluster.
Kubernetes Cluster
A Kubernetes cluster is a set of node machines for running containerized applications. A cluster comprises at least one cluster master and multiple worker machines called nodes. These master and node machines run the Kubernetes cluster orchestration system.
Here's a brief description of the components:
• Master: The master coordinates the cluster. It's responsible for maintaining the desired state (like which applications are running and which container images they use), scheduling applications, and implementing changes to the cluster. For high availability, a cluster can have more than one master.
• Nodes: Nodes are the worker machines that run applications. Each node runs at least a Kubelet, which is the agent responsible for communication between the master and the node. It also runs the container runtime, like Docker, to handle the actual container operations.
• Pods: On the nodes, Kubernetes runs your applications inside units called pods. Each pod encapsulates an application container (or a group of tightly coupled containers), storage resources, a unique network IP, and options that govern how the container(s) should run.
A Kubernetes cluster provides a platform for deploying and managing containers at scale, with features for service discovery, load balancing, secret and configuration management, and rolling updates.
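To make the Pod concept concrete, here is a minimal, hypothetical Pod manifest — the name and image below are placeholders for illustration, not part of the example application we'll deploy later:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sample-pod            # hypothetical name for illustration
spec:
  containers:
    - name: sample-container
      image: nginx:1.25       # any container image would work here
      ports:
        - containerPort: 80   # port the container listens on
```

In practice you rarely create bare Pods like this; instead you describe them inside a higher-level object such as a Deployment, which is exactly what we'll do next.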
It's important to note that k8s involves a lot of terminology that you should be familiar with before attempting a hands-on example, and I won't be going into any more detail, as that is beyond the scope of this post.
Having said that, let's start with the following steps, where we'll use Docker Desktop as our single-node cluster. First, let's see how to enable k8s in Docker Desktop.
Simply open Settings in Docker Desktop, click the Kubernetes tab on the left, check the Enable Kubernetes checkbox, and finally click Apply & Restart.
After doing that, it should take a couple of minutes to download the assets and images needed to run the k8s cluster.
Now that our cluster is set up, let's see how to prepare our YAML files which will help us run our containers.
Kubernetes Deployment
To run your application on k8s, you would need to create a Kubernetes Deployment or a Pod.
Here's an example of a Kubernetes Deployment configuration that would run a single replica of our container:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hardcode-api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hardcode-api
  template:
    metadata:
      labels:
        app: hardcode-api
    spec:
      containers:
        - name: hardcode-api
          image: hardcode-sample
          imagePullPolicy: Never
          ports:
            - containerPort: 8080
Save this configuration to a file named k8s-deployment.yml and apply it with the following command:
kubectl apply -f k8s-deployment.yml
We also need a Service to expose our Deployment outside the cluster. Save the following configuration to its own YAML file and apply it with kubectl apply, just like the Deployment:
apiVersion: v1
kind: Service
metadata:
  name: hardcode-api-service
spec:
  type: NodePort
  ports:
    - port: 8080
      nodePort: 30001
      protocol: TCP
  selector:
    app: hardcode-api
Perfect! Now that you've applied both your Service and your Deployment, let's test all of this and see if our Pod is running fine. Since the Service exposes nodePort 30001, the API should be reachable at http://localhost:30001.
Communicating With the Container
To list the pods in the cluster and confirm ours is in the Running state, run:
kubectl get pods
Once you're done experimenting, you can stop the application by scaling the Deployment down to zero replicas:
kubectl scale deployment hardcode-api --replicas=0
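As an alternative to kubectl scale, you can also change the replicas field in the Deployment manifest itself and re-apply it. For example, assuming the same k8s-deployment.yml from earlier, this sketch would run two copies of the container:

```yaml
# k8s-deployment.yml (excerpt) — only the replicas value changes
spec:
  replicas: 2   # was 1; k8s will start a second pod to match this
```

Then re-run kubectl apply -f k8s-deployment.yml, and Kubernetes will reconcile the cluster to match the desired state declared in the file.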