
Hitesh Pattanayak

Canary Deployment with Kubernetes

In this post, we will learn how to do a canary deployment using Kubernetes. In a canary deployment, a small set of users or requests is directed to a new version of the software while the majority of the traffic is still handled by the old version. This allows us to test new versions in production without risking the entire system.

Advantages of Canary Deployment

Canary deployment is a deployment strategy where a new version of an application is gradually rolled out to a small subset of users or servers before it is released to the entire user base. The advantages of canary deployment include:

  1. Early detection of issues: Canary deployment allows you to test new features or changes on a small scale before rolling them out to your entire user base. This helps you to detect and fix any issues or bugs early on, minimizing the impact on your users.

  2. Reduced risk: With canary deployment, you are reducing the risk of deploying new features or changes by limiting the scope of the rollout. This makes it easier to recover from any issues that may arise.

  3. Better user experience: By gradually rolling out changes to a small subset of users, you can gather feedback and make adjustments before releasing the changes to your entire user base. This ensures a better user experience for your customers.

  4. Improved performance: Canary deployment can improve the performance of your application by allowing you to test and optimize new features or changes on a small scale before rolling them out to your entire user base.

  5. Increased agility: Canary deployment enables you to be more agile in your development process by allowing you to release new features or changes more frequently and with less risk. This can help you to stay ahead of the competition and meet the changing needs of your users.

Steps to demonstrate Canary Deployment

Let’s start by creating two nginx deployments. Both will share the label app=nginx-app, which our Service will select on, and carry version=v1 and version=v2 labels to differentiate them. We will call our first deployment nginx-app-1 and generate its manifest with the following command:

kubectl create deploy nginx-app-1 --image=nginx --replicas=3 --dry-run=client -o yaml > deploy-1.yaml

Next, we will edit the deploy-1.yaml file and add the labels app: nginx-app and version: v1 to the pod template’s labels (spec.template.metadata.labels), so that the Service we create below will select these pods. We will also add an initContainers section with a busybox container that writes a version marker for nginx to serve:

initContainers:
  - name: install
    image: busybox:1.28
    command:
      - /bin/sh
      - -c
      - "echo version-1 > /work-dir/index.html"
    volumeMounts:
      - name: workdir
        mountPath: "/work-dir"
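
The generated manifest does not include the shared volume or the nginx mount that makes the init container’s file actually get served, and kubectl will have generated app: nginx-app-1 labels rather than the ones our Service selects on. Here is a sketch of how the edited deploy-1.yaml could look; the workdir emptyDir volume and the /usr/share/nginx/html mount path are choices made here for illustration, not produced by the dry-run:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-app-1
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-app
      version: v1
  template:
    metadata:
      labels:
        app: nginx-app   # matched by the Service selector
        version: v1      # marks the old version
    spec:
      initContainers:
        - name: install
          image: busybox:1.28
          command:
            - /bin/sh
            - -c
            - "echo version-1 > /work-dir/index.html"
          volumeMounts:
            - name: workdir
              mountPath: "/work-dir"
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
          volumeMounts:
            - name: workdir
              mountPath: /usr/share/nginx/html   # nginx serves the file written by the init container
      volumes:
        - name: workdir
          emptyDir: {}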

Now, let’s create a Service so the deployments can be reached. Its selector matches only the shared app: nginx-app label, so it will route traffic to pods from both versions; this single Service is what makes the gradual traffic split possible:

apiVersion: v1
kind: Service
metadata:
  name: nginx-app-svc
  labels:
    app: nginx-app
spec:
  type: ClusterIP
  ports:
    - name: http
      port: 80
      targetPort: 80
  selector:
    app: nginx-app
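
Assuming the Service manifest above is saved as svc.yaml (a file name chosen here for illustration), we apply both the edited deployment and the Service:

kubectl apply -f deploy-1.yaml
kubectl apply -f svc.yaml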

We can test the deployment by running:

kubectl run -it --rm --restart=Never busybox --image=gcr.io/google-containers/busybox --command -- wget -qO- nginx-app-svc
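
If the shared-volume setup sketched above is in place, this should return version-1. To confirm that the Service has actually picked up the v1 pods, we can also check its endpoints:

kubectl get endpoints nginx-app-svc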

Now, let’s create a similar deployment with a single replica, labelled version=v2; this will be our canary. We will call this deployment nginx-app-2:

kubectl create deploy nginx-app-2 --image=nginx --replicas=1 --dry-run=client -o yaml > deploy-2.yaml

As with the first deployment, we will edit the deploy-2.yaml file so that the pod template carries the labels app: nginx-app and version: v2, and add an init container that writes version-2 into the shared volume, as shown in the sketch below.
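
Here is a sketch of the relevant parts of deploy-2.yaml, mirroring the first deployment (again, the volume and mount path are choices made here for illustration):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-app-2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-app
      version: v2
  template:
    metadata:
      labels:
        app: nginx-app   # same label the Service selects on
        version: v2      # marks the canary
    spec:
      initContainers:
        - name: install
          image: busybox:1.28
          command:
            - /bin/sh
            - -c
            - "echo version-2 > /work-dir/index.html"
          volumeMounts:
            - name: workdir
              mountPath: "/work-dir"
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
          volumeMounts:
            - name: workdir
              mountPath: /usr/share/nginx/html
      volumes:
        - name: workdir
          emptyDir: {}

With deploy-2.yaml in place, we deploy the new version by running: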

kubectl apply -f deploy-2.yaml

We can continuously call the Service to see how traffic is split between the two versions. Because the Service spreads requests across all ready pods, with three v1 replicas and one v2 replica roughly a quarter of the responses should come from the canary. To do this, we can run:

kubectl run -it --rm --restart=Never busybox --image=gcr.io/google-containers/busybox -- /bin/sh -c 'while sleep 1; do wget -qO- nginx-app-svc; done'
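
If we want to shift more traffic to the canary before committing to it, one option is simply to adjust the replica ratio, since the Service splits traffic roughly in proportion to the number of ready pods behind it. For example, to reach an approximately 50/50 split (an optional intermediate step):

kubectl scale --replicas=2 deploy nginx-app-2
kubectl scale --replicas=2 deploy nginx-app-1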

Once we determine that nginx-app-2 is stable and we would like to deprecate nginx-app-1, we can delete the old deployment by running:

kubectl delete deploy nginx-app-1
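
We can verify that only the new version remains behind the Service:

kubectl get deploy
kubectl get pods -l app=nginx-app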

All traffic will then be directed to nginx-app-2. We can also scale nginx-app-2 up to four replicas, matching the total capacity we had before, by running:

kubectl scale --replicas=4 deploy nginx-app-2

To check the traffic, we can run the following command (it curls the Service’s ClusterIP directly, so it must be run from within the cluster network, for example from a node); every response should now be version-2:

while sleep 0.1; do curl $(kubectl get svc nginx-app-svc -o jsonpath="{.spec.clusterIP}"); done

In summary, canary deployment is a powerful tool that can help you to deploy new features or changes with more confidence and less risk. By gradually rolling out changes and gathering feedback, you can improve the user experience, performance, and agility of your application.

Top comments (1)

Sreedhar Bukya

Hello @hiteshrepo, how is this supposed to be a canary deployment, when these are deployed as independent deployments with their own services?

Do you have any better examples showing how you implemented canary deployments in Kubernetes?