
1/Deleting old pods and replacing them with new ones:

– If you have a ReplicationController managing a set of v1 pods, you can easily replace them by modifying the pod template so it refers to version v2 of the image and then deleting the old pod instances. The ReplicationController will notice that no pods match its label selector and will spin up new instances running the new image version.

– This is the simplest way to update a set of pods, if you can accept the short downtime between the moment the old pods are deleted and the new ones are started (a command sketch follows below).
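A minimal sketch of this approach, assuming a hypothetical ReplicationController named my-rc whose pods carry the label app=my-app and whose container is named my-container:

# kubectl patch rc my-rc -p '{"spec":{"template":{"spec":{"containers":[{"name":"my-container","image":"my-image:v2"}]}}}}'

# kubectl delete pod -l app=my-app

The patch only changes the pod template, so it doesn't touch the running pods; deleting them is what makes the ReplicationController recreate them from the updated template.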

2/Spinning up new pods and then deleting the old ones:

– If you don’t want to see any downtime and your app supports running multiple versions at once, you can turn the process around and first spin up all the new pods and only then delete the old ones. This will require more hardware resources, because you’ll have double the number of pods running at the same time for a short while.

*Switching from the old to the new version at once:

– Pods are usually fronted by a Service. Deploy a new ReplicaSet running the new version (for a short while the number of running pods doubles, since both the old and the new version are running). After that, change the Service’s label selector so the Service switches over to the new pods. This is called a blue-green deployment. After switching over, and once you’re sure the new version functions correctly, you’re free to delete the old pods by deleting the old ReplicaSet.

Example: Switching the frontend pods from an old ReplicaSet (nginx image) to a new one (Apache image)

– Deploy a ReplicaSet using the nginx image for the pods:

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: nginx-rs
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx-fe
        image: nginx
        ports:
        - name: http-port
          containerPort: 80

– Deploy a Service to expose these pods:

apiVersion: v1
kind: Service
metadata:
  name: fe-service
spec:
  selector:
    app: nginx
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
    nodePort: 32660

– Access the pods through the Service’s NodePort, for example:
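A quick check, with <node-ip> as a placeholder for the IP of any cluster node:

# curl http://<node-ip>:32660

This should return the nginx welcome page, since the Service currently selects the app: nginx pods.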

– Now deploy a new ReplicaSet using the new image for the frontend pods (Apache) with a different label:

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: httpd-rs
spec:
  replicas: 2
  selector:
    matchLabels:
      app: apache
  template:
    metadata:
      labels:
        app: apache
    spec:
      containers:
      - name: apache-fe
        image: httpd
        ports:
        - name: http-port
          containerPort: 80

– Edit the selector on the Service that exposes the pods to clients with the kubectl edit service command (a non-interactive sketch follows after these steps).

– Accessing the Service’s NodePort will now send traffic to the new pods.

– Delete the old ReplicaSet.
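One way to script these three steps, using kubectl patch as a non-interactive alternative to kubectl edit (<node-ip> is again a placeholder):

# kubectl patch service fe-service -p '{"spec":{"selector":{"app":"apache"}}}'

# curl http://<node-ip>:32660

# kubectl delete rs nginx-rs

The curl should now return Apache’s “It works!” page instead of the nginx welcome page, confirming the switch before the old ReplicaSet is deleted.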

*Performing an automatic rolling update with a ReplicationController:

– The kubectl rolling-update command directly replaces an old ReplicationController with a new one running a new image version (note that this command has since been deprecated and removed from recent kubectl versions):

# kubectl rolling-update <old-rc> <new-rc> --image=<new-image>

– When you run the command, the new ReplicationController is created immediately with replicas=0. kubectl then starts replacing pods by scaling the new controller up by one and scaling the old ReplicationController down by one, repeating until the new controller owns all the replicas. Finally, kubectl deletes the original ReplicationController and the update process is finished.

– Because kubectl performs the rolling update directly (the update process runs on the client instead of on the server), losing network connectivity while kubectl is performing the update would interrupt the process mid-way, leaving pods and ReplicationControllers in an intermediate state. Another reason why performing an update like this isn’t as good as it could be is that it’s imperative rather than declarative.

3/Using Deployments for updating apps declaratively:

– A Deployment is a higher-level resource meant for deploying applications and updating them declaratively, instead of doing it through a ReplicationController or a ReplicaSet, which are both considered lower-level concepts.

– When you create a Deployment, a ReplicaSet resource is created underneath. When using a Deployment, the actual pods are created and managed by the Deployment’s ReplicaSets, not by the Deployment directly. A Deployment creates multiple ReplicaSets, one for each version of the pod template.

*Updating a Deployment by triggering a rolling update:

– The default strategy is to perform a rolling update (the strategy is called RollingUpdate). The alternative is the Recreate strategy, which deletes all the old pods at once and then creates new ones.

– The Recreate strategy causes all old pods to be deleted before the new ones are created. Use this strategy when your application doesn’t support running multiple versions in parallel and requires the old version to be stopped completely before the new one is started.

– The RollingUpdate strategy, on the other hand, removes old pods one by one while adding new ones at the same time, keeping the application available throughout the whole process. Both strategies are configured in the Deployment spec, as sketched below.
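A minimal sketch of the strategy stanza for each variant; the maxSurge and maxUnavailable values shown here are Kubernetes’ defaults (25%):

spec:
  strategy:
    type: Recreate

or, for a rolling update:

spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%

maxSurge caps how many extra pods may exist above the desired replica count during the update, and maxUnavailable caps how many pods may be unavailable at the same time.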

– To trigger the actual rollout, change the image used in the Deployment resource. Instead of editing the whole YAML of the Deployment object or using the patch command to change the image, you’ll use the kubectl set image command.

Example: Create a Deployment using the nginx image and perform a rolling update to the Apache image

– Create the Deployment resource (which creates a ReplicaSet underneath) with the nginx image, plus a Service to expose it:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 2
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: fe-container
        image: nginx
        ports:
        - name: http-port
          containerPort: 80

apiVersion: v1
kind: Service
metadata:
  name: fe-service
spec:
  selector:
    app: frontend
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
    nodePort: 32660

– Execute the rolling update by using the kubectl set image command to change the image version:
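A sketch of that command, using the names from the Deployment above (httpd is the Apache web server image on Docker Hub):

# kubectl set image deployment/frontend fe-container=httpd

# kubectl rollout status deployment/frontend

kubectl rollout status lets you watch the rolling update progress until all replicas have been replaced with pods running the new image.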
