Kubernetes PODs getting recreated?
One day I was working with my local Kubernetes cluster, which I had set up using Kubespray, and I was trying to deploy my Docker container inside it. Since it was a local Kubernetes cluster, I had done a couple of Kubernetes deployments before, and after each deployment I deleted the Kubernetes POD, Deployment, and Service.
Eventually, after repeating the same process 3-4 times, I noticed one very strange behavior in the Kubernetes cluster: I would delete the POD, but it would get recreated after a minute or so. As a first step, I looked into the POD logs and deployment logs in case something was wrong with my setup, but unfortunately I did not find anything.
Since the logs revealed nothing significant, I started googling around in developer forums, and someone suggested: you should delete the deployment before you even try to delete the POD. I followed the advice and, in the end, I was able to delete the POD along with the deployment.
During this exercise, I learned a couple of new things about the deletion order of Kubernetes PODs, Deployments, Services, and Replica Sets. I thought of documenting it in this blog post so that it might help someone with a similar issue.
So here is the sequence I follow to stop Kubernetes PODs from getting recreated -
- Run kubectl get to list all the resources running in the cluster
- Delete replica set - If the replica set is running
- Delete services
- Delete Deployment
- Delete Job or Daemonset
- (With caution) Nuke everything - replicasets, subscriptions, deployments, jobs, services, pods
- Conclusion
1. Run kubectl get to list all the resources running in the cluster
The first action is to get the list of all the resources running inside the Kubernetes cluster, and the simplest way to do that is by running the command -
kubectl get all
The $ kubectl get all command will list the resources in your current namespace, so you get a very good idea of how many PODs, Deployments, Services, and Replica Sets are running inside your Kubernetes cluster.
In case you are working in a specific Kubernetes namespace, I would recommend using the -n flag along with the $ kubectl get all command -
kubectl get all -n <your_namespace_name>
The above command will list out all the resources running inside that particular namespace.
2. Delete replica set - If the replica set is running
In Step 1 we got the list of all the resources running inside the Kubernetes cluster. The first resource I will delete is the replica set.
So if a replica set is running inside your Kubernetes cluster, you should delete it first.
Here is the command for deleting the replica set -
kubectl delete rs <your_replica_set_name>
How does deleting the replica set help in deleting the Kubernetes POD?
The answer is that Kubernetes PODs are managed by a controller, in this case the replica set. Even if you delete a POD manually, the replica set still holds the desired POD count. When you delete the POD, the Kubernetes cluster senses that a POD is down and recreates a new copy, and you are back to square one: you deleted the POD, but the POD got recreated. For that reason, we first need to delete the replica set before deleting the actual POD.
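One way to confirm which controller keeps recreating a POD is to inspect the POD's ownerReferences field, which records the managing controller. A minimal sketch, assuming a POD named my-pod (a placeholder, not from the original post):

```shell
# Print the kind and name of the controller that owns the POD.
# If a ReplicaSet (or DaemonSet/Job) is printed, that is the resource
# you need to delete first.
kubectl get pod my-pod \
  -o jsonpath='{.metadata.ownerReferences[0].kind}{"/"}{.metadata.ownerReferences[0].name}{"\n"}'
```

For a POD managed by a deployment, this typically prints something like ReplicaSet/<deployment-name>-<hash>.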
3. Delete the services
The next resource you should target is the Services. As you know, for each Kubernetes deployment we create a Service (ClusterIP, NodePort, or LoadBalancer) to expose the deployment, so a Service is usually created right after a successful deployment of the Docker container.
The idea is to delete all the resources linked to the deployment. Let's first find the service associated with your deployment -
kubectl get service
The $ kubectl get service command will list out all the services running in the current namespace of your Kubernetes cluster. Find the service associated with your deployment and POD.
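If it is not obvious which service belongs to which deployment, the wide output shows each service's label selector, which you can match against your POD labels. A small sketch:

```shell
# Show services together with their label selectors (SELECTOR column)
kubectl get service -o wide

# Show the labels on your PODs so you can match them against a selector
kubectl get pods --show-labels
```

A service whose selector matches a POD's labels is the one routing traffic to that POD.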
After identifying the service run the following command to delete the service -
kubectl delete service <your_service_name>
4. Delete the deployment
If you are reading this point, I assume you have gone through Step 1, Step 2, and Step 3, because those steps are necessary to fix issues with the recreation of PODs.
Now that we have deleted the replica set and the Kubernetes services, we can finally look for the deployment using $ kubectl get deployment -
kubectl get deployment
Or you can run the same command with the --all-namespaces flag, which will list out the deployments across all namespaces -
kubectl get deployments --all-namespaces
After listing the deployments, identify the ones associated with your POD and then run the $ kubectl delete deployment command.
kubectl delete deployment <deployment_name>
For deleting a deployment running in a particular namespace, use the following command -
kubectl delete deployment <deployment_name> -n <namespace_name>
After deleting the deployment, there are two more resource types - Jobs and DaemonSets - which you should take care of.
5. Delete Job or Daemonset
After deleting the replica set, Service, and Deployment, if the POD is still getting recreated, I would recommend checking for a DaemonSet or Job running inside your Kubernetes cluster.
You can get the list of jobs, daemon sets, and daemon set extensions by running the following commands -
For Jobs
kubectl get jobs
For daemon set
kubectl get daemonsets.apps --all-namespaces
For daemon sets extensions
kubectl get daemonsets.extensions --all-namespaces
5.1 Delete the daemon set
After identifying the daemon sets, you can delete a daemon set by running $ kubectl delete daemonset -
kubectl delete daemonset <daemon_set_name>
You can supply an additional --cascade flag, which will also delete the PODs associated with the daemonset.
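Note that the cascade flag's values depend on your kubectl version: older releases used --cascade=true/false, while kubectl v1.20 and later take --cascade=background|foreground|orphan (background is the default, so dependent PODs are cleaned up even without the flag). A sketch with an assumed daemonset name:

```shell
# Foreground cascading delete: the command waits until the
# dependent PODs are deleted as well (kubectl v1.20+)
kubectl delete daemonset my-daemonset --cascade=foreground

# Equivalent on older kubectl versions:
# kubectl delete daemonset my-daemonset --cascade=true
```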
6. (With caution) Nuke everything - replicasets, subscriptions, deployments, jobs, services, pods
(Note - I would keep this approach as my last resort and highly discourage its use in a production environment.)
At last, if nothing else fixes the issue of POD recreation, you can nuke everything inside the Kubernetes namespace. But be very careful, because once you delete the replicasets, subscriptions, deployments, jobs, services, and pods, there is no returning.
kubectl delete replicasets,subscriptions,deployments,jobs,services,pods --all -n <your_namespace_name>
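Before actually nuking anything, it may be worth previewing what the bulk delete would remove, using a client-side dry run (the namespace name here is a placeholder):

```shell
# List the resources the bulk delete would remove, without deleting anything
kubectl delete replicasets,deployments,jobs,services,pods --all \
  -n my-namespace --dry-run=client
```

Only after reviewing that output would I run the real command without --dry-run.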
7. Conclusion
When PODs are getting recreated, finding the root cause can be a really daunting task if you do not know how many Kubernetes resources are running inside your cluster. So the idea is to keep the approach simple -
- First, find all the resources running inside the Kubernetes cluster
- If needed, scope down the resources using the namespace flag
- Identify the resources pointing to the POD
- Start deleting child resources (jobs, daemonsets, services), then aim for the deployment resource, and finally delete the actual POD.
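The steps above can be sketched as a single cleanup sequence. All resource names below are placeholders, and the order matters: the controllers that recreate the POD go first, the POD itself goes last.

```shell
# 1. List everything in the namespace to see what is running
kubectl get all -n my-namespace

# 2. Delete the controllers that would recreate the POD
kubectl delete rs my-replicaset -n my-namespace
kubectl delete deployment my-deployment -n my-namespace
kubectl delete job my-job -n my-namespace
kubectl delete daemonset my-daemonset -n my-namespace

# 3. Delete the service exposing the deployment
kubectl delete service my-service -n my-namespace

# 4. Finally, delete the POD itself - it should now stay deleted
kubectl delete pod my-pod -n my-namespace
```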
Hope this post helps you troubleshoot the issue with your POD recreation. Here are some references from Stack Overflow which I would recommend going through.
Learn more On Kubernetes -
- Setup kubernetes on Ubuntu
- Setup Kubernetes on CentOs
- Setup HA Kubernetes Cluster with Kubespray
- Setup HA Kubernetes with Minikube
- Setup Kubernetes Dashboard for local kubernetes cluster
- Setup Kubernetes Dashboard On GCP(Google Cloud Platform)
- How to use Persistent Volume and Persistent Volume Claims in Kubernetes
- Deploy Spring Boot Microservice on local Kubernetes cluster
- Deploy Spring Boot Microservice on Cloud Platform(GCP)
- Setting up Ingress controller NGINX along with HAproxy inside Kubernetes cluster
- CI/CD Kubernetes | Setting up CI/CD Jenkins pipeline for kubernetes
- kubectl export YAML | Get YAML for deployed kubernetes resources(service, deployment, PV, PVC....)
- How to setup kubernetes jenkins pipeline on AWS?
- Implementing Kubernetes liveness, Readiness and Startup probes with Spring Boot Microservice Application?
- How to fix kubernetes pods getting recreated?
- How to delete all kubernetes PODS?
- How to use Kubernetes secrets?
- Share kubernetes secrets between namespaces?
- How to Delete PV(Persistent Volume) and PVC(Persistent Volume Claim) stuck in terminating state?
- Delete Kubernetes POD stuck in terminating state?