Fixing – pod has unbound immediate PersistentVolumeClaims or cannot bind to requested volume incompatible accessMode
There could be multiple reasons behind this issue. Before we jump to the solutions, let us first see how to identify the issue. Once you have identified it, here are the possible fixes -
- Mismatch in the accessModes of Persistent Volume and Persistent Volume Claim
- PV(Persistent Volume) capacity is less than the PVC(Persistent Volume Claim) request
- The total number of PVCs(Persistent Volume Claims) is higher than the number of PVs(Persistent Volumes)
- nodeAffinity of the PV is missing
- Clean up the OLD PV(Persistent Volume) and PVC(Persistent Volume Claim)
How to identify the issue?
The first thing I would recommend is to check the status of the Persistent Volume Claim using the following command -
(In the command below, replace jhooq-pvc with your PVC name)

$ kubectl get pvc jhooq-pvc
As you can see, the STATUS column says Pending.
Let's go ahead and check more details of the PVC -

$ kubectl describe pvc jhooq-pvc
Events:
  Type     Reason          Age                   From                         Message
  ----     ------          ----                  ----                         -------
  Warning  VolumeMismatch  2m15s (x42 over 12m)  persistentvolume-controller  Cannot bind to requested volume "jhooq-demo-pv": incompatible accessMode
As you can see, the Reason is VolumeMismatch and the Message says Cannot bind to requested volume "jhooq-demo-pv": incompatible accessMode, which means the accessModes are not the same in your Persistent Volume and Persistent Volume Claim.
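You can confirm the mismatch by printing the accessModes of both objects side by side (a quick check, using the PV/PVC names from this example):

```shell
# Print the accessModes of the PV and the PVC so you can compare them
kubectl get pv jhooq-demo-pv -o jsonpath='{.spec.accessModes}'; echo
kubectl get pvc jhooq-pvc -o jsonpath='{.spec.accessModes}'; echo
```

If the two lists differ, for example ["ReadWriteOnce"] versus ["ReadWriteMany"], you have found the culprit.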
1. Mismatch in the accessModes of Persistent Volume and Persistent Volume Claim
You have specified different accessModes in your Persistent Volume (PV) and Persistent Volume Claim (PVC) configurations. For example, your PV might have accessModes: ReadWriteOnce while your PVC has accessModes: ReadWriteMany. To fix this issue, choose the accessMode based on your needs and put the same value in both the PV and the PVC configuration.
I faced this issue while I was trying to set up a Persistent Volume and Persistent Volume Claim.
(Here is the guide on - How to setup Persistent Volume and Persistent Volume Claim)
To fix this issue, put the same accessModes in both your Persistent Volume configuration and your Persistent Volume Claim configuration.
Here is the Persistent Volume (PV) configuration with accessModes: ReadWriteOnce -
apiVersion: v1
kind: PersistentVolume
metadata:
  name: jhooq-demo-pv
spec:
  capacity:
    storage: 1Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/disks/ssd1
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - node1
Here is the configuration for Persistent Volume Claim (PVC) with accessModes: ReadWriteOnce

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jhooq-pvc
spec:
  volumeName: jhooq-demo-pv
  storageClassName: local-storage
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
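Once both configurations use the same accessModes, re-apply them and verify that the claim binds. The file names pv.yaml and pvc.yaml below are just placeholders for wherever you saved the manifests:

```shell
# Re-apply the corrected PV and PVC manifests (file names are placeholders)
kubectl apply -f pv.yaml
kubectl apply -f pvc.yaml

# The PVC STATUS should now change from Pending to Bound
kubectl get pvc jhooq-pvc
```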
How to identify this issue from Kubernetes POD status?
Well, in some cases this issue is a little tricky to identify, especially if you are only checking the status of the POD.
$ kubectl get pods
NAME                 READY   STATUS    RESTARTS   AGE
jhooq-pod-with-pvc   0/1     Pending   0          52m
Here you can only guess that something is wrong with the POD configuration.
You need to dig deeper to identify the underlying issue. In my case, my POD name is jhooq-pod-with-pvc, so I will run the following command to check the failure status of the POD -
$ kubectl describe pod jhooq-pod-with-pvc
Name:         jhooq-pod-with-pvc
Namespace:    default
Priority:     0
Node:         <none>
Labels:       name=jhooq-pod-with-pvc
Annotations:
Status:       Pending
IP:
IPs:          <none>
Containers:
  jhooq-pod-with-pvc:
    Image:        rahulwagh17/kubernetes:jhooq-k8s-springboot
    Port:         8080/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:
      /usr/share/nginx/html from www-persistent-storage (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-72pqr (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  www-persistent-storage:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  jhooq-pvc
    ReadOnly:   false
  default-token-72pqr:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-72pqr
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age        From               Message
  ----     ------            ---        ----               -------
  Warning  FailedScheduling  <unknown>  default-scheduler  running "VolumeBinding" filter plugin for pod "jhooq-pod-with-pvc": pod has unbound immediate PersistentVolumeClaims
  Warning  FailedScheduling  <unknown>  default-scheduler  running "VolumeBinding" filter plugin for pod "jhooq-pod-with-pvc": pod has unbound immediate PersistentVolumeClaims
So the POD events also signal the same problem -
Warning FailedScheduling default-scheduler running "VolumeBinding" filter plugin for pod "jhooq-pod-with-pvc": pod has unbound immediate PersistentVolumeClaims
Again, check the accessModes of the Persistent Volume and Persistent Volume Claim and make sure they are the same.
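One caveat: the accessModes of an existing PVC cannot be edited in place, so the usual fix is to delete and recreate the claim. The file name pvc.yaml below is a placeholder for your corrected manifest:

```shell
# accessModes are immutable once the PVC is created, so recreate it
kubectl delete pvc jhooq-pvc
kubectl apply -f pvc.yaml   # the corrected manifest with matching accessModes
```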
2. PV(Persistent Volume) capacity is less than PVC(Persistent Volume Claim)
The second reason for this issue can be the PV's capacity - the PV's capacity.storage is less than the PVC's resources.requests.storage.
Here is an example where capacity.storage = 3Gi is less than resources.requests.storage = 10Gi -
# PV
  capacity:
    storage: 3Gi

# PVC
  resources:
    requests:
      storage: 10Gi
If this is the case then you will get an error such as pod has unbound immediate PersistentVolumeClaims or no volume plugin matched name.
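To spot this mismatch quickly, you can print both sizes side by side (using the object names from this example):

```shell
# Compare the PV's capacity with the PVC's request
kubectl get pv jhooq-demo-pv -o jsonpath='{.spec.capacity.storage}'; echo
kubectl get pvc jhooq-pvc -o jsonpath='{.spec.resources.requests.storage}'; echo
```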
3. The total number of PVC(Persistent Volume Claim) is higher than PV(Persistent Volume)
The third possible scenario is where the total number of PVCs > the total number of PVs.
Here is an example -
- There is only one PV (Persistent Volume) available, jhooq-pv -

$ kubectl get pv
NAME       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM               STORAGECLASS    REASON   AGE
jhooq-pv   50Gi       RWO            Retain           Bound    default/jhooq-pvc   local-storage            106m
- There is more than one PVC(Persistent Volume Claim) trying to bind to jhooq-pv -

$ kubectl get pvc
NAME    STATUS    VOLUME     CAPACITY   ACCESS MODES   STORAGECLASS    AGE
pvc-1   Bound     jhooq-pv   50Gi       RWO            local-storage   80m
pvc-2   Pending                                        local-storage   45m
In the above example, pvc-2 will never get bound and it will always be in Pending status.
To fix this, either delete pvc-2 or create a new PV.
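If you choose to create a new PV, a second local volume for pvc-2 to bind to might look like the sketch below; the name, path, and node value are placeholders you would adapt to your cluster:

```yaml
# Hypothetical second PV so that pvc-2 has something to bind to
apiVersion: v1
kind: PersistentVolume
metadata:
  name: jhooq-pv-2            # placeholder name
spec:
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/disks/ssd2     # placeholder path on the node
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - node1       # replace with a real node name
```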
4. nodeAffinity of the PV is missing
The fourth possible scenario is where you are missing the nodeAffinity, or you did not specify a valid value for it, when using a local volume.
Here is an example -
apiVersion: v1
kind: PersistentVolume
metadata:
  name: jhooq-pv
spec:
  # ... capacity, accessModes, local path etc. ...
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - node-which-doesnt-exists # <--- does not match any node
So fix the node value under values: to match an actual node in your cluster.
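You can list the valid values for the kubernetes.io/hostname label like this:

```shell
# Show every node together with its hostname label,
# which is what the nodeAffinity rule matches against
kubectl get nodes -L kubernetes.io/hostname
```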
5. Clean up the OLD PV(Persistent Volume) and PVC(Persistent Volume Claim)
Well, the fifth scenario is more of a housekeeping task. If your Kubernetes cluster is old and you have been using PVs(Persistent Volumes) and PVCs(Persistent Volume Claims) quite a lot, then it is always recommended to clean up the old PVs and PVCs to avoid stale objects getting in the way of new bindings.
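A quick way to find cleanup candidates is to filter PVs by status; with a Retain reclaim policy, volumes whose claim was deleted stay around in the Released state:

```shell
# List PVs that are no longer bound to any claim
kubectl get pv | grep -E 'Released|Failed'

# Delete a stale PV once you are sure its data is no longer needed
kubectl delete pv <pv-name>
```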
I hope the above instructions help you to fix the issue with your Persistent Volume and Persistent Volume Claim.
Learn more on Kubernetes -
- Setup kubernetes on Ubuntu
- Setup Kubernetes on CentOs
- Setup HA Kubernetes Cluster with Kubespray
- Setup HA Kubernetes with Minikube
- Setup Kubernetes Dashboard for local kubernetes cluster
- Setup Kubernetes Dashboard On GCP(Google Cloud Platform)
- How to use Persistent Volume and Persistent Volume Claims in Kubernetes
- Deploy Spring Boot Microservice on local Kubernetes cluster
- Deploy Spring Boot Microservice on Cloud Platform(GCP)
- Setting up Ingress controller NGINX along with HAproxy inside Kubernetes cluster
- CI/CD Kubernetes | Setting up CI/CD Jenkins pipeline for kubernetes
- kubectl export YAML | Get YAML for deployed kubernetes resources(service, deployment, PV, PVC....)
- How to setup kubernetes jenkins pipeline on AWS?
- Implementing Kubernetes liveness, Readiness and Startup probes with Spring Boot Microservice Application?
- How to fix kubernetes pods getting recreated?
- How to delete all kubernetes PODS?
- How to use Kubernetes secrets?
- Share kubernetes secrets between namespaces?
- How to Delete PV(Persistent Volume) and PVC(Persistent Volume Claim) stuck in terminating state?
- Delete Kubernetes POD stuck in terminating state?
Posts in this Series
- Kubernetes Cheat Sheet for day to day DevOps operations?
- Delete Kubernetes POD stuck in terminating state?
- How to Delete PV(Persistent Volume) and PVC(Persistent Volume Claim) stuck in terminating state?
- Share kubernetes secrets between namespaces?
- How to use Kubernetes secrets?
- How to delete all kubernetes PODS?
- kubernetes pods getting recreated?
- Implementing Kubernetes liveness, Readiness and Startup probes with Spring Boot Microservice Application?
- kubectl export yaml OR How to generate YAML for deployed kubernetes resources
- Kubernetes Updates
- CI/CD Kubernetes | Setting up CI/CD Jenkins pipeline for kubernetes
- Kubernetes cluster setup with Jenkins
- How to use Persistent Volume and Persistent Claims | Kubernetes
- How to fix ProvisioningFailed persistentvolume controller no volume plugin matched
- Fixing – Cannot bind to requested volume: storageClasseName does not match
- Fixing – pod has unbound immediate persistentvolumeclaims or cannot bind to requested volume incompatible accessmode
- How to fix kubernetes dashboard forbidden 403 error – message services https kubernetes-dashboard is forbidden User
- How to fix Kubernetes – error execution phase preflight [preflight]
- Deploy Spring Boot microservices on kubernetes?
- How to fix – ansible_memtotal_mb minimal_master_memory_mb
- How to use kubespray – 12 Steps for Installing a Production Ready Kubernetes Cluster
- How to setup kubernetes on CentOS 8 and CentOS 7
- How to fix - ERROR Swap running with swap on is not supported. Please disable swap
- 14 Steps to Install kubernetes on Ubuntu 20.04(bento/ubuntu-20.04), 18.04(hashicorp/bionic64)
- Kubernetes Dashboard | Kubernetes Admin GUI | Kubernetes Desktop Client
- Install Kubernetes with Minikube