How to set up Kubernetes on CentOS 8 and CentOS 7
This tutorial is for anyone who wants to try out a Kubernetes installation on CentOS.
(This blog post has been updated, and the Kubernetes installation instructions now work for CentOS 7 as well as CentOS 8.)
In this article, I have broken the installation down into 15 steps for installing Kubernetes on CentOS (bento/centos-7 and centos/stream8).
Before you begin with the installation, here are the prerequisites for installing Kubernetes on CentOS.
Prerequisites
- Reading time is about 20 minutes
- Vagrant 2.2.7 or later – for installation instructions click here
- VM VirtualBox – for installation instructions click here
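If you already have both tools installed, a quick optional check of the versions looks like this (assuming both commands are on your PATH):
vagrant --version
VBoxManage --version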
Step 1: Start your vagrant box
Here is the Vagrantfile to spin up your vagrant box.
Use the following image name for CentOS 7 and CentOS 8 -
- CentOS 7 : - bento/centos-7
- CentOS 8 : - centos/stream8
Update the following Vagrantfile based on your needs.
We are going with two VMs here -
- Master Node - 2 CPUs, 2 GB memory (Assigned IP - 100.0.0.1)
- Worker Node - 1 CPU, 1 GB memory (Assigned IP - 100.0.0.2)
Vagrant.configure("2") do |config|
  config.vm.define "master" do |master|
    master.vm.box_download_insecure = true
    master.vm.box = "centos/stream8"
    master.vm.network "private_network", ip: "100.0.0.1"
    master.vm.hostname = "master"
    master.vm.provider "virtualbox" do |v|
      v.name = "master"
      v.memory = 2048
      v.cpus = 2
    end
  end

  config.vm.define "worker" do |worker|
    worker.vm.box_download_insecure = true
    worker.vm.box = "centos/stream8"
    worker.vm.network "private_network", ip: "100.0.0.2"
    worker.vm.hostname = "worker"
    worker.vm.provider "virtualbox" do |v|
      v.name = "worker"
      v.memory = 1024
      v.cpus = 1
    end
  end
end
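With the Vagrantfile in place, bring up both VMs from the directory containing it and confirm they are running (a minimal sketch of the usual Vagrant workflow):
vagrant up
vagrant status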
Step 2: Update /etc/hosts on both nodes (master, worker)
master node - SSH into the master node
vagrant ssh master
sudo vi /etc/hosts
100.0.0.1 master.jhooq.com master
100.0.0.2 worker.jhooq.com worker
worker node - SSH into the worker node
vagrant ssh worker
vagrant@worker:~$ sudo vi /etc/hosts

100.0.0.1 master.jhooq.com master
100.0.0.2 worker.jhooq.com worker
Test the worker node by sending a ping from the master
ping worker
PING worker.jhooq.com (100.0.0.2) 56(84) bytes of data.
64 bytes from worker.jhooq.com (100.0.0.2): icmp_seq=1 ttl=64 time=0.462 ms
64 bytes from worker.jhooq.com (100.0.0.2): icmp_seq=2 ttl=64 time=0.686 ms
Test the master node by sending a ping from the worker
ping master
PING master.jhooq.com (100.0.0.1) 56(84) bytes of data.
64 bytes from master.jhooq.com (100.0.0.1): icmp_seq=1 ttl=64 time=0.238 ms
64 bytes from master.jhooq.com (100.0.0.1): icmp_seq=2 ttl=64 time=0.510 ms
Step 3: Install Docker on both nodes (master, worker)
You need to install Docker on both nodes. But before that, we need to prepare the yum repos -
sudo yum install -y yum-utils
Add the following Docker repo to CentOS -
sudo yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo
Run the following Docker installation command on both nodes
sudo yum install docker-ce docker-ce-cli containerd.io
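Note – on CentOS 8 this install can fail because containerd.io conflicts with the podman/buildah packages that ship with some CentOS 8 images. If you hit that, one possible workaround (an extra option, not part of the original steps) is to let yum remove the conflicting packages:
sudo yum install -y docker-ce docker-ce-cli containerd.io --allowerasing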
Enable Docker on both the master and worker nodes
sudo systemctl enable docker
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
Start Docker on both the master and worker nodes
sudo systemctl start docker
Check the docker service status
sudo systemctl status docker
The Docker service should be up and running, and you should see output similar to the following on the terminal
● docker.service - Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
   Active: active (running) since Thu 2020-04-23 18:00:12 UTC; 26s ago
     Docs: http://docs.docker.com
 Main PID: 11892 (dockerd-current)
Step 4: Disable SELinux on both nodes (master, worker)
You need to disable SELinux using the following commands
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
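To double-check that SELinux is no longer enforcing, you can run:
getenforce
It should now print Permissive.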
Step 5: Disable CentOS firewall on both nodes (master, worker)
Master Node
sudo systemctl disable firewalld
sudo systemctl stop firewalld
Worker Node
sudo systemctl disable firewalld
sudo systemctl stop firewalld
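If you want to confirm the firewall is really stopped on each node, a quick optional check is:
sudo systemctl is-active firewalld
The expected output here is inactive.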
Step 6: Disable swapping on both nodes (master, worker)
Disable swap on the master as well as the worker node, because Kubernetes requires swap to be disabled on both nodes.
Run the following command on both the master and the worker node
sudo swapoff -a
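Keep in mind that swapoff -a only lasts until the next reboot. To make the change persistent, you can also comment out the swap entry in /etc/fstab (a hedged one-liner, assuming a standard fstab layout):
sudo sed -i '/ swap / s/^/#/' /etc/fstab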
Step 7: Enable the usage of "iptables" on both nodes (master, worker)
Enable the usage of iptables for bridged traffic, which prevents routing errors, by setting the following runtime parameters:
sudo bash -c 'echo "net.bridge.bridge-nf-call-ip6tables = 1" > /etc/sysctl.d/k8s.conf'
sudo bash -c 'echo "net.bridge.bridge-nf-call-iptables = 1" >> /etc/sysctl.d/k8s.conf'
sudo sysctl --system
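If sysctl complains that the net.bridge.* keys do not exist, the br_netfilter kernel module is probably not loaded yet. In that case, loading it and re-applying the settings usually helps (an extra step, not part of the original instructions):
sudo modprobe br_netfilter
sudo sysctl --system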
Step 8: Add the Kubernetes repo to yum.repos.d on both nodes (master, worker)
sudo vi /etc/yum.repos.d/kubernetes.repo
Add the following repo details -
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
       https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
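If you prefer not to open an editor, the same repo file can be written non-interactively; this is just an alternative sketch of the step above:
cat <<'EOF' | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF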
Step 9: Install Kubernetes on both nodes (master, worker)
sudo yum install -y kubeadm kubelet kubectl
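You can optionally confirm the installed versions on both nodes before moving on:
kubeadm version -o short
kubectl version --client
kubelet --version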
Step 10: Enable and Start Kubelet on both nodes (master, worker)
Run the following commands on both the master and worker nodes.
Enable the kubelet
sudo systemctl enable kubelet
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
Start the kubelet
sudo systemctl start kubelet
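Do not worry if kubelet keeps restarting at this point; without a cluster configuration it typically stays in a restart loop until kubeadm init (Step 11) or kubeadm join (Step 14) has run. You can inspect its state with:
sudo systemctl status kubelet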
Step 11: Initialize the Kubernetes cluster (only run on master)
Initialize the Kubernetes cluster (--apiserver-advertise-address=100.0.0.1 is the master IP address we assigned in /etc/hosts)
sudo kubeadm init --apiserver-advertise-address=100.0.0.1 --pod-network-cidr=10.244.0.0/16
Note down the kubeadm join command
kubeadm join 100.0.0.1:6443 --token cfvd1x.8h8kzx0u9vcn4trf \
    --discovery-token-ca-cert-hash sha256:cc9687b47f3a9bfa5b880dcf409eeaef05d25505f4c099732b65376b0e14458c
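If you lose this output, you can print a fresh join command on the master at any time:
sudo kubeadm token create --print-join-command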
Step 12: Move kube config file to current user (only run on master)
To interact with the Kubernetes cluster and use the kubectl command, we need the kube config file on the master.
Use the following commands to copy the kube config file into the current user's home directory.
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
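At this point kubectl should be able to reach the cluster. The master will usually still show NotReady until the pod network is applied in the next step:
kubectl get nodes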
Step 13: Apply CNI from kube-flannel.yml (only run on master)
Once the master is ready and its services are running, we need to set up the pod network so that containers can reach each other across nodes.
Get the CNI (Container Network Interface) configuration from Flannel.
Before downloading the Flannel manifest, make sure wget is installed
sudo yum install wget
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Note – Since we are working on VMs, we need to check the Ethernet interfaces first.
Look for the interface (eth1) that has the IP address 100.0.0.1 (the IP address we used in the Vagrantfile).
ip a s
1: lo: <LOOPBACK,UP,LOWER_UP>
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 08:00:27:bb:14:75 brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.15
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 08:00:27:fb:48:77 brd ff:ff:ff:ff:ff:ff
    inet 100.0.0.1
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP>
Now we need to add an extra arg for eth1 in kube-flannel.yml
vi kube-flannel.yml
Search for "flanneld"
In the args section, add: --iface=eth1
args:
- --ip-masq
- --kube-subnet-mgr
- --iface=eth1
Apply the flannel configuration
kubectl apply -f kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
daemonset.apps/kube-flannel-ds-s390x created
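To verify that the Flannel pods came up correctly, you can list the pods in the kube-system namespace and look for the kube-flannel-ds pod in Running state:
kubectl get pods -n kube-system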
Step 14: Join the master node (only run on worker)
In Step 11, we generated the token and the kubeadm join command.
Now we need to run that join command on our worker node
sudo kubeadm join 100.0.0.1:6443 --token cfvd1x.8h8kzx0u9vcn4trf --discovery-token-ca-cert-hash sha256:cc9687b47f3a9bfa5b880dcf409eeaef05d25505f4c099732b65376b0e14458c
W0423 18:50:54.480382    8100 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
Step 15: Check the node status (only run on master)
Check the node status on the master
kubectl get nodes
NAME     STATUS   ROLES    AGE   VERSION
master   Ready    master   26m   v1.18.2
worker   Ready    <none>   63s   v1.18.2
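As a final sanity check, you can also list all pods across namespaces along with the node they are scheduled on; everything should be in Running state:
kubectl get pods --all-namespaces -o wide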
That concludes the 15 steps to install Kubernetes on CentOS (bento/centos-7 and centos/stream8).
Learn more on Kubernetes -
- Setup kubernetes on Ubuntu
- Setup Kubernetes on CentOS
- Setup HA Kubernetes Cluster with Kubespray
- Setup HA Kubernetes with Minikube
- Setup Kubernetes Dashboard for local kubernetes cluster
- Setup Kubernetes Dashboard On GCP(Google Cloud Platform)
- How to use Persistent Volume and Persistent Volume Claims in Kubernetes
- Deploy Spring Boot Microservice on local Kubernetes cluster
- Deploy Spring Boot Microservice on Cloud Platform(GCP)
- Setting up Ingress controller NGINX along with HAproxy inside Kubernetes cluster
- CI/CD Kubernetes | Setting up CI/CD Jenkins pipeline for kubernetes
- kubectl export YAML | Get YAML for deployed kubernetes resources(service, deployment, PV, PVC....)
- How to setup kubernetes jenkins pipeline on AWS?
- Implementing Kubernetes liveness, Readiness and Startup probes with Spring Boot Microservice Application?
- How to fix kubernetes pods getting recreated?
- How to delete all kubernetes PODS?
- How to use Kubernetes secrets?
- Share kubernetes secrets between namespaces?
- How to Delete PV(Persistent Volume) and PVC(Persistent Volume Claim) stuck in terminating state?
- Delete Kubernetes POD stuck in terminating state?