15 Steps: Install Kubernetes on CentOS “bento/centos-7”

This tutorial is for anyone who wants to try out a Kubernetes installation on CentOS.

In this article, I have simplified the installation of Kubernetes on CentOS “bento/centos-7” into 15 steps.

Before you begin with the installation, here are the prerequisites for installing Kubernetes on CentOS.

Prerequisites

  • Reading time: about 20 minutes
  • Vagrant 2.2.7 or later – see the official Vagrant installation instructions
  • VM VirtualBox – see the official VirtualBox installation instructions

Step 1: Start your vagrant box

Use the following Vagrantfile to spin up your vagrant box.

We are going with two VMs here -

  1. Master Node - 2 CPUs, 2 GB memory (assigned IP - 100.0.0.1)
  2. Worker Node - 1 CPU, 1 GB memory (assigned IP - 100.0.0.2)
Vagrant.configure("2") do |config|
  config.vm.define "master" do |master|
    master.vm.box_download_insecure = true
    master.vm.box = "bento/centos-7"
    master.vm.network "private_network", ip: "100.0.0.1"
    master.vm.hostname = "master"
    master.vm.provider "virtualbox" do |v|
      v.name = "master"
      v.memory = 2048
      v.cpus = 2
    end
  end

  config.vm.define "worker" do |worker|
    worker.vm.box_download_insecure = true
    worker.vm.box = "bento/centos-7"
    worker.vm.network "private_network", ip: "100.0.0.2"
    worker.vm.hostname = "worker"
    worker.vm.provider "virtualbox" do |v|
      v.name = "worker"
      v.memory = 1024
      v.cpus = 1
    end
  end
end

Step 2: Update /etc/hosts on both nodes(master, worker)

master node - SSH into the master node

$ vagrant ssh master
[vagrant@master ~]$ sudo vi /etc/hosts

100.0.0.1 master.jhooq.com master
100.0.0.2 worker.jhooq.com worker

worker node - SSH into the worker node

$ vagrant ssh worker
[vagrant@worker ~]$ sudo vi /etc/hosts

100.0.0.1 master.jhooq.com master
100.0.0.2 worker.jhooq.com worker
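If you prefer a non-interactive edit, the same two host entries can be appended with tee instead of vi. This is a sketch, demonstrated against a scratch file so it can be run anywhere; on the actual nodes the target would be /etc/hosts, written via sudo tee -a.

```shell
# Append the same host entries non-interactively (sketch: a scratch file
# stands in for /etc/hosts; on the nodes, use: sudo tee -a /etc/hosts)
hosts_file=$(mktemp)
tee -a "$hosts_file" > /dev/null <<'EOF'
100.0.0.1 master.jhooq.com master
100.0.0.2 worker.jhooq.com worker
EOF
cat "$hosts_file"
```

Because tee -a appends, running this is safe even if the file already has other entries.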

Test the worker node by sending a ping from the master

[vagrant@master ~]$ ping worker
PING worker.jhooq.com (100.0.0.2) 56(84) bytes of data.
64 bytes from worker.jhooq.com (100.0.0.2): icmp_seq=1 ttl=64 time=0.462 ms
64 bytes from worker.jhooq.com (100.0.0.2): icmp_seq=2 ttl=64 time=0.686 ms

Test the master node by sending a ping from the worker

[vagrant@worker ~]$ ping master
PING master.jhooq.com (100.0.0.1) 56(84) bytes of data.
64 bytes from master.jhooq.com (100.0.0.1): icmp_seq=1 ttl=64 time=0.238 ms
64 bytes from master.jhooq.com (100.0.0.1): icmp_seq=2 ttl=64 time=0.510 ms

Step 3: Install Docker on both nodes (master, worker)

You need to install Docker on both the nodes.

So run the following Docker installation command on both the nodes

[vagrant@master ~]$ sudo yum install docker -y

Enable Docker on both the master and the worker node

[vagrant@master ~]$ sudo systemctl enable docker
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.

Start Docker on both the master and the worker node

[vagrant@master ~]$ sudo systemctl start docker

Check the Docker service status

[vagrant@master ~]$ sudo systemctl status docker

The Docker service should be up and running, and you should see the following output on the terminal

● docker.service - Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
   Active: active (running) since Thu 2020-04-23 18:00:12 UTC; 26s ago
     Docs: http://docs.docker.com
 Main PID: 11892 (dockerd-current)

Step 4: Disable SELinux on both nodes(master, worker)

You need to disable SELinux using the following commands

[vagrant@master ~]$ sudo setenforce 0
[vagrant@master ~]$ sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

Step 5: Disable CentOS firewall on both nodes(master, worker)

Master Node

[vagrant@master ~]$ sudo systemctl disable firewalld
[vagrant@master ~]$ sudo systemctl stop firewalld

Worker Node

[vagrant@worker ~]$ sudo systemctl disable firewalld
[vagrant@worker ~]$ sudo systemctl stop firewalld

Step 6: Disable swapping on both nodes(master, worker)

Kubernetes requires swap to be disabled, so turn off swapping on both the master and the worker node.

Run the following command on both nodes

[vagrant@master ~]$ sudo swapoff -a
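Note that swapoff -a only disables swap until the next reboot. To keep swap off permanently you would also comment out the swap entry in /etc/fstab. Below is a sketch, demonstrated against a scratch copy with a typical CentOS 7 swap line assumed; on the nodes you would run the same sed against /etc/fstab itself, with sudo.

```shell
# Persist the swap-off across reboots by commenting out the swap entry
# (sketch: a scratch copy with an assumed CentOS 7 layout stands in for /etc/fstab)
fstab=$(mktemp)
cat > "$fstab" <<'EOF'
/dev/mapper/centos-root /     xfs   defaults 0 0
/dev/mapper/centos-swap swap  swap  defaults 0 0
EOF
# comment out any line whose mount point / fs type is swap
sed -i '/[[:space:]]swap[[:space:]]/ s/^/#/' "$fstab"
cat "$fstab"
```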

Step 7: Enable the usage of “iptables” on both nodes(master, worker)

Enable iptables to see bridged traffic, which prevents routing errors in the cluster. Set the following runtime parameters:

[vagrant@worker ~]$ sudo bash -c 'echo "net.bridge.bridge-nf-call-ip6tables = 1" > /etc/sysctl.d/k8s.conf'
[vagrant@worker ~]$ sudo bash -c 'echo "net.bridge.bridge-nf-call-iptables = 1" >> /etc/sysctl.d/k8s.conf'
[vagrant@worker ~]$ sudo sysctl --system
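The two echo commands can also be collapsed into a single heredoc. This is a sketch, demonstrated against a scratch file; on the nodes the target is /etc/sysctl.d/k8s.conf (written via sudo tee), followed by sudo sysctl --system to load it.

```shell
# Write both bridge settings in one shot (sketch: a scratch file stands in
# for /etc/sysctl.d/k8s.conf; on the nodes, write it via sudo tee and then
# run: sudo sysctl --system)
k8s_conf=$(mktemp)
cat > "$k8s_conf" <<'EOF'
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
cat "$k8s_conf"
```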

Step 8: Add the Kubernetes repo to yum.repos.d on both nodes(master, worker)

[vagrant@master ~]$ sudo vi /etc/yum.repos.d/kubernetes.repo

Add the following repo details -

[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
       https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
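If you prefer not to edit the file interactively, the same repo definition can be written in one shot with a heredoc. A sketch, demonstrated against a scratch file; on the nodes the target would be /etc/yum.repos.d/kubernetes.repo, written via sudo tee.

```shell
# Write the repo definition non-interactively (sketch: a scratch file stands
# in for /etc/yum.repos.d/kubernetes.repo; on the nodes, write it via sudo tee)
repo_file=$(mktemp)
cat > "$repo_file" <<'EOF'
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
       https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
cat "$repo_file"
```

The quoted 'EOF' delimiter keeps the shell from expanding anything inside the heredoc, so the file lands byte-for-byte as written.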

Step 9: Install Kubernetes on both nodes(master, worker)

[vagrant@master ~]$ sudo yum install -y kubeadm kubelet kubectl

Step 10: Enable and Start Kubelet on both nodes(master, worker)

Run the following command both on master and worker nodes.

Enable the kubelet

[vagrant@worker ~]$ sudo systemctl enable kubelet
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.

Start the kubelet

[vagrant@master ~]$ sudo systemctl start kubelet

Step 11: Initialize Kubernetes cluster only on master node

Initialize the Kubernetes cluster (--apiserver-advertise-address=100.0.0.1 is the master IP address we assigned in the Vagrantfile and /etc/hosts)

[vagrant@master ~]$ sudo kubeadm init --apiserver-advertise-address=100.0.0.1 --pod-network-cidr=10.244.0.0/16

Note down the kubeadm join command printed at the end of the output; you will need it in Step 14. (If you lose it, you can regenerate it on the master with kubeadm token create --print-join-command.)

kubeadm join 100.0.0.1:6443 --token cfvd1x.8h8kzx0u9vcn4trf \
    --discovery-token-ca-cert-hash sha256:cc9687b47f3a9bfa5b880dcf409eeaef05d25505f4c099732b65376b0e14458c

Step 12: Move kube config file to current user (only run on master)

To interact with the Kubernetes cluster using the kubectl command, we need the kube config file.

Use the following commands to copy the kube config file into the current user's home directory.

[vagrant@master ~]$ mkdir -p $HOME/.kube
[vagrant@master ~]$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[vagrant@master ~]$ sudo chown $(id -u):$(id -g) $HOME/.kube/config

Step 13: Apply CNI from kube-flannel.yml(only run on master)

The master is now ready to handle jobs and its services are running, but containers on different nodes still cannot reach each other. To make them accessible to each other, we need to set up the network for container communication.

Get the CNI (container network interface) configuration from flannel

[vagrant@master ~]$ wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Note – Since we are working with VMs, we need to check our Ethernet interfaces first.

Look for the Ethernet interface, i.e. eth1, which has the IP address 100.0.0.1 (the IP address we assigned in the Vagrantfile)

[vagrant@master ~]$ ip a s
1: lo: <LOOPBACK,UP,LOWER_UP>
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 08:00:27:bb:14:75 brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.15
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 08:00:27:fb:48:77 brd ff:ff:ff:ff:ff:ff
    inet 100.0.0.1
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP>

Now we need to add the extra args for eth1 in kube-flannel.yml

[vagrant@master ~]$ vi kube-flannel.yml

Search for “flanneld”.

In the args section, add: --iface=eth1

        args:
        - --ip-masq
        - --kube-subnet-mgr
        - --iface=eth1
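The manual vi edit can also be scripted with sed. Below is a sketch, demonstrated on a scratch snippet mirroring the flanneld args block so the indentation handling is visible; on the master you would point the same sed at kube-flannel.yml itself.

```shell
# Insert "- --iface=eth1" right after "- --kube-subnet-mgr", preserving the
# YAML indentation via a backreference
# (sketch: a scratch snippet stands in for kube-flannel.yml)
yml=$(mktemp)
cat > "$yml" <<'EOF'
        args:
        - --ip-masq
        - --kube-subnet-mgr
EOF
sed -i 's/^\( *\)- --kube-subnet-mgr$/&\n\1- --iface=eth1/' "$yml"
cat "$yml"
```

Capturing the leading spaces with \( *\) and reusing them as \1 keeps the new list item at the same YAML indentation level as its neighbors.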

Apply the flannel configuration

[vagrant@master ~]$ kubectl apply -f kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
daemonset.apps/kube-flannel-ds-s390x created

Step 14: Join the cluster (only run on worker)

In Step 11 we generated the token and the kubeadm join command.

Now run that join command on the worker node

[vagrant@worker ~]$ sudo kubeadm join 100.0.0.1:6443 --token cfvd1x.8h8kzx0u9vcn4trf --discovery-token-ca-cert-hash sha256:cc9687b47f3a9bfa5b880dcf409eeaef05d25505f4c099732b65376b0e14458c
W0423 18:50:54.480382    8100 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

Step 15: Check the nodes status(only run on master)

Check the node status on the master

[vagrant@master ~]$ kubectl get nodes
NAME     STATUS   ROLES    AGE   VERSION
master   Ready    master   26m   v1.18.2
worker   Ready    <none>   63s   v1.18.2

So that concludes “15 Steps: Install Kubernetes on CentOS bento/centos-7”.

For a similar Kubernetes article, please refer to my 14 Steps to Install Kubernetes on Ubuntu.