How to use kubespray – 12 Steps for Installing a Production Ready Kubernetes Cluster



Before we jump into the installation steps: if you are already familiar with configuration-management tools such as Puppet, Chef, and Ansible, then kubespray (https://github.com/kubernetes-incubator/kubespray) is going to be the best choice for setting up a Kubernetes cluster.

In this article, we will go through 12 steps, starting from setting up the Vagrant VMs all the way to running the final ansible-playbook.

Disclaimer - If you are a beginner with kubernetes then I would highly recommend going through the manual installation of kubernetes on Ubuntu or CentOS first. For that you can refer to -

  1. 14 Steps to Install kubernetes on Ubuntu 18.04 and 16.04
  2. 15 Steps: Install Kubernetes on CentOS “bento/centos-7”

Table of Contents

  1. Step 1: Provision the VMs using Vagrant
  2. Step 2: Update /etc/hosts on all the nodes
  3. Step 3: Generate SSH key for ansible (only need to run on ansible node i.e. amaster)
  4. Step 4: Copy SSH key to other nodes i.e. kmaster, kworker
  5. Step 5: Install python3-pip (only need to run on ansible node i.e. amaster)
  6. Step 6: Clone the kubespray git repo (only need to run on ansible node i.e. amaster)
  7. Step 7: Install kubespray packages from "requirements.txt" (only need to run on ansible node i.e. amaster)
  8. Step 8: Copy the sample inventory (only need to run on ansible node i.e. amaster)
  9. Step 9: Prepare hosts.yml for kubespray (only need to run on ansible node i.e. amaster)
  10. Step 10: Run the ansible-playbook (only need to run on ansible node i.e. amaster)
  11. Step 11: Install kubectl on kubernetes master (only need to run on kubernetes master i.e. kmaster)
  12. Step 12: Verify the kubernetes nodes



Okay, now let's try out kubespray and kubernetes -


Note - This article, kubespray – 12 Steps for Installing a Production Ready Kubernetes Cluster, has been tested and verified with the following release versions -

  1. Kubespray - v2.16.2
  2. Ansible - v2.10.x
  3. Jinja2 - v2.11.2

If you want to upgrade your kubernetes cluster using Kubespray, click here - Upgrade kubernetes using kubespray.

With the latest version of kubespray (v2.16.0) it is considered to be more stable on CentOS 8, and kubespray now also supports Exoscale, vSphere, and UpCloud.


Step 1: Provision the VMs using Vagrant

First, we need to provision the VMs using Vagrant.

We will be setting up a total of 3 VMs (Virtual Machines), each with its own unique IP -

  1. Ansible Node (amaster) - 100.0.0.1 - 2 CPU - 2 GB Memory
  2. Kubernetes Master Node (kmaster) - 100.0.0.2 - 2 CPU - 2 GB Memory
  3. Kubernetes Worker Node (kworker) - 100.0.0.3 - 2 CPU - 2 GB Memory

Here is the Vagrantfile

Vagrant.configure("2") do |config|
  config.vm.define "amaster" do |amaster|
    amaster.vm.box_download_insecure = true
    amaster.vm.box = "hashicorp/bionic64"
    amaster.vm.network "private_network", ip: "100.0.0.1"
    amaster.vm.hostname = "amaster"
    amaster.vm.provider "virtualbox" do |v|
      v.name = "amaster"
      v.memory = 2048
      v.cpus = 2
    end
  end

  config.vm.define "kmaster" do |kmaster|
    kmaster.vm.box_download_insecure = true
    kmaster.vm.box = "hashicorp/bionic64"
    kmaster.vm.network "private_network", ip: "100.0.0.2"
    kmaster.vm.hostname = "kmaster"
    kmaster.vm.provider "virtualbox" do |v|
      v.name = "kmaster"
      v.memory = 2048
      v.cpus = 2
    end
  end

  config.vm.define "kworker" do |kworker|
    kworker.vm.box_download_insecure = true
    kworker.vm.box = "hashicorp/bionic64"
    kworker.vm.network "private_network", ip: "100.0.0.3"
    kworker.vm.hostname = "kworker"
    kworker.vm.provider "virtualbox" do |v|
      v.name = "kworker"
      v.memory = 2048
      v.cpus = 2
    end
  end

end

Start the vagrant box -

vagrant up
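
Once all three boxes are up, it is worth confirming that the VMs are running before moving on; the remaining steps are executed inside these VMs via vagrant ssh. A quick check with standard Vagrant commands, run from the directory containing the Vagrantfile:

vagrant status
vagrant ssh amaster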


Step 2: Update /etc/hosts on all the nodes

After starting the vagrant boxes you need to update the /etc/hosts file on each node, i.e. amaster, kmaster, and kworker.

So run the following command on all three nodes

sudo vi /etc/hosts

Add the following entries in the hosts files of each node (amaster, kmaster, kworker)

100.0.0.1 amaster.jhooq.com amaster
100.0.0.2 kmaster.jhooq.com kmaster
100.0.0.3 kworker.jhooq.com kworker

Your /etc/hosts file should look like this on all three nodes, i.e. amaster, kmaster, and kworker

cat /etc/hosts
127.0.0.1	localhost
127.0.1.1	amaster	amaster

# The following lines are desirable for IPv6 capable hosts
::1     localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
100.0.0.1 amaster.jhooq.com amaster
100.0.0.2 kmaster.jhooq.com kmaster
100.0.0.3 kworker.jhooq.com kworker
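
With these entries in place, a quick sanity check is to ping the other nodes by hostname from amaster (two packets each is enough):

ping -c 2 kmaster
ping -c 2 kworker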


Step 3: Generate SSH key for ansible (only need to run on ansible node i.e. amaster)

To set up kubespray smoothly we need to generate an SSH key on the ansible master (amaster) node and copy it to the other nodes, so that you do not have to provide a username and password every time you log in/SSH into the other nodes, i.e. kmaster and kworker.

Generate the SSH key (during the key generation it will ask for a passphrase, so either create a new passphrase or leave it empty) -

ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/vagrant/.ssh/id_rsa): 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /home/vagrant/.ssh/id_rsa.
Your public key has been saved in /home/vagrant/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:LWGasiSDAqf8eY3pz5swa/nUl2rWc1IFgiPuqFTYsKs vagrant@amaster
The key's randomart image is:
+---[RSA 2048]----+
|          .      |
|   .   . o . .   |
|. . = . + . . .  |
|o+ o o = o     . |
|+.o = = S .   .  |
|. .*.++...  ..   |
|  ooo*.o ..o.    |
| E .oo* .oo+ .   |
|    .oo*+.  +    |
+----[SHA256]-----+
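
If you would rather skip the interactive prompts (for example, when scripting this step), ssh-keygen can also be run non-interactively; a minimal sketch using the default key path and an empty passphrase:

ssh-keygen -t rsa -b 2048 -f ~/.ssh/id_rsa -N ""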


Step 4: Copy SSH key to other nodes i.e. kmaster, kworker

In Step 3 we generated the SSH key; now we need to copy it to the other nodes, i.e. kmaster and kworker.

Copy it to the kmaster node (ssh-copy-id will ask for the other node's password, so in case you have not set one you can supply the default password, i.e. vagrant) -

ssh-copy-id 100.0.0.2
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/vagrant/.ssh/id_rsa.pub"
The authenticity of host '100.0.0.2 (100.0.0.2)' can't be established.
ECDSA key fingerprint is SHA256:uY6GIjFdI9qTC4QYb980QRk+WblJF9cd5glr3SmmL+w.

Type "yes" when it asks - Are you sure you want to continue connecting (yes/no)?

Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
vagrant@100.0.0.2's password: 

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh '100.0.0.2'"
and check to make sure that only the key(s) you wanted were added.

Copy it to the kworker node (ssh-copy-id will ask for the other node's password, so in case you have not set one you can supply the default password, i.e. vagrant) -

ssh-copy-id 100.0.0.3
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/vagrant/.ssh/id_rsa.pub"
The authenticity of host '100.0.0.3 (100.0.0.3)' can't be established.
ECDSA key fingerprint is SHA256:uY6GIjFdI9qTC4QYb980QRk+WblJF9cd5glr3SmmL+w.

Type "yes" when it asks - Are you sure you want to continue connecting (yes/no)?

Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
vagrant@100.0.0.3's password: 

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh '100.0.0.3'"
and check to make sure that only the key(s) you wanted were added.
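
At this point passwordless SSH from amaster to both nodes should work. A quick verification (each command should print the remote hostname without asking for a password; you may be asked once to confirm a host fingerprint):

ssh vagrant@kmaster hostname
ssh vagrant@kworker hostname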


Step 5: Install python3-pip (only need to run on ansible node i.e. amaster)

Before installing python3-pip, you need to update the package list from the repositories.

Run the following command (on all the nodes)

sudo apt-get update

Now you need to install python3-pip; use the following command (only need to run on ansible node i.e. amaster)

sudo apt install python3-pip

To proceed with the installation press "y"

Do you want to continue? [Y/n] y

After the installation, verify the Python and pip versions

python -V
Python 2.7.15+
pip3 -V
pip 9.0.1 from /usr/lib/python3/dist-packages (python 3.6)
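
Note that plain python on this box still points at Python 2; the kubespray tooling used in the following steps runs on Python 3, which you can double-check with:

python3 -V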

Step 6: Clone the kubespray git repo (only need to run on ansible node i.e. amaster)

Next we are going to clone the kubespray repository. Use the following git command to clone kubespray

git clone https://github.com/kubernetes-sigs/kubespray.git
Cloning into 'kubespray'...
remote: Enumerating objects: 3, done.
remote: Counting objects: 100% (3/3), done.
remote: Compressing objects: 100% (3/3), done.
remote: Total 43626 (delta 0), reused 1 (delta 0), pack-reused 43623
Receiving objects: 100% (43626/43626), 12.72 MiB | 5.18 MiB/s, done.
Resolving deltas: 100% (24242/24242), done.
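
The clone leaves you on the default branch, which moves quickly. If you want the run to match a tested release (such as the versions listed at the top of this article), you can list the available tags and check one out; the exact tag name to use is an assumption you should verify against the list:

git -C kubespray tag -l
git -C kubespray checkout <release-tag>   # replace <release-tag> with a tag from the list above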

Step 7: Install kubespray packages from "requirements.txt" (only need to run on ansible node i.e. amaster)

Go to the "kubespray" directory

cd kubespray

Install the kubespray packages

sudo pip3 install -r requirements.txt
The directory '/home/vagrant/.cache/pip/http' or its parent directory is not owned by the current user and the cache has been disabled. Please check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
The directory '/home/vagrant/.cache/pip' or its parent directory is not owned by the current user and caching wheels has been disabled. check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
Collecting ansible==2.9.6 (from -r requirements.txt (line 1))
  Downloading https://files.pythonhosted.org/packages/ae/b7/c717363f767f7af33d90af9458d5f1e0960db9c2393a6c221c2ce97ad1aa/ansible-2.9.6.tar.gz (14.2MB)
    100% |████████████████████████████████| 14.2MB 123kB/s 
Collecting jinja2==2.11.1 (from -r requirements.txt (line 2))
  Downloading https://files.pythonhosted.org/packages/27/24/4f35961e5c669e96f6559760042a55b9bcfcdb82b9bdb3c8753dbe042e35/Jinja2-2.11.1-py2.py3-none-any.whl (126kB)
    100% |████████████████████████████████| 133kB 4.1MB/s 
Collecting netaddr==0.7.19 (from -r requirements.txt (line 3))
  Downloading https://files.pythonhosted.org/packages/ba/97/ce14451a9fd7bdb5a397abf99b24a1a6bb7a1a440b019bebd2e9a0dbec74/netaddr-0.7.19-py2.py3-none-any.whl (1.6MB)
    100% |████████████████████████████████| 1.6MB 954kB/s 
Collecting pbr==5.4.4 (from -r requirements.txt (line 4))
  Downloading https://files.pythonhosted.org/packages/7a/db/a968fd7beb9fe06901c1841cb25c9ccb666ca1b9a19b114d1bbedf1126fc/pbr-5.4.4-py2.py3-none-any.whl (110kB)
    100% |████████████████████████████████| 112kB 7.0MB/s 
Collecting hvac==0.10.0 (from -r requirements.txt (line 5))
  Downloading https://files.pythonhosted.org/packages/8d/d7/63e63936792a4c85bea3884003b6d502a040242da2d72db01b0ada4bdb28/hvac-0.10.0-py2.py3-none-any.whl (116kB)
    100% |████████████████████████████████| 122kB 6.0MB/s 
Collecting jmespath==0.9.5 (from -r requirements.txt (line 6))
  Downloading https://files.pythonhosted.org/packages/a3/43/1e939e1fcd87b827fe192d0c9fc25b48c5b3368902bfb913de7754b0dc03/jmespath-0.9.5-py2.py3-none-any.whl
Collecting ruamel.yaml==0.16.10 (from -r requirements.txt (line 7))
  Downloading https://files.pythonhosted.org/packages/a6/92/59af3e38227b9cc14520bf1e59516d99ceca53e3b8448094248171e9432b/ruamel.yaml-0.16.10-py2.py3-none-any.whl (111kB)
    100% |████████████████████████████████| 112kB 5.6MB/s 
Requirement already satisfied: PyYAML in /usr/lib/python3/dist-packages (from ansible==2.9.6->-r requirements.txt (line 1))
Requirement already satisfied: cryptography in /usr/lib/python3/dist-packages (from ansible==2.9.6->-r requirements.txt (line 1))
Collecting MarkupSafe>=0.23 (from jinja2==2.11.1->-r requirements.txt (line 2))
  Downloading https://files.pythonhosted.org/packages/b2/5f/23e0023be6bb885d00ffbefad2942bc51a620328ee910f64abe5a8d18dd1/MarkupSafe-1.1.1-cp36-cp36m-manylinux1_x86_64.whl
Requirement already satisfied: six>=1.5.0 in /usr/lib/python3/dist-packages (from hvac==0.10.0->-r requirements.txt (line 5))
Collecting requests>=2.21.0 (from hvac==0.10.0->-r requirements.txt (line 5))
  Downloading https://files.pythonhosted.org/packages/1a/70/1935c770cb3be6e3a8b78ced23d7e0f3b187f5cbfab4749523ed65d7c9b1/requests-2.23.0-py2.py3-none-any.whl (58kB)
    100% |████████████████████████████████| 61kB 6.9MB/s 
Collecting ruamel.yaml.clib>=0.1.2; platform_python_implementation == "CPython" and python_version < "3.9" (from ruamel.yaml==0.16.10->-r requirements.txt (line 7))
  Downloading https://files.pythonhosted.org/packages/53/77/4bcd63f362bcb6c8f4f06253c11f9772f64189bf08cf3f40c5ccbda9e561/ruamel.yaml.clib-0.2.0-cp36-cp36m-manylinux1_x86_64.whl (548kB)
    100% |████████████████████████████████| 552kB 2.5MB/s 
Requirement already satisfied: certifi>=2017.4.17 in /usr/lib/python3/dist-packages (from requests>=2.21.0->hvac==0.10.0->-r requirements.txt (line 5))
Requirement already satisfied: idna<3,>=2.5 in /usr/lib/python3/dist-packages (from requests>=2.21.0->hvac==0.10.0->-r requirements.txt (line 5))
Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/lib/python3/dist-packages (from requests>=2.21.0->hvac==0.10.0->-r requirements.txt (line 5))
Requirement already satisfied: chardet<4,>=3.0.2 in /usr/lib/python3/dist-packages (from requests>=2.21.0->hvac==0.10.0->-r requirements.txt (line 5))
Installing collected packages: MarkupSafe, jinja2, ansible, netaddr, pbr, requests, hvac, jmespath, ruamel.yaml.clib, ruamel.yaml
  Running setup.py install for ansible ... done
  Found existing installation: requests 2.18.4
    Not uninstalling requests at /usr/lib/python3/dist-packages, outside environment /usr
Successfully installed MarkupSafe-1.1.1 ansible-2.9.6 hvac-0.10.0 jinja2-2.11.1 jmespath-0.9.5 netaddr-0.7.19 pbr-5.4.4 requests-2.23.0 ruamel.yaml-0.16.10 ruamel.yaml.clib-0.2.0
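
Once pip finishes, it is worth confirming that the Ansible version pinned in requirements.txt (2.9.6 in the output above) is the one now on your PATH:

ansible --version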

Step 8: Copy the sample inventory (only need to run on ansible node i.e. amaster)

Now we need to copy kubespray's sample inventory to a directory for our own cluster using the following command

cp -rfp inventory/sample inventory/mycluster
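
This gives you your own inventory directory to edit without touching the sample one. You can confirm what was copied (the exact layout may vary between kubespray releases):

ls inventory/mycluster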

Step 9: Prepare hosts.yml for kubespray (only need to run on ansible node i.e. amaster)

This step needs a little care because we need to generate hosts.yml with the IPs of our nodes.

First we are going to declare a variable "IPS" to store the IP addresses of the kubernetes nodes, i.e. kmaster (100.0.0.2) and kworker (100.0.0.3), and then run kubespray's inventory builder

declare -a IPS=(100.0.0.2 100.0.0.3)
CONFIG_FILE=inventory/mycluster/hosts.yml python3 contrib/inventory_builder/inventory.py ${IPS[@]}
DEBUG: Adding group all
DEBUG: Adding group kube-master
DEBUG: Adding group kube-node
DEBUG: Adding group etcd
DEBUG: Adding group k8s-cluster
DEBUG: Adding group calico-rr
DEBUG: adding host node1 to group all
DEBUG: adding host node2 to group all
DEBUG: adding host node1 to group etcd
DEBUG: adding host node1 to group kube-master
DEBUG: adding host node2 to group kube-master
DEBUG: adding host node1 to group kube-node
DEBUG: adding host node2 to group kube-node

After running the above commands, do verify hosts.yml; it should look like this -

vi inventory/mycluster/hosts.yml
all:
  hosts:
    node1:
      ansible_host: 100.0.0.2
      ip: 100.0.0.2
      access_ip: 100.0.0.2
    node2:
      ansible_host: 100.0.0.3
      ip: 100.0.0.3
      access_ip: 100.0.0.3
  children:
    kube-master:
      hosts:
        node1:
        node2:
    kube-node:
      hosts:
        node1:
        node2:
    etcd:
      hosts:
        node1:
    k8s-cluster:
      children:
        kube-master:
        kube-node:
    calico-rr:
      hosts: {}
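
The inventory builder names the hosts node1 and node2 and places both in the kube-master group. Cluster-wide settings live in the group_vars directory copied in Step 8; for example, you can inspect (and change) the network plugin or the Kubernetes version there. A sketch using grep - the file layout shown is from the release used here and may differ in newer releases:

grep -r "kube_network_plugin:" inventory/mycluster/group_vars/
grep -r "kube_version:" inventory/mycluster/group_vars/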

Step 10: Run the ansible-playbook (only need to run on ansible node i.e. amaster)

Now we have completed all the prerequisites for running the ansible-playbook.

Use the following ansible-playbook command to begin the installation

ansible-playbook -i inventory/mycluster/hosts.yml --become --become-user=root cluster.yml

Running the ansible-playbook takes a fair amount of time, and how long depends partly on your network bandwidth.

During the playbook run, if you face the error "ansible_memtotal_mb >= minimal_master_memory_mb", please refer to - How to fix – ansible_memtotal_mb >= minimal_master_memory_mb
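
If the playbook fails immediately with unreachable hosts, it usually helps to verify Ansible's SSH connectivity against the same inventory before digging deeper - a quick ad-hoc check with Ansible's ping module:

ansible all -i inventory/mycluster/hosts.yml -m ping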


Step 11: Install kubectl on kubernetes master (only need to run on kubernetes master i.e. kmaster)

Now you need to log into the kubernetes master, i.e. kmaster, and download kubectl onto it.

curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 41.9M  100 41.9M    0     0  5893k      0  0:00:07  0:00:07 --:--:-- 5962k
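
The downloaded file is not executable yet; the usual next step (as in the official kubectl installation docs) is to mark it executable and move it onto your PATH:

chmod +x kubectl
sudo mv kubectl /usr/local/bin/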

Now we need to copy the admin.conf file (the kubeconfig generated by kubespray) into .kube

sudo cp /etc/kubernetes/admin.conf /home/vagrant/config
mkdir .kube
mv config .kube/
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Check the kubectl version after installation

kubectl version
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.2", GitCommit:"52c56ce7a8272c798dbc29846288d7cd9fbae032", GitTreeState:"clean", BuildDate:"2020-04-16T11:56:40Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.2", GitCommit:"52c56ce7a8272c798dbc29846288d7cd9fbae032", GitTreeState:"clean", BuildDate:"2020-04-16T11:48:36Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}

Step 12: Verify the kubernetes nodes

Now we have completed all the required steps for installing kubernetes using kubespray.

Let's check the node status in our final step

kubectl get nodes
NAME    STATUS   ROLES    AGE   VERSION
node1   Ready    master   13m   v1.18.2
node2   Ready    master   13m   v1.18.2
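
Both nodes report Ready, and both carry the master role because the generated inventory placed node1 and node2 in the kube-master group. As a final sanity check, you can also confirm that the system pods (etcd, kube-apiserver, the network plugin, and so on) are healthy:

kubectl get pods -n kube-system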

Learn more on Kubernetes -

  1. Setup kubernetes on Ubuntu
  2. Setup Kubernetes on CentOs
  3. Setup HA Kubernetes Cluster with Kubespray
  4. Setup HA Kubernetes with Minikube
  5. Setup Kubernetes Dashboard for local kubernetes cluster
  6. Setup Kubernetes Dashboard On GCP(Google Cloud Platform)
  7. How to use Persistent Volume and Persistent Volume Claims in Kubernetes
  8. Deploy Spring Boot Microservice on local Kubernetes cluster
  9. Deploy Spring Boot Microservice on Cloud Platform(GCP)
  10. Setting up Ingress controller NGINX along with HAproxy inside Kubernetes cluster
  11. CI/CD Kubernetes | Setting up CI/CD Jenkins pipeline for kubernetes
  12. kubectl export YAML | Get YAML for deployed kubernetes resources(service, deployment, PV, PVC....)
  13. How to setup kubernetes jenkins pipeline on AWS?
  14. Implementing Kubernetes liveness, Readiness and Startup probes with Spring Boot Microservice Application?
  15. How to fix kubernetes pods getting recreated?
  16. How to delete all kubernetes PODS?
  17. How to use Kubernetes secrets?
  18. Share kubernetes secrets between namespaces?
  19. How to Delete PV(Persistent Volume) and PVC(Persistent Volume Claim) stuck in terminating state?
  20. Delete Kubernetes POD stuck in terminating state?
