Setting up Ingress controller NGINX along with HAproxy for Microservice deployed inside Kubernetes cluster

Working with kubernetes and managing external traffic is like juggling more than two balls. All cloud service providers (GCP, AWS, OpenShift, DigitalOcean) come with their own load balancers, which can help us expose internal services deployed inside the kubernetes cluster to the external world.

Exposing services deployed within a kubernetes cluster through a LoadBalancer with an external IP is really easy, but consider the production use case -

Will it be easy for you to remember a URL that has an IP address in it?

In my opinion, I wouldn't like to use any web service where I always need to remember an IP address to access it.

Well, do not worry - we have the HAProxy load balancer to take care of external traffic coming into the kubernetes cluster.

HAProxy - It does all the heavy lifting when it comes to routing external traffic into the kubernetes cluster, and it primarily requires -

  1. IP address
  2. Port

What are we going to do?

  1. Setup kubernetes cluster
  2. Install/setup HAProxy on the kubernetes node
  3. Update frontend, backend configuration of haproxy.cfg (/etc/haproxy/haproxy.cfg)
  4. Setup kubernetes Ingress controller
  5. Deploy spring boot microservice inside kubernetes cluster
  6. Create Ingress resource
  7. Use HAProxy to access the deployed microservice within kubernetes cluster


1. Setup kubernetes Cluster

The first and most essential step for you is - "You should have a kubernetes cluster set up and it should be running."

There are two ways to setup the kubernetes cluster -

  1. Setting it up locally on your laptop - Click here to set up your local kubernetes cluster
  2. Using a cloud service provider such as Google Cloud Platform - Click here to use Google Cloud for setting up a kubernetes cluster

If you are doing this for learning purposes then I would prefer option 1 - setting it up locally on your laptop.

But if you are already familiar with Google Cloud Platform then I would choose option 2.

Once you have set up your kubernetes cluster, you can run the following kubectl command to verify it

```bash
$ kubectl get all
```

You should see the default kubernetes service running as ClusterIP

```bash
NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.233.0.1   <none>        443/TCP   31m
```
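If you created a multi-node cluster, it is also worth confirming that all nodes have joined and are in the Ready state (a quick sanity check, not strictly required for this tutorial):

```bash
# List all cluster nodes and confirm each one reports STATUS "Ready"
$ kubectl get nodes -o wide
```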


2. Install/setup HAProxy on kubernetes node

Before we deep dive into the Kubernetes Ingress controller, let's complete our first and most important prerequisite of installing the HAProxy load balancer.

First, update your package information using the following command

Ubuntu

```bash
$ sudo apt-get update
```

CentOS

```bash
$ sudo yum check-update
```

You can use the following command to install the HAProxy load balancer

Ubuntu -

```bash
$ sudo apt-get -y install haproxy
```

CentOS

```bash
$ sudo yum install haproxy
```

Verify the installation - After the successful installation you should be able to see haproxy.cfg at /etc/haproxy/haproxy.cfg

You can also check the installed version -

```bash
$ haproxy -v
HA-Proxy version 1.8.8-1ubuntu0.11 2020/06/22
```
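On most distributions the package also registers HAProxy as a system service. A minimal sketch, assuming your node uses systemd, to make sure the service is enabled and running before you edit its configuration:

```bash
# Enable HAProxy on boot and confirm it is currently running (systemd-based systems)
$ sudo systemctl enable haproxy
$ sudo systemctl status haproxy
```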


3. Update frontend, backend configuration of haproxy.cfg (/etc/haproxy/haproxy.cfg)

(Diagram: HAProxy frontend and backend)

After the installation, you need to update the frontend as well as the backend configuration for HAProxy.

Frontend - It receives the requests from the clients.

```
frontend Local_Server
    bind *:80
    mode http
    default_backend k8s_server
```

(If you do not have the above frontend configuration in your haproxy.cfg file then please add it at the end of the file.)

Backend - It is responsible for fulfilling the requests forwarded by the frontend. Look at the following backend configuration -

```
backend k8s_server
    mode http
    balance roundrobin
    server web1.example.com  100.0.0.2:8080
```

(If you do not have this backend configuration in your haproxy.cfg then I would suggest adding it at the end of the file.)
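If your kubernetes cluster has more than one worker node, you would typically add one server line per node so that the roundrobin balancing can actually spread the traffic. A sketch, assuming a hypothetical second node at 100.0.0.3 (the check keyword turns on HAProxy's health checks):

```
backend k8s_server
    mode http
    balance roundrobin
    # One server entry per kubernetes node; "check" enables health checks
    server web1.example.com  100.0.0.2:8080 check
    server web2.example.com  100.0.0.3:8080 check
```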

If you are wondering where to find the haproxy.cfg then use the following command

```bash
$ sudo vi /etc/haproxy/haproxy.cfg
```

Your final haproxy.cfg should look like this -

```
global
        log /dev/log    local0
        log /dev/log    local1 notice
        chroot /var/lib/haproxy
        stats socket /run/haproxy/admin.sock mode 660 level admin expose-fd listeners
        stats timeout 30s
        user haproxy
        group haproxy
        daemon

        # Default SSL material locations
        ca-base /etc/ssl/certs
        crt-base /etc/ssl/private

        # Default ciphers to use on SSL-enabled listening sockets.
        # For more information, see ciphers(1SSL). This list is from:
        #  https://hynek.me/articles/hardening-your-web-servers-ssl-ciphers/
        # An alternative list with additional directives can be obtained from
        #  https://mozilla.github.io/server-side-tls/ssl-config-generator/?server=haproxy
        ssl-default-bind-ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:RSA+AESGCM:RSA+AES:!aNULL:!MD5:!DSS
        ssl-default-bind-options no-sslv3

defaults
        log     global
        mode    http
        option  httplog
        option  dontlognull
        timeout connect 5000
        timeout client  50000
        timeout server  50000
        errorfile 400 /etc/haproxy/errors/400.http
        errorfile 403 /etc/haproxy/errors/403.http
        errorfile 408 /etc/haproxy/errors/408.http
        errorfile 500 /etc/haproxy/errors/500.http
        errorfile 502 /etc/haproxy/errors/502.http
        errorfile 503 /etc/haproxy/errors/503.http
        errorfile 504 /etc/haproxy/errors/504.http

frontend Local_Server
    bind *:80
    mode http
    default_backend k8s_server

backend k8s_server
    mode http
    balance roundrobin
    server web1.example.com  100.0.0.2:8080
```


Once you have updated your haproxy.cfg file, you need to verify your configuration.

You can run the following command to check the correctness of the configuration.

```bash
$ haproxy -c -f /etc/haproxy/haproxy.cfg
```

If your configuration is correct then you should see the following message

```bash
Configuration file is valid
```

Now you need to restart haproxy

```bash
$ sudo service haproxy restart
```
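You can also quickly confirm that HAProxy came back up and is listening on port 80 (the port the frontend binds to); for example:

```bash
# Check that haproxy is listening on TCP port 80
$ sudo ss -ltnp | grep ':80'
```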


4. Setup kubernetes Ingress controller

Ingress controllers are not set up by default inside the kubernetes cluster; we need to set them up manually. There are many Ingress controllers available, such as AKS Application Gateway Ingress Controller, Ambassador, AppsCode Inc., AWS ALB Ingress Controller, Contour, Citrix Ingress Controller, F5 BIG-IP, Gloo, Istio, Kong, Skipper, and Traefik.

But in our case we are going with the NGINX Ingress Controller for Kubernetes.

If you like, there is an official guide for setting up the NGINX Ingress controller.

But here are the steps which I followed for setting it up -

  1. You need to clone the git repo -

```bash
$ git clone https://github.com/nginxinc/kubernetes-ingress.git
```

  2. Go to the directory kubernetes-ingress/deployments

```bash
$ cd kubernetes-ingress/deployments
```

  3. Inside the deployments directory you will find the namespace and service account yaml, e.g. ns-and-sa.yaml. Using this yaml we need to create the namespace and service account for the Ingress controller.

You can find ns-and-sa.yaml inside the directory common/ns-and-sa.yaml

```bash
$ kubectl apply -f common/ns-and-sa.yaml
namespace/nginx-ingress created
serviceaccount/nginx-ingress created
```

  4. As a next step you need to create the cluster role and cluster role binding for the service account which we created in step 3.

For the cluster role and cluster role binding you can find rbac.yaml inside the directory rbac/rbac.yaml

```bash
$ kubectl apply -f rbac/rbac.yaml
clusterrole.rbac.authorization.k8s.io/nginx-ingress created
clusterrolebinding.rbac.authorization.k8s.io/nginx-ingress created
```

  5. For the App Protect role, create the following role binding

```bash
$ kubectl apply -f rbac/ap-rbac.yaml
```

  6. Now you need to create a secret with a TLS certificate and key for the default server.

Use the default-server-secret.yaml available inside the directory common/default-server-secret.yaml

```bash
$ kubectl apply -f common/default-server-secret.yaml
secret/default-server-secret created
```

  7. For customizing the NGINX configuration you need to create a config map using nginx-config.yaml available at common/nginx-config.yaml

```bash
$ kubectl apply -f common/nginx-config.yaml
configmap/nginx-config created
```

  8. Let's create the ingress controller pods using the deployment

```bash
$ kubectl apply -f deployment/nginx-ingress.yaml
deployment.apps/nginx-ingress created
```

  9. Now run it as a DaemonSet

```bash
$ kubectl apply -f daemon-set/nginx-ingress.yaml
daemonset.apps/nginx-ingress created
```

  10. Now we can check everything running inside the namespace - nginx-ingress

```bash
$ kubectl get all -n nginx-ingress
```

After running the above command you should see something similar in your terminal

```bash
NAME                      READY   STATUS    RESTARTS   AGE
pod/nginx-ingress-hqghc   1/1     Running   0          42s
pod/nginx-ingress-jcxjv   1/1     Running   0          42s

NAME                           DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/nginx-ingress   2         2         2       2            2           <none>          42s
```
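Since HAProxy forwards external traffic to the node(s) where the ingress controller is exposed, it can also be handy to check which node each controller pod is running on:

```bash
# Show which node each nginx-ingress pod is scheduled on, along with its pod IP
$ kubectl get pods -n nginx-ingress -o wide
```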

Till now we have set up the NGINX Ingress controller but not the Ingress resource yet.

Before setting up the Ingress resource we first need to deploy some application inside our kubernetes cluster.

"Why do we need to deploy the application before setting up the Ingress resource?"

The answer to this question is - we need to have a service deployed and running on a certain port, so that we can use the service name and port number inside the Ingress resource.

5. Deploy spring boot microservice inside kubernetes cluster

Alright, let's deploy the spring boot microservice using the following command.

```bash
$ kubectl create deployment demo --image=rahulwagh17/kubernetes:jhooq-k8s-springboot
```

(Note - If you want to know more about deploying a Spring Boot microservice inside a kubernetes cluster then I would recommend going through - Deploy Spring Boot microservices on kubernetes)

Check the deployment

```bash
$ kubectl get deployments
NAME   READY   UP-TO-DATE   AVAILABLE   AGE
demo   1/1     1            1           5h20m
```

Expose the deployment as a service

```bash
$ kubectl expose deployment demo --type=ClusterIP --name=demo-service --port=8080
service/demo-service exposed
```

Okay, so now we have created the deployment and exposed it as a ClusterIP service running on port 8080.

You can view the exposed service

```bash
$ kubectl get service demo-service
NAME           TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
demo-service   ClusterIP   10.233.62.13   <none>        8080/TCP   43h
```
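Before wiring up the Ingress resource, you can optionally verify that the service itself responds. One quick way is to port-forward the service to your local machine and call the /hello endpoint exposed by this demo image (a sketch; any free local port will do):

```bash
# Forward local port 8080 to demo-service inside the cluster
$ kubectl port-forward service/demo-service 8080:8080

# In a second terminal, call the endpoint
$ curl http://localhost:8080/hello
Hello - Jhooq-k8s
```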

6. Create Ingress resource

Before you create the Ingress resource you should be mindful of two things -

  1. the deployed service name, e.g. demo-service
  2. the service port, e.g. 8080

Now you should create a yaml with the name ingress-resource.yaml

```bash
$ touch ingress-resource.yaml
```

You need to edit ingress-resource.yaml and fill it in with the configuration needed for the service, e.g. demo-service

Copy the following configuration and paste it into your ingress-resource.yaml

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: springboot-ingress
  annotations:
    ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: jhooq.demo
    http:
      paths:
      - backend:
          serviceName: demo-service
          servicePort: 8080
```
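(Note - this manifest uses the older extensions/v1beta1 API, which was removed in Kubernetes 1.22. If your cluster is on a newer version, a roughly equivalent resource using the networking.k8s.io/v1 API would look like the sketch below; depending on how your controller is set up you may also need to add an ingressClassName.)

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: springboot-ingress
spec:
  rules:
  - host: jhooq.demo
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: demo-service
            port:
              number: 8080
```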

Alright, now after updating the ingress-resource.yaml, you need to create the ingress resource using the following command

```bash
$ kubectl create -f ingress-resource.yaml
```

Once your Ingress resource is created, you can check it with the following command

```bash
$ kubectl describe ing springboot-ingress
```

You should be able to see something similar in your terminal

```bash
Name:             springboot-ingress
Namespace:        default
Address:
Default backend:  default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
  Host        Path  Backends
  ----        ----  --------
  jhooq.demo
                 demo-service:8080 (10.233.90.4:8080)
Annotations:  ingress.kubernetes.io/rewrite-target: /
Events:
  Type    Reason          Age   From                      Message
  ----    ------          ----  ----                      -------
  Normal  AddedOrUpdated  103s  nginx-ingress-controller  Configuration for default/springboot-ingress was added or updated
  Normal  AddedOrUpdated  103s  nginx-ingress-controller  Configuration for default/springboot-ingress was added or updated
```

7. Use HAProxy to access the deployed microservice within kubernetes cluster

Once you have implemented all six steps, you are pretty much done setting up everything.

The one last thing remaining is to add a host entry to your /etc/hosts file.

Since our host IP address in this example is 100.0.0.2, make the following entry inside /etc/hosts

```bash
$ vi /etc/hosts
```

```bash
100.0.0.2 jhooq.demo
```

Let's test our microservice using the hostname, i.e. jhooq.demo

```bash
$ curl jhooq.demo/hello
Hello - Jhooq-k8s
```

(Note - You should test this from outside of your kubernetes cluster.)
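If you prefer not to edit /etc/hosts on every machine you test from, you can also send the Host header explicitly with curl against the HAProxy node IP (100.0.0.2 in this example):

```bash
# Same request, resolving the virtual host via an explicit Host header
$ curl -H "Host: jhooq.demo" http://100.0.0.2/hello
Hello - Jhooq-k8s
```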

Here is a screenshot from the browser.

(Screenshot: testing the microservice URL in the browser after setting up the HAProxy load balancer, ingress controller, kubernetes cluster and spring boot microservice)

Conclusion

If you are reading this conclusion then you have pretty much learned -

  1. Setting up kubernetes cluster
  2. Installing the HAProxy loadbalancer on your host machine
  3. Setting up ingress controller on kubernetes cluster
  4. Creating kubernetes deployment for spring boot microservice
  5. Exposing kubernetes deployment as service on ClusterIP
  6. Creating Ingress resource for the Spring Boot Microservice to communicate with the HAProxy loadbalancer
  7. Finally, testing the complete setup.

Learn more on Kubernetes -

  1. Setup kubernetes on Ubuntu
  2. Setup Kubernetes on CentOs
  3. Setup HA Kubernetes Cluster with Kubespray
  4. Setup HA Kubernetes with Minikube
  5. Setup Kubernetes Dashboard for local kubernetes cluster
  6. Setup Kubernetes Dashboard On GCP(Google Cloud Platform)
  7. How to use Persistent Volume and Persistent Volume Claims in Kubernetes
  8. Deploy Spring Boot Microservice on local Kubernetes cluster
  9. Deploy Spring Boot Microservice on Cloud Platform(GCP)
  10. Setting up Ingress controller NGINX along with HAproxy inside Kubernetes cluster
  11. CI/CD Kubernetes | Setting up CI/CD Jenkins pipeline for kubernetes
  12. kubectl export YAML | Get YAML for deployed kubernetes resources(service, deployment, PV, PVC....)
  13. How to setup kubernetes jenkins pipeline on AWS?
  14. Implementing Kubernetes liveness, Readiness and Startup probes with Spring Boot Microservice Application?
  15. How to fix kubernetes pods getting recreated?
  16. How to delete all kubernetes PODS?
  17. How to use Kubernetes secrets?
  18. Share kubernetes secrets between namespaces?
  19. How to Delete PV(Persistent Volume) and PVC(Persistent Volume Claim) stuck in terminating state?
  20. Delete Kubernetes POD stuck in terminating state?