How to Install a Kubernetes Docker Cluster on CentOS 7

Kubernetes is an open source platform for managing containerized applications, developed by Google. It allows you to manage, scale, and automatically deploy your containerized applications in a clustered environment. With Kubernetes, we can orchestrate our containers across multiple hosts, scale containerized applications with all resources on the fly, and have a centralized container management environment.

In this tutorial, I will show you step-by-step how to install and configure Kubernetes on CentOS 7. We will be using one server, 'k8s-master', as the Kubernetes master, and two servers, 'node01' and 'node02', as Kubernetes nodes.

Prerequisites

  • 3 CentOS 7 Servers
    • 10.0.15.10      k8s-master
    • 10.0.15.21      node01
    • 10.0.15.22      node02
  • Root privileges

What we will do

  1. Kubernetes Installation
  2. Kubernetes Cluster Initialization
  3. Adding node01 and node02 to the Cluster
  4. Testing - Create First Pod

Step 1 - Kubernetes Installation

In this first step, we will prepare all three servers for the Kubernetes installation, so run the commands in this step on the master and on both node servers.

We will prepare the servers for the Kubernetes installation by changing the existing configuration on each server and installing some packages, including docker-ce and Kubernetes itself.

- Configure Hosts

Edit the hosts file on all servers using the vim editor.

vim /etc/hosts

Paste the host list below.

10.0.15.10      k8s-master
10.0.15.21      node01
10.0.15.22      node02

Save and exit.
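Alternatively, the same entries can be appended without opening an editor, using a shell here-document (a sketch; adjust the IP addresses to match your environment):

```shell
# Append the cluster host entries to /etc/hosts in one step
cat <<'EOF' >> /etc/hosts
10.0.15.10      k8s-master
10.0.15.21      node01
10.0.15.22      node02
EOF
```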

- Disable SELinux

In this tutorial, we will not cover the SELinux configuration for Docker, so we will disable it.

Run the command below to disable SELinux.

setenforce 0
sed -i --follow-symlinks 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux

- Enable br_netfilter Kernel Module

The br_netfilter module is required for the Kubernetes installation. Enable this kernel module so that packets traversing the bridge are processed by iptables for filtering and port forwarding, and so that Kubernetes pods across the cluster can communicate with each other.

Run the command below to enable the br_netfilter kernel module.

modprobe br_netfilter
echo '1' > /proc/sys/net/bridge/bridge-nf-call-iptables
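Note that neither setting above survives a reboot. As a sketch (the file names below are arbitrary), they can be made persistent like this:

```shell
# Both directories normally already exist on CentOS 7
mkdir -p /etc/modules-load.d /etc/sysctl.d

# Load br_netfilter automatically at boot
echo 'br_netfilter' > /etc/modules-load.d/br_netfilter.conf

# Apply the bridge sysctl at boot as well
cat <<'EOF' > /etc/sysctl.d/99-kubernetes.conf
net.bridge.bridge-nf-call-iptables = 1
EOF
```

After the next reboot, the module is loaded and the sysctl is applied automatically.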

- Disable SWAP

Disable swap for the Kubernetes installation (the kubelet will not run with swap enabled) by executing the following command.

swapoff -a


And then edit the '/etc/fstab' file.

vim /etc/fstab

Comment out the line containing the swap entry, then save and exit the file.

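If you prefer a non-interactive approach, a sed one-liner can comment out the swap entry instead of editing the file by hand (a sketch; it prefixes '#' to every line containing a whitespace-delimited 'swap' field):

```shell
# Comment out every swap entry in /etc/fstab so swap stays off after reboot
sed -i '/\sswap\s/s/^/#/' /etc/fstab
```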

- Install Docker CE

Install the latest version of Docker CE from the official Docker repository.

Install the package dependencies for docker-ce.

yum install -y yum-utils device-mapper-persistent-data lvm2

Add the docker repository to the system and install docker-ce using the yum command.

yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum install -y docker-ce

Wait for the docker-ce installation.


- Install Kubernetes

Add the Kubernetes repository to the CentOS 7 system by running the following command.

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
        https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF

Now install the kubernetes packages kubeadm, kubelet, and kubectl using the yum command below.

yum install -y kubelet kubeadm kubectl


After the installation is complete, restart all those servers.

sudo reboot

Log in again to the servers and start the docker and kubelet services.

systemctl start docker && systemctl enable docker
systemctl start kubelet && systemctl enable kubelet

- Change the cgroup-driver

We need to make sure docker-ce and Kubernetes are using the same cgroup driver.

Check docker cgroup using the docker info command.

docker info | grep -i cgroup

You will see that Docker is using 'cgroupfs' as the cgroup driver.

Now run the command below to change the Kubernetes cgroup driver to 'cgroupfs'.

sed -i 's/cgroup-driver=systemd/cgroup-driver=cgroupfs/g' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

Reload the systemd system and restart the kubelet service.

systemctl daemon-reload
systemctl restart kubelet

Now we're ready to configure the Kubernetes Cluster.


Step 2 - Kubernetes Cluster Initialization

In this step, we will initialize the Kubernetes cluster configuration on the master.

Log in to the master server 'k8s-master' and run the command below to set up the Kubernetes master.

kubeadm init --apiserver-advertise-address=10.0.15.10 --pod-network-cidr=10.244.0.0/16


Note:

--apiserver-advertise-address = determines which IP address Kubernetes should advertise its API server on.

--pod-network-cidr = specifies the range of IP addresses for the pod network. We're using the 'flannel' virtual network, which expects this CIDR by default. If you want to use another pod network such as Weave Net or Calico, change the IP address range accordingly.

When the Kubernetes initialization is complete, you will see the result in the terminal output.


Note:

Copy the 'kubeadm join ... ... ...' command to your text editor. The command will be used to register new nodes to the Kubernetes cluster. Note that join tokens expire after 24 hours by default; if needed, you can print a fresh join command later on the master with 'kubeadm token create --print-join-command'.

Now, in order to use Kubernetes, we need to run the commands shown in the output.

Create a new '.kube' configuration directory and copy the 'admin.conf' configuration file.

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Next, deploy the flannel network to the kubernetes cluster using the kubectl command.

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml


The flannel network has been deployed to the Kubernetes cluster.

Wait for a minute and then check the Kubernetes nodes and pods using the commands below.

kubectl get nodes
kubectl get pods --all-namespaces

You will see that the 'k8s-master' node is running as the cluster 'master' with status 'Ready', and you will see all the pods that are needed for the cluster, including 'kube-flannel-ds' for the pod network configuration.

Make sure the status of all kube-system pods is 'Running'.


The Kubernetes cluster master initialization and configuration have been completed.

Step 3 - Adding node01 and node02 to the Cluster

In this step, we will add node01 and node02 to join the 'k8s' cluster.

Connect to the node01 server and run the kubeadm join command that we copied earlier.

kubeadm join 10.0.15.10:6443 --token vzau5v.vjiqyxq26lzsf28e --discovery-token-ca-cert-hash sha256:e6d046ba34ee03e7d55e1f5ac6d2de09fd6d7e6959d16782ef0778794b94c61e


Connect to the node02 server and run the kubeadm join command that we copied earlier.

kubeadm join 10.0.15.10:6443 --token vzau5v.vjiqyxq26lzsf28e --discovery-token-ca-cert-hash sha256:e6d046ba34ee03e7d55e1f5ac6d2de09fd6d7e6959d16782ef0778794b94c61e


Wait a few minutes, then go back to the 'k8s-master' server and check the nodes and pods using the following commands.

kubectl get nodes
kubectl get pods --all-namespaces

You will see that node01 and node02 have been added to the cluster with status 'Ready'.


node01 and node02 have been added to the kubernetes cluster.

Step 4 - Testing - Create First Pod

In this step, we will do a test by deploying an Nginx pod to the Kubernetes cluster. A pod is a group of one or more containers (such as Docker containers) with shared storage and network that runs under Kubernetes.
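For illustration, pods can also be described declaratively in a manifest file. The sketch below writes a minimal manifest for a single nginx container to a hypothetical file 'nginx-pod.yml' (in this tutorial, we will create the pod through a deployment instead):

```shell
# Write a minimal pod manifest for a single nginx container (illustration only)
cat <<'EOF' > nginx-pod.yml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
EOF
```

Such a manifest could then be created with 'kubectl apply -f nginx-pod.yml'.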

Log in to the 'k8s-master' server and create a new deployment named 'nginx' using the kubectl command.

kubectl create deployment nginx --image=nginx

To see the details of the 'nginx' deployment specification, run the following command.

kubectl describe deployment nginx

And you will get the nginx pod deployment specification.

Next, we will make the nginx pod accessible via the internet. For this, we need to create a new service of type NodePort.

Run the kubectl command below.

kubectl create service nodeport nginx --tcp=80:80
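The imperative command above is roughly equivalent to applying a declarative manifest. The sketch below writes such a manifest to a hypothetical file 'nginx-svc.yml'; the selector 'app: nginx' matches the label that 'kubectl create deployment' assigns to the pods:

```shell
# Write the declarative equivalent of the imperative NodePort command above
cat <<'EOF' > nginx-svc.yml
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
EOF
```

Applying it with 'kubectl apply -f nginx-svc.yml' would create the same NodePort service.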


Make sure there are no errors. Now check the nginx service NodePort and IP using the kubectl commands below.

kubectl get pods
kubectl get svc


You will see that the nginx pod is running under the cluster IP address '10.160.60.38' on port 80, and that it is exposed on the nodes' main IP addresses '10.0.15.x' on port '30691'.

From the 'k8s-master' server, run the curl commands below.

curl node01:30691


curl node02:30691


The Nginx pod has now been deployed on the Kubernetes cluster and is accessible via the internet.

Now access it from a web browser.

http://10.0.15.10:30691/

And you will get the Nginx default page.


On the node02 server - http://10.0.15.22:30691/


The Kubernetes cluster installation and configuration on CentOS 7 has been completed successfully.

