
How to Set Up a Kubernetes Cluster with Kubeadm on Ubuntu 22.04

Kubernetes, or k8s, is an open-source platform for container orchestration that automates the deployment, management, and scaling of containerized applications. Kubernetes was originally created by Google and has since become an open-source project and the standard for modern application deployment and computing platforms.

Kubernetes is the solution for the modern container deployment era. It provides service discovery and load balancing, storage orchestration, automated rollouts and rollbacks, self-healing, and secret and configuration management. Kubernetes enables cost-effective cloud-native development.

In this tutorial, you will set up the Kubernetes Cluster by:

  1. Setting up the systems, which includes setting up the /etc/hosts file, enabling kernel modules, and disabling SWAP.
  2. Setting up the UFW firewall by opening the ports required by Kubernetes.
  3. Installing containerd as the container runtime for Kubernetes.
  4. Installing Kubernetes packages such as kubelet, kubeadm, and kubectl.
  5. Installing Flannel network plugin for Kubernetes Pods.
  6. Initializing one control-plane node and adding two worker nodes.

Prerequisites

To complete this tutorial, you will need the following requirements:

  1. Three Ubuntu 22.04 servers: one control-plane node (this guide uses "cplane1" with IP address 192.168.5.10) and two worker nodes ("worker1" at 192.168.5.25 and "worker2" at 192.168.5.26).
  2. A non-root user with sudo/root administrator privileges on each server.

Setting Up Systems

Before you start installing any packages, you will need to set up all of your systems as required for the Kubernetes deployment. This includes the following configurations:

Setup /etc/hosts file

In this first step, you will set up the system hostname and the /etc/hosts file on all of your servers. For this demonstration, we will use the following servers.

Hostname    IP Address        Used as
--------------------------------------------
cplane1     192.168.5.10      control-plane
worker1     192.168.5.25      worker node
worker2     192.168.5.26      worker node

Run the hostnamectl command below to set up the system hostname on each server.

For the control-plane node, run the following command to set up the system hostname to "cplane1".

sudo hostnamectl set-hostname cplane1

For Kubernetes worker nodes, run the following hostnamectl command.

# setup hostname worker1
sudo hostnamectl set-hostname worker1

# setup hostname worker2
sudo hostnamectl set-hostname worker2
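
You can verify the new hostname immediately by running hostnamectl without any arguments; the "Static hostname" field should show the name you just set.

hostnamectl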

Next, modify the /etc/hosts file on all servers using the following command.

sudo nano /etc/hosts

Add the following configuration to the file. Be sure each hostname points to the correct IP address.

192.168.5.10 cplane1
192.168.5.25 worker1
192.168.5.26 worker2

Save and close the file when you are finished.

Lastly, run the ping command against each hostname; each should resolve to the correct IP address as defined in the /etc/hosts file.

ping cplane1 -c3
ping worker1 -c3
ping worker2 -c3

Configuring UFW Firewall

Kubernetes requires several ports to be open on all of your systems. On Ubuntu, UFW is the default firewall, so you will add the ports required for the Kubernetes deployment to the UFW firewall.

For the Kubernetes control-plane, you need to open the following ports:

Protocol  Direction  Port Range   Purpose                  Used By
-------------------------------------------------------------------------
TCP       Inbound    6443         Kubernetes API server    All
TCP       Inbound    2379-2380    etcd server client API   kube-apiserver, etcd
TCP       Inbound    10250        Kubelet API              Self, Control plane
TCP       Inbound    10259        kube-scheduler           Self
TCP       Inbound    10257        kube-controller-manager  Self

For the Kubernetes worker nodes, you need to open the following ports:

Protocol  Direction  Port Range    Purpose            Used By
---------------------------------------------------------------
TCP       Inbound    10250         Kubelet API        Self, Control plane
TCP       Inbound    30000-32767   NodePort Services  All

Before adding the UFW rules, be sure to allow the OpenSSH application profile using the command below, then enable the UFW firewall. When prompted for confirmation, input "y" to enable and start the UFW firewall.

sudo ufw allow "OpenSSH"
sudo ufw enable

On the control-plane node "cplane1", run the following ufw command to open ports.

sudo ufw allow 6443/tcp
sudo ufw allow 2379:2380/tcp
sudo ufw allow 10250/tcp
sudo ufw allow 10259/tcp
sudo ufw allow 10257/tcp

sudo ufw status

On worker nodes "worker1" and "worker2", run the following ufw command to open some ports.

sudo ufw allow 10250/tcp
sudo ufw allow 30000:32767/tcp

sudo ufw status

Enable Kernel Modules and Disable SWAP

Kubernetes requires the kernel modules "overlay" and "br_netfilter" to be enabled on all servers. This lets iptables see bridged traffic. You will also need to enable port forwarding and disable SWAP.

Run the following command to enable the kernel modules "overlay" and "br_netfilter".

sudo modprobe overlay
sudo modprobe br_netfilter
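
As an optional check, you can confirm that both modules are now loaded; each should appear in the lsmod output.

lsmod | grep -E 'overlay|br_netfilter'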

To make this permanent, create a configuration file at "/etc/modules-load.d/k8s.conf". This tells the Linux system to load the kernel modules during system boot.

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

Next, create the required sysctl parameters using the following command.

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF

To apply the new sysctl configuration without a reboot, use the following command. You will see the list of sysctl parameters on your system; be sure the parameters you just added in the file "k8s.conf" appear in the output.

sudo sysctl --system
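
You can also query a single parameter directly to confirm it took effect; each of the following should return the value 1.

sysctl net.ipv4.ip_forward
sysctl net.bridge.bridge-nf-call-iptables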

To disable SWAP, you will need to comment out the SWAP entry in the "/etc/fstab" file. This can be done with a single sed (stream editor) command or by manually editing the /etc/fstab file.

sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab

or

sudo nano /etc/fstab

Now turn off SWAP for the current session using the command below. Then, verify that SWAP is off using the "free -m" command. You should see the Swap row showing "0" values, which means it is now disabled.

sudo swapoff -a
free -m
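
As an additional check, the "swapon --show" command prints nothing when no swap device is active.

swapon --show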

Installing Container Runtime: Containerd

To set up the Kubernetes Cluster, you must install a container runtime on all servers so that Pods can run. Multiple container runtimes can be used for Kubernetes deployments, such as containerd, CRI-O, Mirantis Container Runtime, and Docker Engine (via cri-dockerd).

In this demonstration, we will use containerd as the container runtime for our Kubernetes deployment. So, you will install containerd on all servers: the control-plane and worker nodes.

There are multiple ways to install containerd; the easiest is to use the pre-built packages provided by the Docker repository.

Now run the following command to add the Docker repository and GPG key.

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker.gpg
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

Update and refresh the package index on your Ubuntu system using the command below.

sudo apt update

Now install the containerd package using the apt command below. And the installation will begin.

sudo apt install containerd.io

After installation is finished, run the following command to stop the containerd service.

sudo systemctl stop containerd

Back up the default containerd configuration and generate a new fresh one using the following command.

sudo mv /etc/containerd/config.toml /etc/containerd/config.toml.orig
containerd config default | sudo tee /etc/containerd/config.toml

Now modify the containerd config file "/etc/containerd/config.toml" using the following command.

sudo nano /etc/containerd/config.toml

Change the value of cgroup driver "SystemdCgroup = false" to "SystemdCgroup = true". This will enable the systemd cgroup driver for the containerd container runtime.

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  ...
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true

When you are finished, save and close the file.
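
If you prefer a non-interactive edit, the same change can be made with a single sed command. This is a simple sketch that assumes "SystemdCgroup = false" appears only once in the config, which is the case for a freshly generated default file.

sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml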

Next, run the following systemctl command to start the containerd service.

sudo systemctl start containerd

Lastly, check and verify the containerd service using the commands below. You should see that containerd is enabled and will run automatically at system boot, and that the current status of the containerd service is running.

sudo systemctl is-enabled containerd
sudo systemctl status containerd

Installing Kubernetes Packages

You have installed the containerd container runtime. Now you will install the Kubernetes packages on all of your Ubuntu systems. This includes kubeadm for bootstrapping the Kubernetes cluster, kubelet, the node agent that runs on every machine in the Kubernetes Cluster, and kubectl, the command-line utility for managing the Kubernetes cluster.

In this example, we will install the Kubernetes packages using the repository provided by Kubernetes. So, you will add the Kubernetes repository to all of your Ubuntu systems.

Run the following apt command to install some package dependencies.

sudo apt install apt-transport-https ca-certificates curl -y

Now add the Kubernetes repository and GPG key using the following command.

sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list

Update and refresh your Ubuntu repository and package index.

sudo apt update

When the update is finished, install Kubernetes packages using the following apt command. Input Y to confirm the installation and press ENTER to continue, and the installation will begin.

sudo apt install kubelet kubeadm kubectl

After the installation is finished, run the following command to pin the current version of the Kubernetes packages. This prevents the Kubernetes packages from being updated automatically and avoids version skew between them.

sudo apt-mark hold kubelet kubeadm kubectl
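
You can verify the hold with the "apt-mark showhold" command, which should list all three packages.

apt-mark showhold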

Installing CNI (Container Network Interface) Plugin: Flannel

Kubernetes supports various Container Network Interface (CNI) plugins such as AWS VPC CNI, Azure CNI, Cilium, Calico, Flannel, and many more. In this example, we will use Flannel as the CNI plugin for the Kubernetes deployment. This requires you to install the Flannel binary across the Kubernetes nodes.

Run the commands below to create a new directory "/opt/bin", then download the Flannel binary into it.

sudo mkdir -p /opt/bin/
sudo curl -fsSLo /opt/bin/flanneld https://github.com/flannel-io/flannel/releases/download/v0.19.0/flanneld-amd64

Now make the "flanneld" binary executable by changing the file permissions using the command below. This "flanneld" binary will be executed automatically when you set up the Pod network add-on.

sudo chmod +x /opt/bin/flanneld

Initializing Kubernetes Control Plane

You have finished setting up all the dependencies and requirements for deploying the Kubernetes Cluster. Now you will start the Kubernetes Cluster by initializing the Control Plane node for the first time. In this example, the Kubernetes Control Plane will be installed on the "cplane1" server with the IP address "192.168.5.10".

Before initializing the Control Plane node, run the following command to check that the "br_netfilter" kernel module is enabled. If you get output from the command, the "br_netfilter" module is enabled.

lsmod | grep br_netfilter

Next, run the following command to download the images required for the Kubernetes Cluster. This command downloads all container images needed for creating the Kubernetes Cluster, such as coredns, kube-apiserver, etcd, kube-controller-manager, kube-proxy, and the pause container image.

sudo kubeadm config images pull
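
To see exactly which images kubeadm pulls for your version, you can list them with the following command.

sudo kubeadm config images list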

After the download is finished, run the following "kubeadm init" command to initialize the Kubernetes Cluster on the "cplane1" server. The "cplane1" node will automatically be selected as the Kubernetes Control Plane because this is the first time the cluster is being initialized.

sudo kubeadm init --pod-network-cidr=10.244.0.0/16 \
--apiserver-advertise-address=192.168.5.10 \
--cri-socket=unix:///run/containerd/containerd.sock

When the initialization is finished, you will see the message "Your Kubernetes control-plane has initialized successfully!" along with important output for setting up the Kubernetes credentials, deploying the Pod network add-on, and adding worker nodes to your Kubernetes Cluster.

Before you start using the Kubernetes Cluster, you will need to set up the Kubernetes credentials. Run the following commands to do so.

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
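
Alternatively, if you are running as the root user, you can point kubectl at the admin config directly by exporting the KUBECONFIG environment variable; kubeadm prints this same suggestion in its init output.

export KUBECONFIG=/etc/kubernetes/admin.conf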

Now you can use the "kubectl" command to interact with your Kubernetes cluster. Run the following "kubectl" command to check the Kubernetes Cluster information. You should see the Kubernetes control plane and CoreDNS running.

kubectl cluster-info

To get full information about your Kubernetes Cluster, add the dump option: "kubectl cluster-info dump".

After the Kubernetes Control Plane is running, run the following command to install the Flannel Pod network plugin. This command will automatically run the "flanneld" binary file and start the Flannel pods.

kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml

Check the list of running pods on your Kubernetes Cluster using the following command. If your Kubernetes installation was successful, you should see all the main Kubernetes pods running.

kubectl get pods --all-namespaces

Adding Worker Nodes to Kubernetes

After initializing the Kubernetes Control Plane on the "cplane1" server, you will add worker nodes "worker1" and "worker2" to the Kubernetes Cluster.

Move to the "worker1" server and run the following "kubeadm join" command to add "worker1" to the Kubernetes Cluster. Your token and ca-cert-hash will be different; you can find them in the output message shown when you initialized the Control Plane node.

sudo kubeadm join 192.168.5.10:6443 --token po3sb1.oux4z76nwb0veuna \
--discovery-token-ca-cert-hash sha256:f5068150fabaf85f3d04e19a395c60d19298ba441e2d9391e20df3267ea6cd28

When the process is finished, you will see output confirming that the "worker1" server has joined the Kubernetes Cluster.
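
If you no longer have the original join command, or the token has expired (tokens are valid for 24 hours by default), you can generate a new token together with the full join command on the "cplane1" server.

sudo kubeadm token create --print-join-command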

Next, move to the "worker2" server and run the "kubeadm join" command to add "worker2" to the Kubernetes Cluster.

sudo kubeadm join 192.168.5.10:6443 --token po3sb1.oux4z76nwb0veuna \
--discovery-token-ca-cert-hash sha256:f5068150fabaf85f3d04e19a395c60d19298ba441e2d9391e20df3267ea6cd28

You will see the same output message when the process is finished.

Now go back to the Control Plane server "cplane1" and run the following command to check all running pods on the Kubernetes Cluster. You should see additional pods for the Kubernetes components running on the new worker nodes.

kubectl get pods --all-namespaces

Lastly, check and verify all available nodes on the Kubernetes Cluster using the "kubectl" command below. You should see the "cplane1" server running as the Kubernetes Control Plane, and the "worker1" and "worker2" servers running as worker nodes.

kubectl get nodes -o wide
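
As a quick smoke test of the new cluster, you can deploy a single pod and watch it get scheduled onto one of the worker nodes. The deployment name "nginx" below is just an example using the public nginx image.

# create a test deployment using the public nginx image
kubectl create deployment nginx --image=nginx

# the pod should be scheduled on worker1 or worker2
kubectl get pods -o wide

# remove the test deployment when you are done
kubectl delete deployment nginx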

Conclusion

Throughout this tutorial, you have completed the deployment of a Kubernetes Cluster on three Ubuntu 22.04 servers. The Kubernetes Cluster is running with one control plane and two worker nodes, with containerd as the container runtime and the Flannel plugin handling networking for Pods on your cluster. With the Kubernetes Cluster fully configured, you can start deploying your applications or try installing the Kubernetes Dashboard to learn more about your Kubernetes environment.
