How to Set Up a Kubernetes Cluster with Kubeadm on Debian 11
On this page
- Prerequisites
- Setting Up Systems
- Set Up the /etc/hosts File
- Set Up the UFW Firewall
- Enable Kernel Modules and Disable SWAP
- Installing Container Runtime: CRI-O
- Installing Kubernetes Packages
- Initializing Kubernetes Control Plane
- Adding Worker Nodes to Kubernetes
- Deploying Nginx Pod on Kubernetes Cluster
- Conclusion
Kubernetes, or k8s, is an open-source container orchestration platform that automates the deployment, management, and scaling of containerized applications. Originally created by Google, Kubernetes is now an open-source project and has become the standard for modern application deployment and computing platforms.
Kubernetes is the solution for the modern container deployment era. It provides service discovery and load balancing, storage orchestration, automated rollouts and rollbacks, self-healing, and secret and configuration management. Kubernetes enables cost-effective cloud-native development.
In this tutorial, you will set up the Kubernetes Cluster by:
- Setting up systems, which includes setting up the /etc/hosts file, enabling kernel modules, and disabling SWAP.
- Setting up UFW firewall by adding some ports that are required for Kubernetes and CNI Plugin (Calico).
- Installing CRI-O as the container runtime for Kubernetes.
- Installing Kubernetes packages such as kubelet, kubeadm, and kubectl.
- Initializing one control-plane node and adding two worker nodes.
Prerequisites
To complete this tutorial, you will need the following requirements:
- Three or more Debian 11 servers.
- A non-root user with sudo privileges.
Setting Up Systems
Before you start installing any packages for the Kubernetes deployment, you will need to set up all of your systems as required. This includes the following configurations:
- Setup correct /etc/hosts file: Each server hostname must resolve to the correct IP address. This can be done in multiple ways, but the easiest and simplest is using the /etc/hosts file on all servers.
- Setup UFW Firewall: For a production environment, it's always recommended to enable the firewall on both control-plane and worker nodes. You will set up the UFW firewall for the Kubernetes control plane, the worker nodes, and the CNI plugin Calico.
- Enable kernel modules: Kubernetes requires the kernel modules "overlay" and "br_netfilter" to be enabled on the Linux system so that iptables can see bridged traffic.
- Disable SWAP: This is mandatory; you must disable SWAP on all Kubernetes nodes, both control-plane and worker nodes. Otherwise, the kubelet service will not run properly.
Set Up the /etc/hosts File
In this first step, you will set up the system hostname and the /etc/hosts file on all of your servers. For this demonstration, we will use the following servers.
Hostname IP Address Used as
--------------------------------------------
k8s-master 192.168.5.10 control-plane
k8s-worker1 192.168.5.115 worker node
k8s-worker2 192.168.5.116 worker node
Run the hostnamectl command below to set the system hostname on each server.
For the control-plane node, run the following command to set the system hostname to "k8s-master".
sudo hostnamectl set-hostname k8s-master
For the Kubernetes worker nodes, run the following hostnamectl commands on the respective servers.
# on the k8s-worker1 server
sudo hostnamectl set-hostname k8s-worker1
# on the k8s-worker2 server
sudo hostnamectl set-hostname k8s-worker2
Next, edit the /etc/hosts file on all servers using the following command.
sudo nano /etc/hosts
Add the following configuration to the file. Be sure each hostname points to the correct IP address.
192.168.5.10 k8s-master
192.168.5.115 k8s-worker1
192.168.5.116 k8s-worker2
Save and close the file when you are finished.
Lastly, run the ping command against each hostname. Each should resolve to the correct IP address as defined in the /etc/hosts file.
ping k8s-master -c3
ping k8s-worker1 -c3
ping k8s-worker2 -c3
Set Up the UFW Firewall
Kubernetes requires certain ports to be open on all of your systems. UFW is not installed by default on Debian, so you will install the UFW firewall on all of your Debian systems and add the UFW rules required for the Kubernetes deployment.
For the Kubernetes control-plane, you need to open the following ports:
Protocol Direction Port Range Purpose Used By
-----------------------------------------------
TCP Inbound 6443 Kubernetes API server All
TCP Inbound 2379-2380 etcd server client API kube-apiserver, etcd
TCP Inbound 10250 Kubelet API Self, Control plane
TCP Inbound 10259 kube-scheduler Self
TCP Inbound 10257 kube-controller-manager Self
For the Kubernetes worker nodes, you need to open the following ports:
Protocol Direction Port Range Purpose Used By
--------------------------------------------------
TCP Inbound 10250 Kubelet API Self, Control plane
TCP Inbound 30000-32767 NodePort Services All
In this example, we will use Calico as the CNI (Container Network Interface) plugin, so you will open the following additional ports:
Protocol Direction Port Range Purpose
-------------------------------------------------------
TCP Bidirectional 179 Calico networking (BGP)
UDP Bidirectional 4789 Calico networking with VXLAN enabled
TCP Inbound 2379 etcd datastore
Install the UFW package on your Debian servers using the following apt command. Input Y to confirm the installation and press ENTER, and the installation will begin.
sudo apt install ufw
Next, allow the OpenSSH application profile using the command below. Then, enable the UFW firewall. When prompted for confirmation, input "y" to enable and start the UFW firewall.
sudo ufw allow "OpenSSH"
sudo ufw enable
On the control-plane node "k8s-master", run the following ufw command to open ports. Then, check and verify UFW rules.
Firewall rules for Kubernetes control plane.
sudo ufw allow 6443/tcp
sudo ufw allow 2379:2380/tcp
sudo ufw allow 10250/tcp
sudo ufw allow 10259/tcp
sudo ufw allow 10257/tcp
Firewall rules for Calico CNI plugin.
sudo ufw allow 179/tcp
sudo ufw allow 4789/udp
sudo ufw allow 4789/tcp
sudo ufw allow 2379/tcp
sudo ufw status
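The output should look similar to the following trimmed example (a sketch; the exact rule list depends on your configuration):
Status: active

To                         Action      From
--                         ------      ----
OpenSSH                    ALLOW       Anywhere
6443/tcp                   ALLOW       Anywhere
2379:2380/tcp              ALLOW       Anywhere
10250/tcp                  ALLOW       Anywhere
10259/tcp                  ALLOW       Anywhere
10257/tcp                  ALLOW       Anywhere
179/tcp                    ALLOW       Anywhere
4789/udp                   ALLOW       Anywhere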
On worker nodes "k8s-worker1" and "k8s-worker2", run the following ufw command to open some ports. Then, check the UFW firewall rules.
Firewall rules for Kubernetes worker nodes.
sudo ufw allow 10250/tcp
sudo ufw allow 30000:32767/tcp
Firewall rules for Calico on Kubernetes worker nodes.
sudo ufw allow 179/tcp
sudo ufw allow 4789/udp
sudo ufw allow 4789/tcp
sudo ufw allow 2379/tcp
sudo ufw status
Enable Kernel Modules and Disable SWAP
Kubernetes requires the kernel modules "overlay" and "br_netfilter" to be enabled on all servers. This lets iptables see bridged traffic. You will also need to enable port forwarding and disable SWAP.
Run the following command to enable the kernel modules "overlay" and "br_netfilter".
sudo modprobe overlay
sudo modprobe br_netfilter
To make this permanent, create a configuration file at "/etc/modules-load.d/k8s.conf" using the command below. This allows the Linux system to load the kernel modules during system boot.
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
Next, create the required sysctl parameters using the following command.
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
To apply the new sysctl configuration without a reboot, use the sysctl command below. It prints the sysctl parameters on your system; make sure the output includes the parameters you just added in the file "k8s.conf".
sudo sysctl --system
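To double-check the individual values, you can also query them directly; each of the following should report "1":
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward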
To disable SWAP, you will need to comment out the SWAP entry in the "/etc/fstab" file. This can be done with a single sed (stream editor) command or by manually editing the /etc/fstab file.
sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
or
sudo nano /etc/fstab
Next, turn off SWAP for the current session using the command below. Then, verify that SWAP is off using the "free -m" command. You should see SWAP with "0" values, which means it is now disabled.
sudo swapoff -a
free -m
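The output should look roughly like the following (your memory figures will differ); the important part is the "Swap:" row showing all zeros:
               total        used        free      shared  buff/cache   available
Mem:            3936         412        2870          10         653        3287
Swap:              0           0           0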
Installing Container Runtime: CRI-O
To set up Kubernetes Cluster, you must install the container runtime on all servers so that Pods can run. Multiple container runtimes can be used for Kubernetes deployments such as containerd, CRI-O, Mirantis Container Runtime, and Docker Engine (via cri-dockerd).
In this demonstration, we will use CRI-O as the container runtime for our Kubernetes deployment. So, you will install CRI-O on all servers, control-plane and worker nodes alike.
Before installing CRI-O, run the apt command below to install the base packages "gnupg2" and "apt-transport-https". Input Y to confirm the installation and press ENTER.
sudo apt install gnupg2 apt-transport-https
Now create new environment variables for the CRI-O installation: the variable "$OS" with the value "Debian_11" and the variable "$VERSION" with the value "1.24". In this example, we will install CRI-O v1.24 (the current version at the time of writing) for "Debian_11" systems.
export OS=Debian_11
export VERSION=1.24
Run the following command to add the CRI-O repository for Debian 11 system.
echo "deb [signed-by=/usr/share/keyrings/libcontainers-archive-keyring.gpg] https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/ /" > /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list
echo "deb [signed-by=/usr/share/keyrings/libcontainers-crio-archive-keyring.gpg] https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable:/cri-o:/$VERSION/$OS/ /" > /etc/apt/sources.list.d/devel:kubic:libcontainers:stable:cri-o:$VERSION.list
Run the following command to add the GPG key for the CRI-O repository.
sudo mkdir -p /usr/share/keyrings
curl -L https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/Release.key | sudo gpg --dearmor -o /usr/share/keyrings/libcontainers-archive-keyring.gpg
curl -L https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable:/cri-o:/$VERSION/$OS/Release.key | sudo gpg --dearmor -o /usr/share/keyrings/libcontainers-crio-archive-keyring.gpg
Now update the system repositories and refresh the package index using the command below. You should see that the CRI-O repository has been added to your Debian 11 servers.
sudo apt update
To install the CRI-O container runtime, run the following apt command. Input Y to confirm the installation and press ENTER, and the CRI-O installation will begin.
sudo apt install cri-o cri-o-runc cri-tools
After installation is finished, edit the CRI-O configuration "/etc/crio/crio.conf" using the below command.
sudo nano /etc/crio/crio.conf
On the "[crio.network]" section, uncomment the option "network_dir" and the "plugin_dir".
# The crio.network table containers settings pertaining to the management of
# CNI plugins.
[crio.network]
# The default CNI network name to be selected. If not set or "", then
# CRI-O will pick-up the first one found in network_dir.
# cni_default_network = ""
# Path to the directory where CNI configuration files are located.
network_dir = "/etc/cni/net.d/"
# Paths to directories where CNI plugin binaries are located.
plugin_dirs = [
"/opt/cni/bin/",
]
When you are finished, save and close the file.
Next, edit the CRI-O bridge configuration "/etc/cni/net.d/100-crio-bridge.conf" using the below command.
sudo nano /etc/cni/net.d/100-crio-bridge.conf
Change the default subnet to your custom subnet. This subnet will be used to assign IP addresses to Pods on the Kubernetes cluster. Also, make sure the subnet matches the subnet configured for the CNI plugin.
In this example, we will use the subnet "10.42.0.0/24" for Pods on the Kubernetes cluster.
...
"ranges": [
[{ "subnet": "10.42.0.0/24" }],
[{ "subnet": "1100:200::/24" }]
]
...
Save and close the file when you are done.
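For reference, here is a sketch of how the complete "100-crio-bridge.conf" typically looks after the change. All values other than the two subnets are illustrative CRI-O defaults and may differ slightly between CRI-O versions, so only edit the "ranges" section of your existing file:
{
    "cniVersion": "0.3.1",
    "name": "crio",
    "type": "bridge",
    "bridge": "cni0",
    "isGateway": true,
    "ipMasq": true,
    "hairpinMode": true,
    "ipam": {
        "type": "host-local",
        "routes": [
            { "dst": "0.0.0.0/0" },
            { "dst": "1100:200::1/24" }
        ],
        "ranges": [
            [{ "subnet": "10.42.0.0/24" }],
            [{ "subnet": "1100:200::/24" }]
        ]
    }
}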
Next, run the following systemctl command to restart the CRI-O service and apply new changes.
sudo systemctl restart crio
Lastly, enable the CRI-O service to run at system boot. Then, check and verify the CRI-O service status. You should see that the CRI-O service is enabled and currently running.
sudo systemctl enable crio
sudo systemctl status crio
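Optionally, you can verify that the crictl tool (installed earlier with cri-tools) can talk to CRI-O over its socket. If it can, the output reports "cri-o" as the runtime name along with its version:
sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version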
Installing Kubernetes Packages
You have installed the CRI-O container runtime. Now you will install the Kubernetes packages on all of your Debian systems: kubeadm for bootstrapping the Kubernetes cluster; kubelet, the primary node agent of the Kubernetes Cluster; and kubectl, the command-line utility for managing the Kubernetes cluster.
In this example, we will install Kubernetes packages using the repository provided by Kubernetes. So, you will add the Kubernetes repository to all of your Debian systems.
Run the following command to add the Kubernetes repository and GPG key.
sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
Update and refresh your Debian repository and package index.
sudo apt update
When the update is finished, install Kubernetes packages using the following apt command. Input Y to confirm the installation and press ENTER to continue, and the installation will begin.
sudo apt install kubelet kubeadm kubectl
After installation is finished, run the following command to pin the current version of the Kubernetes packages. This prevents the Kubernetes packages from being upgraded automatically and avoids version skew between them.
sudo apt-mark hold kubelet kubeadm kubectl
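You can verify that the packages are pinned and check the installed versions with the commands below; "apt-mark showhold" should list kubelet, kubeadm, and kubectl:
sudo apt-mark showhold
kubeadm version
kubectl version --client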
Initializing Kubernetes Control Plane
You have finished all dependencies and requirements for deploying Kubernetes Cluster. Now you will start the Kubernetes Cluster by initializing the Control Plane node for the first time. In this example, the Kubernetes Control Plane will be installed on the "k8s-master" server with the IP address "192.168.5.10".
Before initializing the Control Plane node, run the following command to check that the "br_netfilter" kernel module is enabled. If the command produces output, the "br_netfilter" module is loaded.
lsmod | grep br_netfilter
Next, run the following command to download the images required for the Kubernetes Cluster. This command downloads all container images needed to create the Kubernetes Cluster, such as coredns, kube-apiserver, etcd, kube-controller-manager, kube-proxy, and the pause container image.
sudo kubeadm config images pull
After the download is finished, run the "crictl" command below to check the list of available images on the "k8s-master" server. You should see the list of images that will be used for creating the Kubernetes Cluster.
sudo crictl images
Next, run the following "kubeadm init" command to initialize the Kubernetes Cluster on the "k8s-master" server. This node "k8s-master" will automatically be selected as the Kubernetes Control Plane because this is the first time initializing the cluster.
Also, in this example, we specify the network for Pods to "10.42.0.0/24", which is the same subnet as the CRI-O bridge configuration "/etc/cni/net.d/100-crio-bridge.conf".
The "--apiserver-advertise-address" determines in which IP address the Kubernetes API server will be running, this example uses the internal IP address "192.168.5.10".
For the "--cri-socket" option here, we specify the CRI socket to the CRI-O container runtime socket that is available on "/var/run/crio/crio.sock". If you are using different Container Runtime, then you must change the path of the socket file, or you can just remove this option "--cri-socket" because the kubeadm will detect the Container Runtime socket automatically.
sudo kubeadm init --pod-network-cidr=10.42.0.0/24 \
--apiserver-advertise-address=192.168.5.10 \
--cri-socket=unix:///var/run/crio/crio.sock
Below is the output when you initialize the Kubernetes Cluster on the "k8s-master" server.
When the initialization is finished, you will see a message such as "Your Kubernetes control-plane has initialized successfully!", along with important output for setting up the Kubernetes credentials, deploying the Pod network add-on, and adding worker nodes to your Kubernetes Cluster.
Before you start managing the Kubernetes Cluster with the "kubectl" tool, you will need to set up the Kubernetes credentials. Run the following commands to do so.
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
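Alternatively, if you are operating as the root user, you can point kubectl at the admin kubeconfig directly instead of copying it:
export KUBECONFIG=/etc/kubernetes/admin.conf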
Now you can use the "kubectl" command to interact with your Kubernetes cluster. Run the following "kubectl" command to check the Kubernetes Cluster information. You should see the Kubernetes control plane and CoreDNS running.
kubectl cluster-info
To get full information about your Kubernetes cluster, use the dump option: "kubectl cluster-info dump".
Now that Kubernetes is running, you will set up the Calico CNI plugin for your Kubernetes Cluster. Run the following command to download the Calico manifest file "calico.yaml". Then, edit the file "calico.yaml" using the nano editor.
curl https://docs.projectcalico.org/manifests/calico.yaml -O
nano calico.yaml
Uncomment the "CALICO_IPV4POOL_CIDR" configuration and change the network subnet to "10.42.0.0/24". This subnet must match the subnet in the CRI-O bridge configuration and the "--pod-network-cidr" option used during Kubernetes initialization with the "kubeadm init" command.
...
- name: CALICO_IPV4POOL_CIDR
value: "10.42.0.0/24"
...
When you are finished, save and close the file.
Next, run the "kubectl" command below to deploy the Calico CNI plugin with the custom manifest file "calico.yaml". This command will create multiple Kubernetes resources for the Calico CNI plugin. Also, this will download Calico images and create new Pods for Calico.
kubectl apply -f calico.yaml
Now run the following kubectl command to check the available Pods on your Kubernetes Cluster. You should see additional Pods such as "calico-node-xxx" and "calico-kube-controllers-xxx".
kubectl get pods --all-namespaces
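The Calico Pods may take a few minutes to pull their images and become ready. If you want to block until everything in the "kube-system" namespace is up, you can use "kubectl wait":
kubectl wait --namespace kube-system --for=condition=Ready pods --all --timeout=300s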
Adding Worker Nodes to Kubernetes
After initializing the Kubernetes Control Plane on the "k8s-master" server, you will add worker nodes "k8s-worker1" and "k8s-worker2" to the Kubernetes Cluster.
Move to the "k8s-worker1" server and run the following "kubeadm join" command below to add the "k8s-worker1" to the Kubernetes Cluster. You may have different token and ca-cert-hash, you can see details of this information on the output message when you initialize the Control Plane node.
kubeadm join 192.168.5.10:6443 --token dbgk8h.nwzqqp1v5aqif5fy \
--discovery-token-ca-cert-hash sha256:7a543a545585358b143ce3e8633a8d673b6f628c5abc995939a58606c6dd219c
In the following output, you can see that the "k8s-worker1" server has joined the Kubernetes Cluster.
Next, move to the "k8s-worker2" server and run the "kubeadm join" command to add the "k8s-worker2" to the Kubernetes Cluster.
sudo kubeadm join 192.168.5.10:6443 --token dbgk8h.nwzqqp1v5aqif5fy \
--discovery-token-ca-cert-hash sha256:7a543a545585358b143ce3e8633a8d673b6f628c5abc995939a58606c6dd219c
You will see the same output message when the process is finished.
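If you no longer have the original join command, you can generate a new token and print a fresh join command from the Control Plane node:
sudo kubeadm token create --print-join-command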
Now go back to the Control Plane server "k8s-master" and run the following command to check all running pods on the Kubernetes Cluster. You should see additional pods for every Kubernetes component across all namespaces.
kubectl get pods --all-namespaces
or
kubectl get pods -o wide --all-namespaces
Lastly, check and verify all available nodes on the Kubernetes Cluster using the "kubectl" command below. You should see the "k8s-master" server running as the Kubernetes Control Plane, and the "k8s-worker1" and "k8s-worker2" servers running as worker nodes.
kubectl get nodes -o wide
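The output should look roughly like the following trimmed example (the "-o wide" option adds further columns such as INTERNAL-IP, OS-IMAGE, and CONTAINER-RUNTIME; ages and exact versions will differ):
NAME          STATUS   ROLES           AGE   VERSION
k8s-master    Ready    control-plane   25m   v1.24.x
k8s-worker1   Ready    <none>          10m   v1.24.x
k8s-worker2   Ready    <none>          9m    v1.24.x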
Deploying Nginx Pod on Kubernetes Cluster
Run the following command to create a new deployment for the Nginx web server. In this example, we will create new Nginx Pods based on the image "nginx:alpine" with two replicas.
kubectl create deployment nginx --image=nginx:alpine --replicas=2
Now create a new service of type "NodePort" that exposes the Nginx deployment, using the following kubectl command. This command creates a new Kubernetes service named "nginx" of type "NodePort", exposing port "80" of the Pods.
kubectl create service nodeport nginx --tcp=80:80
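If you prefer declarative manifests, the two imperative commands above correspond roughly to the following YAML (a sketch; the file name "nginx.yaml" is arbitrary), which you could apply with "kubectl apply -f nginx.yaml":
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:alpine
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80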
Next, run the following kubectl command to check the list of running pods on your Kubernetes cluster. You should see two Nginx Pods running.
kubectl get pods
Now check the list of available services on Kubernetes using the following command. You should see the "nginx" service of type NodePort exposing port "80" as port "31277" on the Kubernetes hosts (the NodePort assigned on your cluster will likely differ). A NodePort service always exposes a port in the range 30000-32767.
kubectl get svc
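Since the NodePort is assigned randomly from the 30000-32767 range, you can read the exact port that was allocated with a jsonpath query:
kubectl get svc nginx -o jsonpath='{.spec.ports[0].nodePort}'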
Run the curl commands below to access your Nginx deployment, replacing "31277" with the NodePort from your own "kubectl get svc" output.
curl k8s-worker1:31277
curl k8s-worker2:31277
Below is the output of the index.html source code from the "k8s-worker1" node.
And below is the index.html code from the "k8s-worker2" node.
Conclusion
Throughout this tutorial, you have completed the deployment of a three-node Kubernetes Cluster on Debian 11 servers. The Kubernetes Cluster is running with one control plane and two worker nodes, with CRI-O as the container runtime and the Calico CNI plugin providing networking for Pods on the cluster. Also, you have successfully deployed the Nginx web server inside the Kubernetes Cluster.
With the Kubernetes Cluster fully configured, you can start deploying your applications to it, or try installing the Kubernetes Dashboard to learn more about your Kubernetes environment.