Multi-Node Kubernetes Cluster

Date: 2021-08-17
Author: Helmuth

This time I wanted to install a multi-node cluster using kubeadm. For simplicity I am using one physical box with multipass, which allows me to have multiple virtual machines for the nodes.

Multipass is delivered as a snap, so we first have to install the snap daemon snapd (if it is not installed already).

# apt-get install snapd
# snap install multipass
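To confirm that multipass is ready, it can report its version and the (still empty) list of instances:

# multipass version
# multipass list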

Next step is launching the virtual machines. Our setup will start as a single-node cluster, and we will then add the second node as a compute node.

I suggest allocating at least 2GB RAM, 5GB disk space and 2vCPU per node, but the more the better. In my setup I ended up with 4GB RAM, 20GB disk space and 4vCPUs per node.

# multipass launch --name k8s-control --cpus 4 --mem 4G --disk 20G
# multipass launch --name k8s-node --cpus 4 --mem 4G --disk 20G

The node k8s-control will be our control plane, and k8s-node will be our compute node.

Let's check the status of the nodes after some time:

# multipass info --all
Name:           k8s-control
State:          Running
IPv4:           10.135.201.221
Release:        Ubuntu 20.04.2 LTS
Image hash:     bca04157409e (Ubuntu 20.04 LTS)
Load:           0.14 0.08 0.03
Disk usage:     1.2G out of 19.2G
Memory usage:   166.4M out of 3.8G

Name:           k8s-node
State:          Running
IPv4:           10.135.201.143
Release:        Ubuntu 20.04.2 LTS
Image hash:     bca04157409e (Ubuntu 20.04 LTS)
Load:           0.31 0.12 0.04
Disk usage:     1.2G out of 19.2G
Memory usage:   168.8M out of 3.8G

Looks good so far!

To access the node for our control plane we can use multipass shell k8s-control, which drops us into a shell on the virtual machine.

We start with updating the node to the latest packages:

# multipass shell k8s-control
ubuntu@k8s-control:~$ sudo apt-get update && sudo apt-get -y dist-upgrade

We will install CRI-O for the container runtime. Let's prepare for this:

ubuntu@k8s-control:~$ sudo modprobe overlay
ubuntu@k8s-control:~$ sudo modprobe br_netfilter
ubuntu@k8s-control:~$ sudo vi /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables  = 1
net.ipv4.ip_forward                 = 1
net.bridge.bridge-nf-call-ip6tables = 1
ubuntu@k8s-control:~$ sudo sysctl --system

Next is installing CRI-O.

$ sudo vi /etc/apt/sources.list.d/cri-o.list
deb https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/xUbuntu_20.04/ /
deb https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable:/cri-o:/1.18/xUbuntu_20.04/ /
$ curl -L https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/xUbuntu_20.04/Release.key | sudo apt-key add -
$ curl -L https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable:cri-o:1.18/xUbuntu_20.04/Release.key | sudo apt-key add -
$ sudo apt-get update
$ sudo apt-get install cri-o cri-o-runc

Now we can start the CRI-O runtime:

ubuntu@k8s-control:~$ sudo systemctl daemon-reload
ubuntu@k8s-control:~$ sudo systemctl enable crio
ubuntu@k8s-control:~$ sudo systemctl start crio
ubuntu@k8s-control:~$ sudo systemctl status crio

Next step: install kubeadm. For this we also make sure that kubelet will find the CRI-O runtime.

ubuntu@k8s-control:~$ sudo vi /etc/default/kubelet
KUBELET_EXTRA_ARGS=--feature-gates="AllAlpha=false,RunAsGroup=true" --container-runtime=remote --cgroup-driver=systemd --container-runtime-endpoint='unix:///var/run/crio/crio.sock' --runtime-request-timeout=5m

Now we add the repository for Kubernetes. There is no entry for "focal" (20.04), so we use the entry for xenial, which still works fine.

ubuntu@k8s-control:~$ sudo vi /etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
ubuntu@k8s-control:~$ curl -L https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
ubuntu@k8s-control:~$ sudo apt-get update
ubuntu@k8s-control:~$ sudo apt-get install -y kubeadm=1.18.1-00 kubelet=1.18.1-00 kubectl=1.18.1-00
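A quick check that the pinned 1.18.1 versions actually landed:

ubuntu@k8s-control:~$ kubeadm version
ubuntu@k8s-control:~$ kubectl version --client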

Updates of kubeadm and the other tools must be controlled, so we lock in the current versions.

ubuntu@k8s-control:~$ sudo apt-mark hold kubelet kubeadm kubectl
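If you want to verify the hold later, apt-mark can list the pinned packages:

ubuntu@k8s-control:~$ apt-mark showhold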

Next we look into installing Calico for networking.

ubuntu@k8s-control:~$ wget https://docs.projectcalico.org/manifests/calico.yaml

There is one setting for the IPv4 pool from which pod addresses are allocated. This pool (192.168.0.0/16 by default) should not conflict with other addresses in the cluster.
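In the calico.yaml I downloaded, this pool is exposed as the CALICO_IPV4POOL_CIDR environment variable of the calico-node container, commented out by default (which means the default pool applies). One way to locate it before deciding whether to change it:

ubuntu@k8s-control:~$ grep -n -A 1 CALICO_IPV4POOL_CIDR calico.yaml

If you do change it, the podSubnet in the kubeadm configuration further below should be adjusted to match; I'm keeping the default 192.168.0.0/16 here.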

Next we add the local IP address of the control plane to the /etc/hosts file under a name; I'm using k8smaster in this example.

sudo vi /etc/hosts
10.135.201.221 k8smaster

As multipass manages the /etc/hosts file through cloud-init, we also have to update the corresponding template to ensure this entry will not be overwritten later:

sudo vi /etc/cloud/templates/hosts.debian.tmpl
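What we add there is the same entry as in /etc/hosts; instead of editing with vi, simply appending it to the template would also work, for example:

ubuntu@k8s-control:~$ echo "10.135.201.221 k8smaster" | sudo tee -a /etc/cloud/templates/hosts.debian.tmpl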

Following this we create a simple kubeadm configuration for initializing the master:

vi kubeadm-config.yaml

apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: 1.18.1
controlPlaneEndpoint: "k8smaster:6443"
networking:
  podSubnet: 192.168.0.0/16

With this file we can now initialize the cluster:

sudo kubeadm init --config=kubeadm-config.yaml --upload-certs \
  | tee kubeadm-init.out

To get kubectl access to the cluster, we copy the admin kubeconfig for the current user:

ubuntu@k8s-control:~$ mkdir .kube
ubuntu@k8s-control:~$ sudo cp -i /etc/kubernetes/admin.conf .kube/config
ubuntu@k8s-control:~$ sudo chown ubuntu: .kube/config
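At this point kubectl should already respond; the control plane will typically report NotReady until the network plugin is installed in the next step:

ubuntu@k8s-control:~$ kubectl get nodes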

And we can now apply the Calico configuration:

kubectl apply -f calico.yaml
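To watch Calico come up, check the pods in the kube-system namespace; once the calico-node and calico-kube-controllers pods are Running, the node should switch to Ready:

ubuntu@k8s-control:~$ kubectl get pods -n kube-system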

For convenience we can enable bash completion for kubectl, using the completion script that kubectl itself provides:

echo "source <(kubectl completion bash)" >> $HOME/.bashrc

The next step is connecting our second node, k8s-node, to the cluster as a worker.
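As a rough sketch of what that involves: the worker gets the same CRI-O, kubelet and kubeadm setup as above (plus the k8smaster entry in /etc/hosts), and is then joined with the kubeadm join command printed at the end of kubeadm init (we saved it in kubeadm-init.out). If that token has expired, a fresh command can be generated on the control plane; the token and hash below are placeholders:

ubuntu@k8s-control:~$ sudo kubeadm token create --print-join-command
ubuntu@k8s-node:~$ sudo kubeadm join k8smaster:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>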
