Multi-Node Kubernetes Cluster

This time I wanted to install a multi-node cluster using kubeadm. For simplicity I am using one physical box with multipass, which allows me to have multiple virtual machines for the nodes.

Multipass is delivered as a snap, so we first have to install the snap daemon snapd (if not already done).

# apt-get install snapd
# snap install multipass

Next step is installing the virtual machines. Our setup will start with a single node cluster, and we will then add the second node as a compute node.

I suggest allocating at least 2 GB RAM, 5 GB disk space and 2 vCPUs per node, but the more the better. In my setup I ended up with 4 GB RAM, 20 GB disk space and 4 vCPUs per node.

# multipass launch --name k8s-control --cpus 4 --mem 4G --disk 20G
# multipass launch --name k8s-node --cpus 4 --mem 4G --disk 20G

The node k8s-control will be our control plane, and k8s-node will be our compute node.

Let's check the status of the nodes after some time:

# multipass info --all
Name:           k8s-control
State:          Running
IPv4:           10.135.201.221
Release:        Ubuntu 20.04.2 LTS
Image hash:     bca04157409e (Ubuntu 20.04 LTS)
Load:           0.14 0.08 0.03
Disk usage:     1.2G out of 19.2G
Memory usage:   166.4M out of 3.8G

Name:           k8s-node
State:          Running
IPv4:           10.135.201.143
Release:        Ubuntu 20.04.2 LTS
Image hash:     bca04157409e (Ubuntu 20.04 LTS)
Load:           0.31 0.12 0.04
Disk usage:     1.2G out of 19.2G
Memory usage:   168.8M out of 3.8G

Looks good so far!

To access the node for our control plane we can use multipass shell k8s-control, which drops us into a shell on the virtual machine.

We start with updating the node to the latest packages:

# multipass shell k8s-control
ubuntu@k8s-control:~$ sudo apt-get update && sudo apt-get -y dist-upgrade

We will install CRI-O for the container runtime. Let's prepare for this:

ubuntu@k8s-control:~$ sudo modprobe overlay
ubuntu@k8s-control:~$ sudo modprobe br_netfilter
ubuntu@k8s-control:~$ sudo vi /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables  = 1
net.ipv4.ip_forward                 = 1
net.bridge.bridge-nf-call-ip6tables = 1
ubuntu@k8s-control:~$ sudo sysctl --system
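Note that modprobe only loads the modules for the current boot. To make them persist across reboots (an optional extra step, mirroring the modules-load.d approach used on the Hetzner node later in this post), the modules can be listed in a config file:

ubuntu@k8s-control:~$ sudo vi /etc/modules-load.d/k8s.conf
overlay
br_netfilter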

Next is installing CRI-O.

$ sudo vi /etc/apt/sources.list.d/cri-o.list
deb https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/xUbuntu_20.04/ /
deb https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable:/cri-o:/1.18/xUbuntu_20.04/ /
$ curl -L https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/xUbuntu_20.04/Release.key | sudo apt-key add -
$ curl -L https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable:cri-o:1.18/xUbuntu_20.04/Release.key | sudo apt-key add -
$ sudo apt-get update
$ sudo apt-get install cri-o cri-o-runc

Now we can start the CRI-O runtime:

ubuntu@k8s-control:~$ sudo systemctl daemon-reload
ubuntu@k8s-control:~$ sudo systemctl enable crio
ubuntu@k8s-control:~$ sudo systemctl start crio
ubuntu@k8s-control:~$ sudo systemctl status crio

Next step: install kubeadm. For this we also make sure that the kubelet will find the CRI-O runtime.

ubuntu@k8s-control:~$ sudo vi /etc/default/kubelet
KUBELET_EXTRA_ARGS=--feature-gates="AllAlpha=false,RunAsGroup=true" --container-runtime=remote --cgroup-driver=systemd --container-runtime-endpoint='unix:///var/run/crio/crio.sock' --runtime-request-timeout=5m

Next we add the repository for Kubernetes. There is no entry for "focal" (20.04), so we use the entry for xenial, which still works fine.

ubuntu@k8s-control:~$ sudo vi /etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
ubuntu@k8s-control:~$ curl -L https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
ubuntu@k8s-control:~$ sudo apt-get update
ubuntu@k8s-control:~$ sudo apt-get install -y kubeadm=1.18.1-00 kubelet=1.18.1-00 kubectl=1.18.1-00

Updates of kubeadm and the other tools must be controlled, so we lock the current version in:

ubuntu@k8s-control:~$ sudo apt-mark hold kubelet kubeadm kubectl

Next we look into installing Calico for networking.

ubuntu@k8s-control:~$ wget https://docs.projectcalico.org/manifests/calico.yaml

There is one setting for the IPv4 pool which is used to allocate addresses to pods. This pool (by default 192.168.0.0/16) should not conflict with other addresses in the cluster.
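If you need a different pod CIDR, the relevant section in calico.yaml is commented out by default and looks roughly like this (a sketch; uncomment it and adjust the value before applying the manifest):

# The default IPv4 pool to create on startup if none exists.
# - name: CALICO_IPV4POOL_CIDR
#   value: "192.168.0.0/16"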

Next is to add the local IP address of the control plane to the /etc/hosts file, with a name - I'm using k8smaster in this example.

sudo vi /etc/hosts
10.135.201.221 k8smaster

As multipass is managing the /etc/hosts file we also have to update the template to ensure this value will not be changed later:

sudo vi /etc/cloud/templates/hosts.debian.tmpl
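In my case I simply appended the same host entry at the end of the template (shown here as a sketch; the cloud-init placeholders already present in the template must remain untouched):

10.135.201.221 k8smaster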

Following this we create a simple kubeadm configuration for initializing the master:

vi kubeadm-config.yaml

apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: 1.18.1
controlPlaneEndpoint: "k8smaster:6443"
networking:
  podSubnet: 192.168.0.0/16

With this file we can now initialize the cluster:

sudo kubeadm init --config=kubeadm-config.yaml --upload-certs \
  | tee kubeadm-init.out

To get kubectl access for the current user, we copy the admin kubeconfig:

ubuntu@k8s-control:~$ mkdir .kube
ubuntu@k8s-control:~$ sudo cp -i /etc/kubernetes/admin.conf .kube/config
ubuntu@k8s-control:~$ sudo chown ubuntu: .kube/config

And we can now apply the Calico configuration:

kubectl apply -f calico.yaml
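As an optional check (not part of my original notes), we can watch the node and the kube-system pods until everything reports Ready respectively Running:

ubuntu@k8s-control:~$ kubectl get nodes
ubuntu@k8s-control:~$ kubectl get pods -n kube-system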

For future convenience we enable bash completion, using the completion script provided by kubectl:

echo "source <(kubectl completion bash)" >> $HOME/.bashrc

Next step is connecting our second node, k8s-node, to the cluster as a worker.
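The command for joining is printed at the end of the kubeadm init output (saved above in kubeadm-init.out). After repeating the runtime, kubelet and kubeadm installation steps on k8s-node and adding the k8smaster entry to its /etc/hosts, the join looks roughly like this (token and hash are placeholders to be taken from your own kubeadm-init.out):

ubuntu@k8s-node:~$ sudo kubeadm join k8smaster:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>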

Vanilla k8s on Hetzner

To learn more about k8s I wanted to install a vanilla k8s single-node cluster on top of a Hetzner baremetal server.

First I installed Ubuntu 20.04 on it, using LVM and RAID1.

Next step is ensuring that iptables will see bridged traffic, and enabling forwarding of packets:

# vi /etc/modules-load.d/k8s.conf
overlay
br_netfilter
# modprobe br_netfilter
# vi /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables  = 1
net.ipv4.ip_forward                 = 1
# sysctl --system

Next step is installing a container runtime. I'm using containerd, which is the runtime developed by Docker (and spun off as a separate project).

# apt-get install -y containerd runc

We have to adjust the configuration of containerd so that it lets systemd manage its cgroups:

# vi /etc/containerd/config.toml
version = 2
[plugins]
  [plugins."io.containerd.grpc.v1.cri"]
    [plugins."io.containerd.grpc.v1.cri".containerd]
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes]
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
          runtime_type = "io.containerd.runc.v2"
          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
            SystemdCgroup = true
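If /etc/containerd/config.toml does not exist yet on your installation (whether the Ubuntu package ships one depends on the version), a complete default configuration can be generated first and then edited:

# mkdir -p /etc/containerd
# containerd config default > /etc/containerd/config.toml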

Following that we restart containerd:

# service containerd restart

Next is installing kubeadm, which will perform the installation of k8s.

# apt-get install -y apt-transport-https curl gnupg
# curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
# vi /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
# apt-get update
# apt-get install -y kubelet kubeadm kubectl

As documented, the kubelet will now restart in a loop until we have finished the configuration.
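This can be observed with systemd if you are curious (an optional check, not required for the installation):

# systemctl status kubelet
# journalctl -u kubelet -f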

To avoid any unexpected upgrades of kubelet, kubeadm or kubectl in the future we mark the packages as "hold".

# apt-mark hold kubelet kubeadm kubectl

Since we are using containerd and not docker we will have to specify the cgroupDriver value when initializing the cluster.

In addition we want to specify a DNS name as the control plane endpoint; this will allow us to extend the cluster to high availability later.

And since we want to use swap on our system we will also instruct kubeadm not to complain about it.

# vi kubeadm.yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
failSwapOn: false
cgroupDriver: systemd
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
networking:
  podSubnet: "192.168.0.0/24"
controlPlaneEndpoint: "k8s.helmuth.at:6443"

Now we can initialize the cluster with this configuration, instructing kubeadm not to complain about the swap:

# kubeadm config images pull
# kubeadm init --ignore-preflight-errors Swap --config kubeadm.yaml

After some time the cluster is installed and is now waiting for the network to be configured:

# mkdir ~/.kube
# cp /etc/kubernetes/admin.conf ~/.kube/config
# kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml

Finally, since this is a single-node cluster, we need to "untaint" the node so that it can execute workload pods:

# kubectl taint nodes --all node-role.kubernetes.io/master-
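As an optional verification (not part of my original notes), the node should now show no taints and report Ready:

# kubectl get nodes
# kubectl describe node | grep -i taint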

So far so good! We have a working single-node cluster. Now it's time to add tools, like Helm, the Kubernetes Dashboard, and an Ingress Controller.

# curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3
# bash get_helm.sh
# helm repo add stable https://charts.helm.sh/stable

With Helm, installing an NGINX ingress controller is as simple as this:

# helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
# helm repo update
# vi nginx.yaml
controller:
  dnsPolicy: ClusterFirstWithHostNet
  hostNetwork: true
  reportNodeInternalIp: true
  kind: DaemonSet
# helm install -f nginx.yaml nginx ingress-nginx/ingress-nginx

Next on our list is cert-manager. This will allow us to use Let's Encrypt for deploying trusted certificates!

# helm repo add jetstack https://charts.jetstack.io
# helm repo update
# kubectl create namespace cert-manager
# vi cert-manager-install.yaml
installCRDs: true
ingressShim:
  defaultIssuerName: "letsencrypt-prod"
  defaultIssuerKind: "ClusterIssuer"
  defaultIssuerGroup: "cert-manager.io"
# helm install cert-manager --namespace cert-manager -f cert-manager-install.yaml jetstack/cert-manager
# vi cert-manager.yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    # You must replace this email address with your own.
    # Let's Encrypt will use this to contact you about expiring
    # certificates, and issues related to your account.
    email: someone@example.com
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      # Secret resource that will be used to store the account's private key.
      name: k8s-issuer-account-key
    # Add a single challenge solver, HTTP01 using nginx
    solvers:
    - http01:
        ingress:
          class: nginx
# kubectl apply -f cert-manager.yaml
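To see how the NGINX ingress and cert-manager fit together, here is a sketch of an Ingress that would request a certificate through the ClusterIssuer above; the host name, service name and port are hypothetical placeholders:

# vi example-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example
  annotations:
    kubernetes.io/ingress.class: "nginx"
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
spec:
  tls:
  - hosts:
    - example.helmuth.at
    secretName: example-tls
  rules:
  - host: example.helmuth.at
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: example-service
            port:
              number: 80
# kubectl apply -f example-ingress.yaml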

Great! So now let's deploy the Dashboard.

# helm repo add kubernetes-dashboard https://kubernetes.github.io/dashboard/
# helm repo update
# helm install kubernetes-dashboard kubernetes-dashboard/kubernetes-dashboard
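To reach the Dashboard, one option is a port-forward (a sketch assuming the chart's default service name kubernetes-dashboard in the default namespace, serving HTTPS on port 443); the login token can be read from the service account secret that the chart creates:

# kubectl port-forward svc/kubernetes-dashboard 8443:443
# kubectl describe secret $(kubectl get secret | grep kubernetes-dashboard-token | awk '{print $1}')

The Dashboard is then available at https://localhost:8443 on the machine running the port-forward.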

Awesome!

Next step: we will deploy a sample workload.

Managed k3s

Yesterday I stumbled upon a provider of managed k3s.

If you have not heard about k3s yet: it is a tiny Kubernetes distribution created by Rancher that makes some particular design choices to minimize size and runtime overhead.

This makes k3s the ideal choice for small-scale systems, e.g. Raspberry Pi or small VMs from cloud providers.

Now CIVO - which originally focused on a PaaS platform - is offering a beta of its upcoming Kubernetes service based on k3s:

https://www.civo.com/kube100

They will give you 70 EUR per month while in beta, and the prices are very reasonable, on par with e.g. DigitalOcean.