Vanilla k8s on Hetzner

Date: 2020-12-11
Author: Helmuth

To learn more about k8s I wanted to install a vanilla k8s single-node cluster on top of a Hetzner baremetal server.

First I installed Ubuntu 20.04 on it, using LVM and RAID1.

The next step is to ensure that iptables sees bridged traffic, and to enable packet forwarding:

# vi /etc/modules-load.d/k8s.conf
br_netfilter
# modprobe br_netfilter
# vi /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
# sysctl --system

The next step is installing a container runtime. I'm using containerd, the runtime originally developed by Docker and later spun off as a separate project.

# apt-get install -y containerd runc

We have to adjust the configuration of containerd so that it uses the systemd cgroup driver, letting systemd manage its cgroups (only the relevant sections are shown):

# vi /etc/containerd/config.toml
version = 2
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  runtime_type = "io.containerd.runc.v2"
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true

Following that we perform a restart of containerd:

# service containerd restart

Next we install kubeadm, which will perform the installation of k8s.

# apt-get install -y apt-transport-https curl gnupg
# curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
# vi /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
# apt-get update
# apt-get install -y kubelet kubeadm kubectl

As documented, the kubelet now restarts in a crash loop until we have finished the configuration.

To avoid any unexpected upgrades of kubelet, kubeadm or kubectl in the future we mark the packages as "hold".

# apt-mark hold kubelet kubeadm kubectl

Since we are using containerd and not Docker, we have to specify the cgroupDriver value when initializing the cluster.

In addition, we want to specify a DNS name as the control plane endpoint; this will allow us to extend the cluster to high availability later.

And since we want to use swap on our system, we also instruct kubeadm not to complain about it.

# vi kubeadm.yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
failSwapOn: false
cgroupDriver: systemd
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
networking:
  podSubnet: ""
controlPlaneEndpoint: ""

Now we can pre-pull the images and initialize the cluster, again telling kubeadm not to complain about the swap:

# kubeadm config images pull
# kubeadm init --ignore-preflight-errors Swap --config kubeadm.yaml

After some time the cluster is installed and is now waiting for a pod network (CNI) plugin to be configured:

# mkdir ~/.kube
# cp /etc/kubernetes/admin.conf ~/.kube/config
# kubectl apply -f <manifest of your chosen pod network plugin>

Finally, since this is a single-node cluster, we need to "untaint" the node so that it can execute workload pods:

# kubectl taint nodes --all node-role.kubernetes.io/master-

So far so good! We have a working single-node cluster. Now it's time to add tools, like Helm, the Kubernetes Dashboard, and an Ingress Controller.

# curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3
# bash get_helm.sh
# helm repo add stable https://charts.helm.sh/stable

With helm, installing an Nginx ingress is as simple as this:

# helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
# helm repo update
# vi nginx.yaml
controller:
  dnsPolicy: ClusterFirstWithHostNet
  hostNetwork: true
  reportNodeInternalIp: true
  kind: DaemonSet
# helm install -f nginx.yaml nginx ingress-nginx/ingress-nginx

Next on our list is cert-manager. This will allow us to use Let's Encrypt for deploying trusted certificates!

# helm repo add jetstack https://charts.jetstack.io
# helm repo update
# kubectl create namespace cert-manager
# vi cert-manager-install.yaml
installCRDs: true
ingressShim:
  defaultIssuerName: "letsencrypt-prod"
  defaultIssuerKind: "ClusterIssuer"
  defaultIssuerGroup: "cert-manager.io"
# helm install cert-manager --namespace cert-manager -f cert-manager-install.yaml jetstack/cert-manager
# vi cert-manager.yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    # You must replace this email address with your own.
    # Let's Encrypt will use this to contact you about expiring
    # certificates, and issues related to your account.
    email: ""
    privateKeySecretRef:
      # Secret resource that will be used to store the account's private key.
      name: k8s-issuer-account-key
    solvers:
    # Add a single challenge solver, HTTP01 using nginx
    - http01:
        ingress:
          class: nginx
# kubectl apply -f cert-manager.yaml
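
To see how the ingress controller and cert-manager fit together: an Ingress can request a certificate from the ClusterIssuer via an annotation, and nginx will then serve it over TLS. A minimal sketch, where the host, service name, and file name are placeholders of my own choosing:

# vi example-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - www.example.com
    secretName: example-tls   # cert-manager stores the issued certificate here
  rules:
  - host: www.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: example-service   # placeholder backend service
            port:
              number: 80
# kubectl apply -f example-ingress.yaml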

Great! So now let's deploy the Dashboard.

# helm repo add kubernetes-dashboard https://kubernetes.github.io/dashboard/
# helm repo update
# helm install kubernetes-dashboard kubernetes-dashboard/kubernetes-dashboard
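
Logging in to the dashboard requires a bearer token. One common approach, sketched here with names of my own choosing (the chart does not mandate them), is a dedicated ServiceAccount bound to the cluster-admin role:

# vi dashboard-admin.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: dashboard-admin
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: dashboard-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: dashboard-admin
  namespace: default
# kubectl apply -f dashboard-admin.yaml

The ServiceAccount's token can then be copied from its secret (for example via kubectl describe secret) and pasted into the dashboard's login screen.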

Next step: we will deploy a sample workload.