Vanilla k8s on Hetzner

Date: 2020-12-11
Author: Helmuth

To learn more about k8s I wanted to install a vanilla single-node k8s cluster on top of a Hetzner bare-metal server.

First I installed Ubuntu 20.04 on it, using LVM and RAID1.
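
On Hetzner bare metal this is typically done from the rescue system with the installimage tool. As a rough sketch, an installimage configuration along these lines gives you RAID1 with LVM on top (the drive names and sizes below are placeholders, not necessarily what I used):

DRIVE1 /dev/nvme0n1
DRIVE2 /dev/nvme1n1
SWRAID 1
SWRAIDLEVEL 1
BOOTLOADER grub
HOSTNAME k8s
PART /boot ext3 1G
PART lvm vg0 all
LV vg0 swap swap swap 8G
LV vg0 root / ext4 all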

The next step is ensuring that iptables will see the bridged traffic, and enabling forwarding of packets:

# vi /etc/modules-load.d/k8s.conf
overlay
br_netfilter
# modprobe overlay
# modprobe br_netfilter
# vi /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
# sysctl --system
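
As a quick sanity check (not strictly necessary), we can verify that the module is loaded and the sysctl values took effect:

# lsmod | grep br_netfilter
# sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward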

Next step is installing a container runtime. I'm using containerd, which is the runtime developed by Docker (and later spun off as a separate project).

# apt-get install -y containerd runc
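
Note: depending on the package version, /etc/containerd/config.toml may not exist yet. In that case containerd can generate its full default configuration as a starting point (the minimal file below also works on its own):

# mkdir -p /etc/containerd
# containerd config default > /etc/containerd/config.toml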

We have to adjust the configuration of containerd so that it uses the systemd cgroup driver, letting systemd manage its cgroups:

# vi /etc/containerd/config.toml
version = 2
[plugins]
  [plugins."io.containerd.grpc.v1.cri"]
    [plugins."io.containerd.grpc.v1.cri".containerd]
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes]
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
          runtime_type = "io.containerd.runc.v2"
          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
            SystemdCgroup = true

Following that, we restart containerd:

# service containerd restart

Next we install kubeadm, which will perform the installation of k8s.

# apt-get install -y apt-transport-https curl gnupg
# curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
# vi /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
# apt-get update
# apt-get install -y kubelet kubeadm kubectl

As documented, the kubelet is now restarting in a loop until we are finished with the configuration.

To avoid any unexpected upgrades of kubelet, kubeadm, or kubectl in the future, we mark the packages as "hold":

# apt-mark hold kubelet kubeadm kubectl
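
A quick check confirms that all three packages are now on hold:

# apt-mark showhold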

Since we are using containerd and not Docker, we have to specify the cgroupDriver value when initializing the cluster.

In addition, we want to specify a DNS name as the control-plane endpoint; this will later allow us to extend the cluster to a highly available setup.
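
For illustration, assuming the server's public IP were 203.0.113.10 (a placeholder address), the DNS zone would simply contain an A record along these lines:

k8s    IN    A    203.0.113.10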

And since we want to use swap on this system, we will also instruct kubeadm not to complain about it.

# vi kubeadm.yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
failSwapOn: false
cgroupDriver: systemd
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
networking:
  podSubnet: "192.168.0.0/24"
controlPlaneEndpoint: "k8s.helmuth.at:6443"

Now we can initialize the cluster with this configuration, which also specifies the pod subnet; on the command line we tell kubeadm not to complain about the swap:

# kubeadm config images pull
# kubeadm init --ignore-preflight-errors Swap --config kubeadm.yaml

After some time the cluster is installed and is waiting for the pod network to be configured; we set up kubectl access and install Calico:

# mkdir ~/.kube
# cp /etc/kubernetes/admin.conf ~/.kube/config
# kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
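
It takes a moment for Calico to come up; once all pods in kube-system are running, the node should report Ready:

# kubectl get pods -n kube-system
# kubectl get nodes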

Finally, since this is a single-node cluster, we need to "untaint" the node so that it can run workload pods:

# kubectl taint nodes --all node-role.kubernetes.io/master-
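
The node description should now show "Taints: <none>":

# kubectl describe node | grep -i taints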

So far so good! We have a working single-node cluster. Now it's time to add tools, like Helm, the Kubernetes Dashboard, and an Ingress Controller.

# curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3
# bash get_helm.sh
# helm repo add stable https://charts.helm.sh/stable

With Helm, installing an NGINX ingress controller is as simple as this:

# helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
# helm repo update
# vi nginx.yaml
controller:
  dnsPolicy: ClusterFirstWithHostNet
  hostNetwork: true
  reportNodeInternalIp: true
  kind: DaemonSet
# helm install -f nginx.yaml nginx ingress-nginx/ingress-nginx
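
Because the controller runs as a hostNetwork DaemonSet, it binds ports 80 and 443 directly on the host, so a quick curl against the server should already answer with the controller's default backend (a 404):

# curl -sI http://localhost/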

Next on our list is cert-manager. This will allow us to use Let's Encrypt for deploying trusted certificates!

# helm repo add jetstack https://charts.jetstack.io
# helm repo update
# kubectl create namespace cert-manager
# vi cert-manager-install.yaml
installCRDs: true
ingressShim:
  defaultIssuerName: "letsencrypt-prod"
  defaultIssuerKind: "ClusterIssuer"
  defaultIssuerGroup: "cert-manager.io"
# helm install cert-manager --namespace cert-manager -f cert-manager-install.yaml jetstack/cert-manager
# vi cert-manager.yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    # You must replace this email address with your own.
    # Let's Encrypt will use this to contact you about expiring
    # certificates, and issues related to your account.
    email: someone@example.com
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      # Secret resource that will be used to store the account's private key.
      name: k8s-issuer-account-key
    # Add a single challenge solver, HTTP01 using nginx
    solvers:
    - http01:
        ingress:
          class: nginx
# kubectl apply -f cert-manager.yaml
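
After a few moments the new issuer should report Ready, which we can verify with:

# kubectl get clusterissuer letsencrypt-prod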

Great! So now let's deploy the Dashboard.

# helm repo add kubernetes-dashboard https://kubernetes.github.io/dashboard/
# helm repo update
# helm install kubernetes-dashboard kubernetes-dashboard/kubernetes-dashboard
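
The chart does not expose the dashboard externally by default. One simple way to reach it (assuming the chart's default service name, kubernetes-dashboard) is a port-forward, after which it is available at https://localhost:8443:

# kubectl port-forward svc/kubernetes-dashboard 8443:443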

Awesome!

Next step: we will deploy a sample workload.
