Running Kubernetes on a Raspberry Pi cluster

A couple of weeks back I spent the evening at a workshop organized by resin.io where we had the chance to play around with some Raspberry Pis. Once it was over we got to keep the boards. With this nice little gift the number of Raspberry Pis in my house grew to three. Now, as we all know, one's company, two's a crowd, and three's a cluster, so I figured it would be fun to try and get Kubernetes running on these machines. I spent a few hours over a weekend doing just that.

Setting up the cluster

To get a Raspberry Pi k8s cluster up and running you'll need the following: a few Raspberry Pis (three in my case) with SD cards and power supplies, a router to connect them to, and an Ethernet cable for each board.

[photo of the assembled cluster]

After you've got all the hardware in place, there are six main steps to go from a bunch of circuit boards lying around to a k8s cluster:

  1. Install Raspbian
  2. Configure networking
  3. Disable swap
  4. Install Docker
  5. Enable cgroups
  6. Install Kubernetes

We'll go through each of these steps in greater detail and get everything set up.

For each Raspberry Pi

Terminator PRO tip

Terminator (the terminal emulator) can broadcast your keystrokes: SSH into each machine in a different terminal, add all terminals to a group by clicking the three-squares button in the top left corner, and select Broadcast Group from the same menu. Every command you type in one terminal is then sent to all the others, so you only have to write each command once to run it across all three Pis.

There are of course various ways of automating this process across machines, but the terminal is a very simple one. Plus, there is educational value in running these commands by hand at least once.

Install Raspbian

  1. Get Raspbian Stretch Lite here
  2. Burn it to your SD cards using Etcher, a great GUI tool for flashing images to SD cards
  3. Enable SSH before inserting the cards by creating an empty file named "ssh" in the boot partition of each card
cd /path/to/sdcard/boot/partition # e.g. /media/cristian/boot
touch ssh

Configure networking

[network diagram: the Pis and your computer connected to a dedicated cluster router, which in turn connects to the main network router]

Your setup should look more or less like the diagram above. The network router can be the one you probably already have in your home, and if you don't have an extra router, everything in the diagram simply connects to that network router. I bought a small separate router just for this cluster because I'd like to be able to take it with me and plug it into any network I might find. In any case, to get this to work you need control over the router your Pis connect to.

Log in to your router's admin interface, usually at 192.168.0.1 or 192.168.1.1 (read your router's manual for instructions), and find the option to reserve/assign static IP addresses for each Raspberry Pi. These settings normally live in or around a "DHCP Server Settings" page; the router I bought has them under "IP & MAC Binding". After you've assigned a static IP of your choice to the MAC address of each Pi, unplug the machines and plug them back in. Refresh your router's interface and you should see them come up online at the specified addresses.

At this point, you should be able to SSH into your Pis from your computer, logging in as user pi with the default password raspberry.

ssh pi@192.168.1.42 # or whatever ip you picked

After you're in, change the hostname of each Pi using the raspi-config tool so they don't all claim to be raspberrypi.

sudo raspi-config
# then network options > hostname

I do recommend changing the default password as well, using the same utility.

Disable Swap

For Kubernetes 1.7 and later, kubelet will fail to start if swap is enabled. To turn swap off run:

sudo dphys-swapfile swapoff && \
sudo dphys-swapfile uninstall && \
sudo update-rc.d dphys-swapfile remove

If swap is off, this command should print no entries:

sudo swapon --summary

Install Docker

This installs the latest version of Docker:

curl -sSL get.docker.com | sh && \
sudo usermod pi -aG docker

You'll get a warning later on from Kubernetes about this very recent Docker version being untested, but I haven't had any Docker-related issues, so it should be fine. Note that you'll need to log out and back in for the group change to take effect.

Enable cgroups

sudo nano /boot/cmdline.txt

Add this to the end of the line (cmdline.txt must remain a single line):

cgroup_enable=cpuset cgroup_enable=memory

Then restart your Pis

sudo reboot
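If you'd rather script this edit than use nano, here's a minimal sketch (the helper name is my own) that appends the flags only when they're missing, since cmdline.txt has to stay a single line:

```shell
# append_cgroup_flags FILE — append the cgroup flags to the single kernel
# command line in FILE, skipping the edit if they are already present
append_cgroup_flags() {
  grep -q 'cgroup_enable=memory' "$1" || \
    sed -i '$ s/$/ cgroup_enable=cpuset cgroup_enable=memory/' "$1"
}

# On each Pi, run it as root against the real boot config, then reboot:
# sudo bash -c "$(declare -f append_cgroup_flags); append_cgroup_flags /boot/cmdline.txt"
```

Guarding with grep makes the helper idempotent, so accidentally running it twice won't duplicate the flags.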

Install Kubernetes

The installation guide is available on the official page here; I'll simply reproduce the same commands below.

VERY IMPORTANT: install version 1.9.6 and not the latest. I've had issues with 1.10 which other people online seem to have run into as well, and the most obvious fix is to simply use 1.9.6 for now.

You will install kubeadm, kubelet, and kubectl on all machines.

sudo apt update && sudo apt install -y apt-transport-https curl # enable installing from repos over https
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add - # add the Google Cloud GPG key
echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list # add the Kubernetes apt repo
sudo apt update
sudo apt install -y kubelet=1.9.6-00 kubeadm=1.9.6-00 kubectl=1.9.6-00
sudo apt-mark hold kubelet kubeadm kubectl # keep a later apt upgrade from pulling in 1.10

Edit the kubelet systemd unit and add the --cgroup-driver flag to the ExecStart command. The flag must match the cgroup driver used by Docker, which you can check with

docker info | grep -i cgroup

In my case that meant adding --cgroup-driver=cgroupfs to the kubelet command. Afterwards, reload systemd and restart the service:

sudo systemctl daemon-reload
sudo systemctl restart kubelet
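This edit can also be scripted. The sketch below is my own helper, and the drop-in path in the comment is an assumption based on where the Debian kubeadm package installed its unit file at the time — check yours with systemctl status kubelet:

```shell
# add_cgroup_driver FILE — append --cgroup-driver=cgroupfs to the kubelet
# ExecStart line in the systemd unit/drop-in FILE (idempotent)
add_cgroup_driver() {
  grep -q 'cgroup-driver=cgroupfs' "$1" || \
    sed -i 's|^ExecStart=/usr/bin/kubelet|& --cgroup-driver=cgroupfs|' "$1"
}

# e.g. (path is an assumption, verify on your system):
# sudo bash -c "$(declare -f add_cgroup_driver); \
#   add_cgroup_driver /etc/systemd/system/kubelet.service.d/10-kubeadm.conf"
```

The sed expression anchors on the ExecStart line so any Environment= lines in the drop-in are left untouched.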

At this point the kubelet service will be restarting continuously with the following error:

error: unable to load client CA file /etc/kubernetes/pki/ca.crt

You can check the kubelet logs with

journalctl -u kubelet

On the master Pi node

Initialize your Kubernetes cluster:

sudo kubeadm init

Run the commands given to you:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

This will allow you to control the cluster using kubectl.

Make sure to also save the kubeadm join command given in the output. You'll use it on the worker nodes to connect them to the cluster. If you do lose it, you can generate a fresh one on the master with sudo kubeadm token create --print-join-command.

Install Weave network driver

My master node is a Raspberry Pi 3 Model B+ and I ran into issues installing Weave using the default command described here. I found a workaround in a GitHub issue here; the modified command, which disables Weave's fast datapath, is below:

kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')&env.WEAVE_NO_FASTDP=1"

To see your pods starting up run:

kubectl get pods --all-namespaces

After all containers have finished creating, you should see something like this:

NAMESPACE     NAME                               READY     STATUS    RESTARTS   AGE
kube-system   etcd-pimaster                      1/1       Running   0          4m
kube-system   kube-apiserver-pimaster            1/1       Running   3          4m
kube-system   kube-controller-manager-pimaster   1/1       Running   0          4m
kube-system   kube-dns-7b6ff86f69-jwkc2          3/3       Running   0          17m
kube-system   kube-proxy-qq8dt                   1/1       Running   0          17m
kube-system   kube-scheduler-pimaster            1/1       Running   0          4m
kube-system   weave-net-k5qfc                    2/2       Running   0          5m

On the worker nodes

Run the join command given to you earlier by kubeadm init, e.g.

sudo kubeadm join --token 2650d1.67e7083ed7ec08f3 192.168.1.42:6443 --discovery-token-ca-cert-hash sha256:de81fd0d8c10aa82d3df771fd0d8eea5c7755f32650d1d0cfba13eaeb22650d1

Go back to the master and run

watch kubectl get nodes

Wait a few minutes and you should see the status of the nodes change to Ready.

Deploy a sample application from your computer

Copy the kube config file to your computer (replace the IP with your master's IP):

scp pi@192.168.1.42:/home/pi/.kube/config k8s-piconfig

Check kubectl works on your computer:

kubectl --kubeconfig k8s-piconfig get pods --all-namespaces
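As an alternative to passing --kubeconfig on every command, you can point kubectl at the copied file through the KUBECONFIG environment variable for your shell session:

```shell
# Use the Pi cluster config for every kubectl call in this shell
export KUBECONFIG="$PWD/k8s-piconfig"

# kubectl get pods --all-namespaces   # no --kubeconfig flag needed now
```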

Create a simple deployment and service for a markdown-to-HTML converter web service (using this Docker image) by creating a file markdown.yaml and pasting in the contents below,

---
apiVersion: v1
kind: Service
metadata:
  name: markdownrender
  labels:
    app: markdownrender
spec:
  type: NodePort
  ports:
    - port: 8080
      protocol: TCP
      targetPort: 8080
      nodePort: 31605
  selector:
    app: markdownrender
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: markdownrender
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: markdownrender
    spec:
      containers:
      - name: markdownrender
        image: functions/markdownrender:latest-armhf
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
          protocol: TCP

and then running:

kubectl --kubeconfig=k8s-piconfig apply -f markdown.yaml

Watch the pod status column until the containers are created and the pods running:

watch kubectl --kubeconfig=k8s-piconfig get pods

and then test the service, replacing the IP with the IP of any of your nodes (a NodePort service is reachable on every node):

$ curl http://192.168.1.42:31605 -d "# Hello from the Kubernetes Raspberry Pi cluster"
<h1>Hello from the Kubernetes Raspberry Pi cluster</h1>

Kubernetes dashboard

k8s-dash

To install the Kubernetes dashboard run:

kubectl --kubeconfig k8s-piconfig apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard-arm.yaml

and to get a token you can log in with, either read this guide or simply copy and paste the following into a dashboard.yaml file,

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system

and then run

kubectl --kubeconfig k8s-piconfig apply -f dashboard.yaml

Open up a localhost proxy to the dashboard via

kubectl proxy

then browse to http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/ and log in with the token obtained by running

kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')

Have fun with your little cluster! For more content on the subject have a look at this GitHub gist and go through the comments if you happen to run into issues. I also recommend this great post by Scott Hanselman about his Pi cluster.

Jun 16, 2018
