Kubernetes on Ubuntu 16.04
After evaluation, I recommend using docker swarm instead of kubernetes. Docker swarm is now integrated into docker, so it fits the need perfectly when using docker, and moreover it is extremely simple to use and maintain.
Kubernetes is a cluster management system that allows you to deploy docker containers to several nodes. So it is a direct replacement for docker swarm. While docker swarm exposes the docker API and therefore looks like docker but schedules containers to different hosts, kubernetes uses a much more complex but more flexible approach and implements its own API. You can’t start a docker container on kubernetes unless you specify a «pod», which encloses a whole service consisting of several containers, volumes, etc., similar to docker compose.
Wording
- Node
- A worker machine, i.e. a host or server, that runs pods.
- Master Node
- The main node that controls all other nodes.
- Pod
- A definition of a unit that can run on a node: a combination of docker containers (acting like lightweight virtual machines), volumes (data storage), options such as ports, hardware requirements (such as memory, CPU), and the number of instances for load balancing.
- Service
- An abstraction which defines a logical set of Pods and a policy by which to access them.
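To make the pod concept more concrete, here is a minimal sketch of a pod definition (file name, pod name and image tag are only illustrative assumptions); once the cluster described below is running, it can be applied with kubectl:
cat <<EOF > nginx-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-example
spec:
  containers:
  - name: nginx              # the docker container to run
    image: nginx:1.13        # image from the docker hub
    ports:
    - containerPort: 80      # port reachable inside the cluster
EOF
kubectl create -f nginx-pod.yaml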
Overview
Installation
Installation of kubernetes is a bit tricky, and all the instructions I found are not completely accurate, that’s why I summarize it here. A kubernetes cluster consists of several nodes: one master node for administration and any number of additional nodes to run pods. Load is then distributed among the nodes. By default, the master node does not run pods; we will change this.
Master Node
The most accurate instructions I found are at medium.com. They are more accurate than the official guide, so I’ll follow them and explain my additional findings.
Setup Repository
Unfortunately, Ubuntu does not officially provide a repository, but the kubernetes project does, so set it up:
apt update
apt install -y wget apt-transport-https
wget -qO- https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" | sudo dd of=/etc/apt/sources.list.d/kubernetes.list
apt update
Install Kubernetes Master Node
apt install docker.io kubelet kubeadm kubectl kubernetes-cni
sudo rm -rf --one-file-system /var/lib/kubelet
sudo kubeadm init
After sudo kubeadm init, you get an important piece of information: a kubeadm join --token=… command. Store it in a safe place, you’ll need it later to set up more nodes; it is the secret token to join the kubernetes cluster.
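As a side note (a suggestion, not part of the original instructions): you may pin the kubernetes packages so that a later apt upgrade does not unexpectedly update a running cluster:
sudo apt-mark hold kubelet kubeadm kubectl kubernetes-cni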
Unfortunately there is a misconfiguration that you need to fix:
sudo sed -i 's, $KUBELET_NETWORK_ARGS,,' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
sudo systemctl daemon-reload
sudo systemctl restart kubelet
Possible Problems
If you have btrfs, you get the following error message from kubeadm init:
[preflight] Some fatal errors occurred: unsupported graph driver: btrfs
It seems that kubernetes does not know that it supports btrfs. So just ignore it and run:
sudo kubeadm init --skip-preflight-checks
Administrate Kubernetes as User
First copy the administration config file to the user’s home:
mkdir ~/.kube
sudo cp /etc/kubernetes/admin.conf ~/.kube/config
sudo chown $(id -u):$(id -g) ~/.kube/config
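To verify that the copied configuration is picked up (kubectl reads ~/.kube/config by default), a quick check is:
kubectl cluster-info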
Then you can use kubectl as a normal user, e.g. to check the node status:
kubectl get node
Now your master node should be ready, e.g.:
marc@dev0001:~$ kubectl get node
NAME      STATUS    AGE       VERSION
dev0001   Ready     21m       v1.7.1
Allow the master node to run pods by removing the taint (use your own node name instead of dev0001):
kubectl taint nodes dev0001 node-role.kubernetes.io/master:NoSchedule-
Verify that the taint is gone (use your own node name instead of dev0001):
kubectl describe nodes dev0001
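If you only care about the taints, you can filter the output, e.g.:
kubectl describe nodes dev0001 | grep -i taints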
Possible Problem
If you see NotReady, and /var/log/syslog contains lines with Unable to update cni config: No networks found in /etc/cni/net.d and Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized, then the daemon was not properly restarted after removing $KUBELET_NETWORK_ARGS from /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. The command ps aux | grep kube | grep cni should give you an empty result. If not, stop the kubelet and check again; if the result is empty, start it again and check once more, it should still be empty:
ps aux | grep kube | grep cni
systemctl stop kubelet
ps aux | grep kube | grep cni
systemctl start kubelet
ps aux | grep kube | grep cni
Setup Network
I am not sure whether this does what it should, since we removed the network variable $KUBELET_NETWORK_ARGS above. But it seems that kubernetes only supports CNI networks. You can use e.g. Weave Net:
kubectl apply -f https://git.io/weave-kube
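To check that the network came up, list the pods of all namespaces; the weave pods should reach the Running state (pod names will differ on your system):
kubectl get pods --all-namespaces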
Run Dashboard
The dashboard is a pod itself, so install the dashboard pod, then run the proxy:
kubectl create -f https://rawgit.com/kubernetes/dashboard/master/src/deploy/kubernetes-dashboard.yaml
kubectl proxy
Head your browser to the given address and start with the tutorial.
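As a hint (the exact path depends on the dashboard version): with the default proxy port, the dashboard of that generation was usually reachable at
http://localhost:8001/ui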
Cleanup In Case of Unsolvable Problems
If you want to remove the kubernetes installation, e.g. because a problem has occurred and you want to retry, just do the following:
systemctl stop kubelet
apt purge kubelet kubeadm kubectl kubernetes-cni
ps aux | grep kube
sudo pkill kube
sudo pkill sidecar
sudo pkill -KILL kube
sudo pkill -KILL sidecar
ps aux | grep kube
docker rm -f $(docker ps -a | grep k8s_ | awk '{print $1}')
sudo rm -rf --one-file-system /var/lib/etcd /var/lib/kubelet /etc/kubernetes /etc/cni
rm -rf ~/.kube
After uninstallation, the kubelet processes are still running, so you need to kill all of them, which I do with pkill.
After uninstallation, the docker containers created by kubernetes still exist, so you may want to remove them too. Their names all have the prefix k8s_, see the docker rm -f line above.
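If you also want to remove the images kubernetes pulled, you can delete them by name; the grep pattern below is only an assumption about the image names in use, so check docker images first:
docker images
docker rmi $(docker images | grep -E 'gcr.io/google_containers|weaveworks' | awk '{print $3}')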
Possible Problems
If the sudo rm line reports that specific files were skipped because they are on a different device, then umount them and run the rm again, e.g.:
marc@dev0001:~$ sudo rm -rf --one-file-system /var/lib/etcd /var/lib/kubelet /etc/kubernetes /etc/cni
rm: skipping '/var/lib/kubelet/pods/998f85b7-6bc0-11e7-a33c-0023543259ee/volumes/kubernetes.io~secret/kube-proxy-token-957vr', since it's on a different device
rm: skipping '/var/lib/kubelet/pods/998db3e9-6bc0-11e7-a33c-0023543259ee/volumes/kubernetes.io~secret/kube-dns-token-qt2wk', since it's on a different device
rm: skipping '/var/lib/kubelet/pods/34d7cb29-6bc7-11e7-a33c-0023543259ee/volumes/kubernetes.io~secret/kubernetes-dashboard-token-0fw6c', since it's on a different device
marc@dev0001:~$ sudo umount /var/lib/kubelet/pods/998f85b7-6bc0-11e7-a33c-0023543259ee/volumes/kubernetes.io~secret/kube-proxy-token-957vr /var/lib/kubelet/pods/998db3e9-6bc0-11e7-a33c-0023543259ee/volumes/kubernetes.io~secret/kube-dns-token-qt2wk /var/lib/kubelet/pods/34d7cb29-6bc7-11e7-a33c-0023543259ee/volumes/kubernetes.io~secret/kubernetes-dashboard-token-0fw6c
marc@dev0001:~$ sudo rm -rf --one-file-system /var/lib/kubelet
Setup Additional Client Nodes
On the node, install the Ubuntu packages, then run kubeadm join with the token and IP address you recorded above. That should be sufficient. (untested)
Install Software
Same as above:
apt update
apt install -y wget apt-transport-https
wget -qO- https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" | sudo dd of=/etc/apt/sources.list.d/kubernetes.list
apt update
apt install docker.io kubelet kubeadm kubectl kubernetes-cni
Setup Client Node
Join the previously created cluster using kubeadm join and the token you recorded above, e.g.:
kubeadm join --token=858698.51d1418b0490485a 192.168.0.13
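On the master node, you can then verify that the new node has joined:
kubectl get nodes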
If you forgot the token, generate a new one on the master node:
kubeadm token generate
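Note that kubeadm token generate only prints a random token value; depending on your kubeadm version you may need kubeadm token create to actually register a new token on the master, and kubeadm token list shows the existing ones:
kubeadm token create
kubeadm token list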
Deploy Pods
Deploy from URL
Deploying a pod from a URL is simple, e.g.:
marc@dev0001:~$ kubectl create -f https://k8s.io/docs/tasks/run-application/deployment.yaml
deployment "nginx-deployment" created
Now see the pods:
marc@dev0001:~$ kubectl get pod
NAME                               READY     STATUS    RESTARTS   AGE
nginx-deployment-431080787-6rcxr   1/1       Running   0          1m
nginx-deployment-431080787-c4zv8   1/1       Running   0          1m
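The deployment also defines the number of instances; you can change it at any time, e.g. to scale the example to four pods (the replica count here is arbitrary, just for illustration):
kubectl scale deployment nginx-deployment --replicas=4
kubectl get pod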
Access Pods
The pods are running, but not accessible from outside. On any node, you can access the pods using their IP address:
marc@dev0001:~$ kubectl get pods -o wide
NAME                               READY     STATUS    RESTARTS   AGE       IP           NODE
nginx-deployment-431080787-23sg4   1/1       Running   0          24s       172.18.0.4   dev0001
nginx-deployment-431080787-gcp3l   1/1       Running   0          24s       172.18.0.5   dev0001
marc@dev0001:~$ ping 172.18.0.4
PING 172.18.0.4 (172.18.0.4) 56(84) bytes of data.
64 bytes from 172.18.0.4: icmp_seq=1 ttl=64 time=0.107 ms
^C
--- 172.18.0.4 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms
You can create a service by exposing the ports, but this is still only accessible via a local cluster IP address from within a cluster node. To access it from the outside world, you need to specify the public IP address of one of the cluster nodes with --external-ip=, e.g.:
kubectl expose --external-ip=172.16.0.24 deployment nginx-deployment
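You can then check the created service and the exposed IP address:
kubectl get service nginx-deployment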
Delete Deployed Pods
If you just delete the pods, e.g. using kubectl delete pods -l "app=nginx", then they are recreated immediately after deletion. To get rid of them, you must delete the deployment:
marc@dev0001:~$ kubectl delete deployment nginx-deployment
deployment "nginx-deployment" deleted
Open Tasks
This text will be extended until all open tasks have been resolved, such as:
- specify own pod configuration
- autostart kubectl proxy (for this, an authentication mechanism is required)
- add and test additional nodes
- persistent storage
- backup
- create own pods
- …
Gagan Delouri on 9 September 2017 at 04:55
Hi,
When I try to run the command below, I get this and it just waits and never completes. Can you please help me with this:
root@administrator-virtual-machine:~# sudo kubeadm init --skip-preflight-checks
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Kubernetes version: v1.7.5
[init] Using Authorization modes: [Node RBAC]
[preflight] Skipping pre-flight checks
[kubeadm] WARNING: starting in 1.8, tokens expire after 24 hours by default (if you require a non-expiring token use --token-ttl 0)
[certificates] Using the existing CA certificate and key.
[certificates] Using the existing API Server certificate and key.
[certificates] Using the existing API Server kubelet client certificate and key.
[certificates] Using the existing service account token signing key.
[certificates] Using the existing front-proxy CA certificate and key.
[certificates] Using the existing front-proxy client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Using existing up-to-date KubeConfig file: "/etc/kubernetes/admin.conf"
[kubeconfig] Using existing up-to-date KubeConfig file: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Using existing up-to-date KubeConfig file: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Using existing up-to-date KubeConfig file: "/etc/kubernetes/scheduler.conf"
[apiclient] Created API client, waiting for the control plane to become ready
Marc Wäckerlin on 9 September 2017 at 22:10
Hi Gagan, sorry, for me that has worked, reproducibly. After testing kubernetes, I have now decided to use docker swarm, which is integrated in docker since 1.12. That’s much simpler and better. I have been running it for a week now and am migrating my services step by step from local docker to the swarm.
Gagan Delouri on 10 September 2017 at 12:28
Hi,
thanks so much for your message. I think I got something working. I have the master and the nodes working now:
NAME STATUS AGE VERSION
administrator-virtual-machine Ready 1d v1.7.5
node-1 Ready 15m v1.7.5
node-2 Ready 15m v1.7.5
root@administrator-virtual-machine:~#
Should I proceed further with the next steps or should I use docker swarm instead? I mean, is there any difference? I want to install the dashboard and pods as the next step.
Marc Wäckerlin on 11 September 2017 at 14:19
Hi Gagan, it’s a basic decision, what you need and what you want to use. For distributing docker containers on several nodes, kubernetes provides one solution, and docker swarm provides another. For me, after some evaluation (which produced this blog), I decided to continue with docker swarm. I just updated my experiences. So it is up to you to decide. IMHO, docker swarm is simple and straightforward.
In addition you need a distributed filesystem, regardless of whether you work with docker swarm or kubernetes; both have no built-in shared file system. I decided to use gluster, but any other solution is possible, such as Ceph, NFS, Samba, etc.
Alberto on 7 December 2017 at 12:45
Hi! First, thanks for this guide. It has been quite hard for me to get a clear picture of Kubernetes.
Before installing whatever I need, I have a big question to ask concerning Kubernetes’ architecture that you can probably answer since I found little information about it.
What happens if the Kubernetes master node crashes? I know there are HA configurations explained in the K8s guides and a few other places, but it seems that the "standard" configuration doesn’t imply any HA (HA proxy, more than one master node, …), which upsets me somehow.
Am I misunderstanding something?
Thank you!!!
Marc Wäckerlin on 18 December 2017 at 13:30
Sorry, Alberto, I don’t know. Probably it depends on what «crash» means. After my first tests with kubernetes, I decided to use docker swarm, which is much simpler in its setup. Whereas kubernetes is badly documented, complex and error prone, docker swarm is lean and smart. I am still looking for a good distributed filesystem; glusterfs is extremely slow and suffers from the same problems as kubernetes: badly documented, complex and error prone. Currently I am looking at rancher and minio. I’ll write a new blog as soon as I have made my first successful steps. Rancher can run on top of kubernetes or docker swarm and includes volume storage management, so it looks promising.