Get a Grip on Kubernetes

Setup

Setting up a Kubernetes cluster can be a daunting task when doing it manually. Although it’s helpful for beginners to use scripts like kube-up.sh (for K8s 1.3) or kubeadm (for K8s 1.4), they have their limits if you want to build something that meets production requirements like load balancing and high availability, especially of the K8s controllers.
Fortunately, there’s good advice: Kelsey Hightower from Google has written the great tutorial Kubernetes The Hard Way. This tutorial is written less for first-timers than for people who already have some experience in setting up K8s clusters ‘the lazy way’ using the aforementioned scripts. In no less than ten chapters you will be guided through the (sometimes painful) process of building a cluster. The insights won are invaluable, but the necessary next step is setting up a cluster with automation tools, because you really don’t want to set up each node manually, one after another.

Automation

Fortunately, again, we’re not left alone, because Lorenzo Nicora from OpenCredo has written an article based on Hightower’s that covers the automation gap: his three-part series ‘Kubernetes from scratch to AWS with Terraform and Ansible’ follows Hightower’s approach and provides a well-written framework for setting up an (almost) production-ready Kubernetes 1.3 cluster. Nicora’s method using Terraform and Ansible is clean and works perfectly. At the end you will have set up a cluster with three etcd, controller, and worker nodes (running CoreOS VMs), respectively.
For common purposes and small environments this will be sufficient. However, Nicora admits that there are still simplifications and limits (see the closing remarks of each article in his series), and you might tinker with advanced playbooks for security. Building a complete cluster in less than half an hour is impressive work, nonetheless! One issue, however, caused our team quite a headache, and that concerns K8s networking and DNS resolution.
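In essence the provisioning boils down to a Terraform run (creating the AWS infrastructure) followed by an Ansible run (configuring the nodes). The sketch below only illustrates that two-step workflow; the inventory and playbook names are assumptions, so check Nicora’s repository for the actual files:

# step 1: create VPC, subnets and the etcd/controller/worker instances
terraform plan
terraform apply

# step 2: configure the freshly created instances (file names are examples)
ansible-playbook -i hosts site.yaml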

Most articles on Kubernetes end with installing a simple application, perhaps a web server hosting a basic website, and that’s it. But if we’re talking about the needs of a production cluster this won’t be sufficient. And when we deployed the Socks Shop in a freshly set-up K8s cluster (using Nicora’s approach) we noticed that something went wrong, because the site’s images weren’t loading. Further experiments revealed that DNS queries didn’t work and the shop’s containers weren’t talking to each other.
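A quick way to check in-cluster name resolution is to run a lookup from inside one of the pods. Namespace and pod name below are placeholders, and the container image has to provide nslookup (busybox does, for example):

kubectl --namespace=<ns> exec <pod-name> -- nslookup kubernetes.default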

Kubernetes DNS

If you want to make your containers communicate with each other by name you will have to use an additional add-on: kube-dns. This container is basically a wrapper around SkyDNS. To make this work on AWS you will have to specify the parameters cluster-dns and cluster-domain; the former first appears in variables.tf.
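Both parameters end up as kubelet flags. A minimal sketch of the relevant part of the kubelet unit might look as follows; the binary path, the DNS service IP and the domain are examples and have to match the service CIDR chosen in variables.tf:

# excerpt from the kubelet systemd unit (values are examples)
ExecStart=/usr/bin/kubelet \
  --cluster-dns=10.32.0.10 \
  --cluster-domain=cluster.local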

A solution for DNS queries to internet services is adding an additional entry to each host’s resolv.conf, like 8.8.8.8, which is Google’s public DNS. This configuration gets handed down to all containers on that host, thus making DNS queries outside the cluster possible. kube-dns allows further configuration settings, see Inheriting DNS from the node.
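On the host this is a single line, which the pods inherit via the default DNS policy; 8.8.8.8 is just an example, any resolver reachable from the hosts will do:

# appended to /etc/resolv.conf on every host
nameserver 8.8.8.8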

Further networking

There is a bit of networking documentation on the Kubernetes documentation website. Hightower’s K8s installation path makes use of the kubenet plugin and installs a cbr0 bridge, too. The cbr0 bridge is the common link/veth between a pod and its host. The inconspicuous sentence “[The kubenet plugin] is also currently incompatible with the flannel experimental overlay.” is yet to be explored, because it doesn’t give the slightest hint how to proceed instead; in the approach presented here flannel isn’t necessary anyway, because we are using AWS.
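If you want to inspect the bridge and the veth ends of the pods attached to it on a worker node, the classic Linux tools are sufficient (brctl comes with the bridge-utils package):

# list the cbr0 bridge and the attached veth interfaces
brctl show cbr0

# show the bridge address and the routes used for pod traffic
ip addr show cbr0
ip route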

AWS makes a difference, because due to its VPCs you will get a (practically) unlimited number of subnets. If you are on a cloud provider who won’t give you private subnets to be shared between hosts (or if you build up a K8s cluster on premises), then Kubernetes can’t assign unique IPs to pods and services, which means you would need to install flannel, which sets up an overlay network for you. A flannel backend is available for AWS, but its task is just updating the VPC routing table (which can be accomplished by setting --allocate-node-cidrs=true for kube-controller-manager). Note there’s a limit of 50 routes per routing table. Here’s a short discussion on all this.
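The corresponding controller-side options look roughly like this; the cluster CIDR is an example and has to match the pod network you planned, and the full command line of course carries more flags than shown here:

# excerpt of the kube-controller-manager options (values are examples)
kube-controller-manager \
  --cloud-provider=aws \
  --allocate-node-cidrs=true \
  --cluster-cidr=10.200.0.0/16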

Little helpers

Though Kubernetes and containers are modern techniques, they don’t spare you from knowing and using the old tools, as the packets wandering around are still the same. If you want to check all HTTP traffic crossing the cbr0 interface, issue a

tcpdump -n -XX -i cbr0 tcp port http

and you’ll see what’s going on. Ports can also be specified by their number, e.g.

tcpdump -n -XX -i eth0 udp port 53

will dump all DNS traffic on eth0.

Watching docker0 and eth0 on a K8s worker will also reveal possible network or routing issues.

Not every K8s pod may let you in, or you can’t find a logfile for a specific process running inside a container. kubectl has a logs subcommand that gives you access to a container’s log:

kubectl --namespace=<ns> logs <pod-name> -c <container-name>

If you don’t know the container’s name, run the aforementioned command without the “-c” parameter: kubectl will list all containers available within that pod.
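Alternatively you can query the pod spec directly; the jsonpath expression below just prints the container names, which is handy in scripts (namespace and pod name are placeholders):

kubectl --namespace=<ns> get pod <pod-name> -o jsonpath='{.spec.containers[*].name}'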

The kubelet service configuration can be found at /etc/systemd/system. Should you ever need to alter its content, you will have to restart the kubelet. Do it this way:
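# the usual systemd sequence (assuming the unit is named kubelet)
systemctl daemon-reload       # pick up the changed unit file
systemctl restart kubelet     # restart the service with the new configuration
systemctl status kubelet      # verify it came back up cleanly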
