Denis Gladkikh


My personal blog about software development

  • 23 Jun 2017
  • kubernetes, k8s, docker, kubectl, kubeadm, letsencrypt

I spoke too soon. Actually, I had a lot of problems with the previous setup.

To be honest, Kubernetes certainly requires a lot of debugging to set it up correctly, but when you finally do, it pays off.

And you actually do not need a pod network when you have just one server, but I am planning to expand to a minimum of two servers, so I chose the hard path.


Pods could not connect to the outside world

To fix that I rolled back to Docker version 1.11.2. Just uninstall the latest one, which is probably installed as docker-ce:

sudo apt-get purge docker-ce

After that, take a look at the files left behind by Docker:

sudo find / -name '*docker*'

In my case it left some configuration under systemd, /var/lib/docker and /var/run/docker/. Because of that I could not install docker-engine: the installation fails, probably because the leftover systemd scripts set up the docker0 network before the just-installed previous version of Docker does. So just clean all of that up, reboot and install Docker.

Pods could not resolve DNS

The next problem was with DNS. Pods could not resolve DNS names. I saw that they could connect over IP addresses, but all DNS calls were timing out.

My problem was similar to (maybe actually exactly the same as) Misadventures with kube dns. The problem was with /etc/resolv.conf, which by default contains a nameserver entry pointing at the local DNS cache. Ubuntu apparently knows that when it cannot connect to the local DNS cache it should fall back to the DNS nameservers defined on the network interfaces. But kubelet is not so smart: it just takes the default /etc/resolv.conf and uses it as the source of truth.

You will find a line in this file warning you not to modify it manually, because it is auto-generated by resolvconf.

I have found the root cause of why I had this record: NetworkManager. The solution was simple, see nameserver in resolv.conf won’t go away!.
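The fix from that post boils down to a sketch like the one below, assuming NetworkManager's dnsmasq plugin is what keeps re-adding the record (check your /etc/NetworkManager/NetworkManager.conf first):

```shell
# Assumed fix: comment out the dns=dnsmasq line so NetworkManager
# stops pointing /etc/resolv.conf at the local DNS cache.
sudo sed -i 's/^dns=dnsmasq/#dns=dnsmasq/' /etc/NetworkManager/NetworkManager.conf

# restart NetworkManager and regenerate /etc/resolv.conf
# from the real per-interface nameservers
sudo service network-manager restart
sudo resolvconf -u
```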

I was told in k8s.slack that this is a known issue and that at some point it will be fixed.


I looked at helm as a set of recipes to use for setting up some of the important configurations, including kubernetes-dashboard, heapster and nginx-ingress-controller.

Most of the charts I have tried do not support RBAC yet. You can turn RBAC off, but I like having the ability to disable access to most of the Kubernetes API endpoints for most applications.

So I went a different route and decided to maintain my own configurations. That requires some time spent learning how to write them.


Join k8s.slack

Join: you can get help from the developers of Kubernetes or of plugins for Kubernetes, or from other souls who have hit similar issues.

kubectl explain

Kubernetes has decent documentation, but sometimes it is much easier to look at the API Reference.

And you can get quick access to it from the command line, like

kubectl explain roles

Use ingress

Install nginx-ingress-controller. Look at the examples. If your Kubernetes was initialized by kubeadm, you need to use a combination of the kubeadm and rbac examples.

Read how it can be configured with annotations.

Basic auth for nginx

I could not find an example of how to configure basic auth for the nginx ingress. Just make sure you are aware of the format used by ngx_http_auth_basic_module. A plain-text entry can have the format user:{PLAIN}password; after that just base64 it.
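For what it is worth, here is a sketch with hypothetical credentials (admin/secret) and a hypothetical secret name basic-auth; the controller expects the htpasswd content in a secret under a key named auth:

```shell
# hypothetical user/password -- replace with your own
htpasswd_entry='admin:{PLAIN}secret'

# Kubernetes secrets store data base64-encoded; this is the value to put
# under the "auth" key of the secret the ingress annotation points at
printf '%s' "$htpasswd_entry" | base64

# equivalent, letting kubectl do the encoding (needs a cluster):
#   printf '%s\n' "$htpasswd_entry" > auth
#   kubectl create secret generic basic-auth --from-file=auth
# then reference the secret from the ingress with annotations:
#   ingress.kubernetes.io/auth-type: basic
#   ingress.kubernetes.io/auth-secret: basic-auth
```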

Use kube-lego

Kube-lego allows you to automatically configure TLS and generate LetsEncrypt certificates. Just be aware that by default it uses the Staging authority, which issues fake certificates, and when you switch to Production you will probably hit an issue similar to “Issue with switching from LE staging to LE prod: 403 urn:acme:error:unauthorized: No registration exists matching provided key”. You can find the solution in that thread: it is as simple as deleting the kube-lego-account secret.

Non-Kubernetes services can be ingressed with Kubernetes as well

I have mentioned before that I have a SecuritySpy server on one of my Mac Mini boxes. You can configure Kubernetes so it will redirect traffic to this service inside your home network and deal with TLS and automatic generation of LetsEncrypt certificates.

To do that you need to configure an Endpoints object, a Service and an Ingress. Look at Services without selectors.
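A minimal sketch of the three objects, assuming a hypothetical Mac Mini at 192.168.1.50 with SecuritySpy on port 8000 and a made-up host name (adjust all three to your setup). Because the Service has no selector, Kubernetes does not manage the Endpoints object, so you create one with the same name yourself:

```yaml
kind: Service
apiVersion: v1
metadata:
  name: securityspy
spec:
  ports:
  - port: 8000
---
# Endpoints must share the Service's name; this is where traffic goes
kind: Endpoints
apiVersion: v1
metadata:
  name: securityspy
subsets:
- addresses:
  - ip: 192.168.1.50
  ports:
  - port: 8000
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: securityspy
spec:
  rules:
  - host: securityspy.example.com
    http:
      paths:
      - backend:
          serviceName: securityspy
          servicePort: 8000
```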

  • 20 Jun 2017
  • kubernetes, k8s, docker, kubectl

Docker is great. Managing Docker can be a pain. Docker-compose could not address all of the issues, so it always got wrapped with supporting shell scripts, similar to what I built for docker-splunk. I have heard a lot about Kubernetes, saw it everywhere, read a lot of articles, but it always felt overcomplicated for my home infrastructure.

But the day has come. And I am so glad that I have finally looked at Kubernetes; there are so many great things about it, so many features I had been missing:

  • jobs. No need to have special containers with embedded cron jobs anymore.
  • init containers. Forget about building large shell scripts in containers to support some special initialization scenarios, just use init containers.
  • good dependency management between everything.
  • configuration and secrets management out of the box. Forget about hundreds of environment variables.
  • ingress out of the box. nginx-proxy is great, but it had some issues with the latest versions of docker and docker service.
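To illustrate the configuration point from the list above, a hypothetical ConfigMap and a pod that pulls every key in as environment variables (all names here are made up):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  LOG_LEVEL: debug
  LISTEN_PORT: "8080"
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: nginx
    # expose every ConfigMap key as an environment variable
    envFrom:
    - configMapRef:
        name: app-config
```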

If you want to play with Kubernetes, use minikube. It allows you to set up a simple single-node Kubernetes deployment in a VM.

NOTE: docker currently has an issue with the clock getting out of sync in the VM, and minikube inherits this issue. See known-issues for details and a workaround.

If you are ready for the next step, use kubeadm to set up a Kubernetes cluster (or a single node) on your own infrastructure (bare metal).

Just for reference, below are the versions I have used

$ kubeadm version
kubeadm version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.5", GitCommit:"490c6f13df1cb6612e0993c4c14f2ff90f8cdbf3", GitTreeState:"clean", BuildDate:"2017-06-14T20:03:38Z", GoVersion:"go1.7.6", Compiler:"gc", Platform:"linux/amd64"}

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.6", GitCommit:"7fa1c1756d8bc963f1a389f4a6937dc71f08ada2", GitTreeState:"clean", BuildDate:"2017-06-16T18:34:20Z", GoVersion:"go1.7.6", Compiler:"gc", Platform:"linux/amd64"}

$ sudo docker version
Client:
 Version:      17.03.1-ce
 API version:  1.27
 Go version:   go1.7.5
 Git commit:   c6d412e
 Built:        Mon Mar 27 17:14:09 2017
 OS/Arch:      linux/amd64

Server:
 Version:      17.03.1-ce
 API version:  1.27 (minimum version 1.12)
 Go version:   go1.7.5
 Git commit:   c6d412e
 Built:        Mon Mar 27 17:14:09 2017
 OS/Arch:      linux/amd64
 Experimental: false
$ uname -a
Linux outcoldbuntu 4.8.0-56-generic #61~16.04.1-Ubuntu SMP Wed Jun 14 11:58:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
$ lsb_release -a
Distributor ID: Ubuntu
Description:    Ubuntu 16.04.2 LTS
Release:    16.04
Codename:   xenial

Install Kubernetes on Ubuntu

Mostly just follow the manual Using kubeadm to Create a Cluster. A few caveats are listed below.


Kubernetes does not require the latest Docker version. kubeadm actually showed a warning that my version is higher than the latest validated version, which is 1.12 at the moment.

I have not seen any issues with the latest version of Docker, and because I already had it installed, I kept it.

To install latest Docker just follow the manual on how to install Docker on Ubuntu.


You probably want to use flannel as the pod network add-on; it works out of the box. The manual above suggests using --pod-network-cidr= with kubeadm init, so don’t forget it

sudo kubeadm init --pod-network-cidr=

Because I am creating a one-node Kubernetes deployment, I needed to use

kubectl taint nodes --all

This allows pods to be scheduled on this node (by default the master is not schedulable).

Also perform the steps to give kubectl access to the Kubernetes API:

sudo cp /etc/kubernetes/admin.conf $HOME/
sudo chown $(id -u):$(id -g) $HOME/admin.conf
export KUBECONFIG=$HOME/admin.conf

You can also just copy admin.conf to ~/.kube/config, as this is the default place where kubectl looks for its config.


This is where things start to get more complicated. Kubernetes is built with extensibility in mind, which is why there are always a lot of options.

If you are not sure which network add-on to use, use flannel. The reason is quoted below:

Flannel is a very simple overlay network that satisfies the Kubernetes requirements. Many people have reported success with Flannel and Kubernetes.

To set it up use

$ kubectl apply -f
$ kubectl apply -f

After that you should see that the current node has a Ready status:

$ kubectl get nodes
NAME           STATUS    AGE       VERSION
outcoldbuntu   Ready     1h        v1.6.5

kubectl config

If you want to use multiple clusters from one environment, you can manually merge multiple configuration files into one. For example, on my Mac I already had the configuration from minikube, saved under ~/.kube/config. To be able to connect to the just-created cluster on my Ubuntu box, I copied the admin.conf file to my local box and merged all of its values into ~/.kube/config. After that I see multiple contexts defined:

$ kubectl config get-contexts
CURRENT   NAME                          CLUSTER      AUTHINFO           NAMESPACE
          kubernetes-admin@kubernetes   kubernetes   kubernetes-admin   
*         minikube                      minikube     minikube

And I can switch between them

$ kubectl config use-context kubernetes-admin@kubernetes
Switched to context "kubernetes-admin@kubernetes".
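As a side note, kubectl can help with the merge itself. A sketch, assuming the remote admin.conf was copied to ~/admin.conf: list both files in KUBECONFIG and kubectl will merge them in order.

```shell
# kubectl merges every file listed in KUBECONFIG, in order;
# ~/admin.conf is the assumed location of the copied remote config
export KUBECONFIG=$HOME/.kube/config:$HOME/admin.conf

# optionally write a single merged file and make it the default (needs kubectl):
#   kubectl config view --flatten > /tmp/config && mv /tmp/config ~/.kube/config
```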

Kubernetes Dashboard

To see metrics in the Dashboard, you need to install Heapster.

The easiest way to start with Heapster and the Dashboard is to use the official add-on configurations.

First create a standalone Heapster deployment:

kubectl apply -f

After that create the dashboard deployment:

kubectl apply -f

You can now get access to the dashboard:

$ kubectl proxy

And open http://localhost:8001/ui

Some follow up