I recently watched Joe Beda’s TGIK episode on kubeadm. It was an excellent introduction to the tool, and I found some time Sunday night to play around with it. The kubeadm installation guide is pretty straightforward, but I hit a few gotchas getting my cluster working. Below are the steps I took to get a kubeadm cluster running with Vagrant and Fedora 27. To get started I installed VirtualBox and Vagrant on my laptop. Once both packages were installed I ran my bootstrap script to create 3 cluster nodes:
$ git clone https://github.com/Matty9191/kubernetes.git
$ kubernetes/scripts/create-kubeadm-cluster
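Each node gets the standard kubeadm preparation as part of this. For reference, that per-node setup boils down to something like the following sketch (based on the kubeadm install docs for RPM-based distros; the exact repository and sysctl settings the script writes may differ):

# Run as root on each node.
# Kubernetes package repository (the script creates a similar file)
cat <<'EOF' > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF

# Bridged traffic needs to pass through iptables for kube-proxy to work
cat <<'EOF' > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sysctl --system

# Install the container runtime and the Kubernetes tooling
dnf install -y docker kubelet kubeadm kubectl
systemctl enable docker && systemctl start docker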
The script will create 3 Fedora 27 Vagrant boxes and install the latest version of Kubernetes (a specific version can also be passed as the first argument). It also updates the hosts file, adds a couple of sysctl values, and creates the Kubernetes yum repository file, as shown in the sketch above. To prepare the cluster you will need to pick one node to run the control plane components (etcd, API server, scheduler, controller manager). Once you identify a node, log in and fire up the kubelet:
$ systemctl enable kubelet && systemctl start kubelet
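Don't be alarmed if the kubelet immediately starts restarting in a loop at this point; it has no configuration to work with until kubeadm init runs, and systemd will keep cycling it. You can confirm the service is enabled (and see the restarts) with:

$ systemctl status kubelet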
To provision the control plane you can run kubeadm init:
$ kubeadm init --pod-network-cidr=10.244.0.0/16 --service-cidr=10.10.0.0/16 --apiserver-advertise-address=10.10.10.101
[init] Using Kubernetes version: v1.9.3
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
[WARNING FileExisting-crictl]: crictl not found in system path
[preflight] Starting the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [kub1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.10.0.1 10.10.10.101]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
[init] This might take a minute or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 83.004315 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node kub1 as master by adding a label and a taint
[markmaster] Master kub1 tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: 078ce3.486bf3405ee8b160
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy
Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of machines by running the following on each node
as root:
kubeadm join --token XXXXXXX 10.10.10.101:6443 --discovery-token-ca-cert-hash sha256:388761e7df7b5026d1194254b710ed6de9e83da8a0600314dc22353b90bc9b31
The kubeadm command line above specifies the pod and service CIDR ranges as well as the IP address the API server should listen on for requests. If you encounter a warning or a kubeadm error, you can use kubeadm reset to revert the changes that were made; this returns your system to the state it was in before init ran. To finish the installation you need to set up a networking solution. I’m using flannel’s host-gw backend, but there are several other options available. To deploy flannel I created a kubeconfig using the “To start using your cluster” steps listed above:
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
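At this point kubectl can talk to the API server, but until a pod network is deployed the master will typically report NotReady and kube-dns will sit in Pending, so a quick check like the following is mostly to confirm connectivity:

$ kubectl get nodes
$ kubectl get pods -n kube-system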
Next I ran curl to retrieve the deployment manifest (you should review the YAML file before applying it to the cluster), used sed to switch the backend from vxlan to host-gw and to pin flannel to the eth1 interface, and then ran kubectl create to apply the configuration:
$ curl https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml | sed 's/vxlan/host-gw/g' | sed 's/kube-subnet-mgr\"/&\, \"-iface\"\, \"eth1\"/' > flannel.yml
$ kubectl create -f flannel.yml
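Flannel is deployed as a DaemonSet named kube-flannel-ds, so one way to verify the rollout is to check the DaemonSet and its pods; a flannel pod should appear on each node as it joins the cluster:

$ kubectl -n kube-system get daemonset kube-flannel-ds
$ kubectl -n kube-system get pods | grep flannel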
If no errors were generated, the remaining nodes can be joined to the cluster with kubeadm join:
$ kubeadm join --token XXXXXXXXX 10.10.10.101:6443 --discovery-token-ca-cert-hash sha256:f4df190b26844aecabd03510f79ef48a2501f48b810da739f9380616fb83369d
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[preflight] Running pre-flight checks
[discovery] Trying to connect to API Server "10.10.10.101:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.10.10.101:6443"
[discovery] Requesting info from "https://10.10.10.101:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "10.10.10.101:6443"
[discovery] Successfully established connection with API Server "10.10.10.101:6443"
[bootstrap] Detected server version: v1.8.8
[bootstrap] The server supports the Certificates API (certificates.k8s.io/v1beta1)
Node join complete:
* Certificate signing request sent to master and response
received.
* Kubelet informed of new secure connection details.
Run 'kubectl get nodes' on the master to see this machine join.
If everything went smoothly, you should have a 3-node cluster:
$ kubectl get nodes
NAME      STATUS    ROLES     AGE       VERSION
kub1      Ready     master    3m        v1.9.3
kub2      Ready     <none>    36s       v1.9.3
kub3      Ready     <none>    20s       v1.9.3
And a number of pods should be running in the kube-system namespace:
$ kubectl get pods -n kube-system
NAME                           READY     STATUS    RESTARTS   AGE
etcd-kub1                      1/1       Running   0          15m
kube-apiserver-kub1            1/1       Running   0          15m
kube-controller-manager-kub1   1/1       Running   0          15m
kube-dns-545bc4bfd4-lqwm2      3/3       Running   0          16m
kube-flannel-ds-25zfb          1/1       Running   0          14m
kube-flannel-ds-46r98          1/1       Running   0          5m
kube-flannel-ds-fnmf5          1/1       Running   0          5m
kube-proxy-czhb5               1/1       Running   0          5m
kube-proxy-dv4fb               1/1       Running   0          16m
kube-proxy-mw2x8               1/1       Running   0          5m
kube-scheduler-kub1            1/1       Running   0          15m
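As a last sanity check, it's worth scheduling something on the new workers. A throwaway nginx deployment works well for this (the name and image here are arbitrary, and with this version of kubectl the run command creates a Deployment):

$ kubectl run smoke-test --image=nginx --replicas=2
$ kubectl get pods -o wide
$ kubectl delete deployment smoke-test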
While the installation process was relatively straightforward, there were a few gotchas: