Local development environment with Kubernetes in 2019

It's 2019, and Docker for Desktop now ships with built-in local Kubernetes support.

This post is about setting up a local Kubernetes development environment using Docker for Desktop.

This Git repo contains the files used in this blog post: https://github.com/AlexanderAllen/k8-local-nginx-ingress-tls


Let's make sure Kubernetes is running with kubectl get pods -n kube-system:

NAME                                         READY   STATUS    RESTARTS   AGE
etcd-docker-for-desktop                      1/1     Running   0          44d
kube-apiserver-docker-for-desktop            1/1     Running   0          44d
kube-controller-manager-docker-for-desktop   1/1     Running   0          44d
kube-dns-86f4d74b45-6vz6v                    3/3     Running   0          44d
kube-proxy-kmsg7                             1/1     Running   0          44d
kube-scheduler-docker-for-desktop            1/1     Running   0          44d

Install Helm

We're going to set up a local environment that mimics production as closely as possible, which includes setting up TLS. We'll use Helm/Tiller to help install cert-manager, which will manage TLS certificates for us.

If you haven't installed Helm already, you can follow the instructions here to install it on your local system.

If you're on OSX, it's:

brew install kubernetes-helm

then:

helm init --history-max 200
$HELM_HOME has been configured at ~/.helm.

Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.

Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
To prevent this, run `helm init` with the --tiller-tls-verify flag.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation

After that, you can verify the installation by running helm version:

helm version
Client: &version.Version{SemVer:"v2.14.0", GitCommit:"05811b84a3f93603dd6c2fcfe57944dfa7ab7fd0", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.14.0", GitCommit:"05811b84a3f93603dd6c2fcfe57944dfa7ab7fd0", GitTreeState:"clean"}

You should see an entry for both client and server.

Install cert-manager

Here are the commands I'm running:

# Create CRDs - Custom Resource Definitions.
kubectl apply -f cert-man-crds.yml

# Create a namespace to run cert-manager in
kubectl create namespace cert-manager

# Disable resource validation on the cert-manager namespace
kubectl label namespace cert-manager certmanager.k8s.io/disable-validation=true

# Add the Jetstack Helm repository to Helm. This repository contains the cert-manager Helm chart.
helm repo add jetstack https://charts.jetstack.io

# Finally, install cert-manager itself into the cert-manager namespace.
# (See the Troubleshooting section below for why this applies the static
# manifest with --validate=false instead of installing the Helm chart.)
kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v0.8.0/cert-manager.yaml --validate=false

...

# Create the staging (development) certificate issuer:
kubectl apply -f staging-issuer.yml
clusterissuer.certmanager.k8s.io/letsencrypt-staging created
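For reference, a minimal staging-issuer.yml could look like the sketch below. The email address is a placeholder, and the exact fields are an assumption based on the certmanager.k8s.io/v1alpha1 API used by cert-manager v0.8; the actual file lives in the Git repo linked above.

```yaml
# staging-issuer.yml (sketch) - a ClusterIssuer pointing at the
# Let's Encrypt staging ACME endpoint, using the HTTP-01 solver.
apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    # The staging endpoint issues untrusted ("fake") certificates,
    # but has much friendlier rate limits than production.
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    email: you@example.com  # placeholder - use your own address
    privateKeySecretRef:
      name: letsencrypt-staging
    http01: {}
```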

Install the K8s Service and Deployment

# Create Deployment and Service objects.
kubectl apply -f deployment-svc.yml
service/hello-svc created
deployment.apps/hello created
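A minimal deployment-svc.yml could look like the sketch below. The hashicorp/http-echo image is an assumption, inferred from the "helloworld" response and the 5678 container port that shows up in the endpoints later on; the real file is in the repo linked above.

```yaml
# deployment-svc.yml (sketch) - a single-replica "hello" Deployment
# plus a ClusterIP Service fronting it on port 80.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
        - name: hello
          image: hashicorp/http-echo  # assumption: serves "-text" on 5678
          args: ["-text=helloworld"]
          ports:
            - containerPort: 5678
---
apiVersion: v1
kind: Service
metadata:
  name: hello-svc
spec:
  selector:
    app: hello
  ports:
    - port: 80          # Service port the Ingress points at
      targetPort: 5678  # container port
```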

Create the nginx ingress controller

kubectl apply -f official-ingress-controller-nginx.yaml
namespace/ingress-nginx created
configmap/nginx-configuration created
configmap/tcp-services created
configmap/udp-services created
serviceaccount/nginx-ingress-serviceaccount created
clusterrole.rbac.authorization.k8s.io/nginx-ingress-clusterrole created
role.rbac.authorization.k8s.io/nginx-ingress-role created
rolebinding.rbac.authorization.k8s.io/nginx-ingress-role-nisa-binding created
clusterrolebinding.rbac.authorization.k8s.io/nginx-ingress-clusterrole-nisa-binding created
deployment.apps/nginx-ingress-controller created

Create the nginx ingress load balancer

kubectl apply -f load-balancer-svc.yaml
service/ingress-nginx created
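The stock controller manifest doesn't expose the controller itself, so this adds a LoadBalancer Service on top, which Docker for Desktop binds to localhost. A sketch of load-balancer-svc.yaml (the selector labels are an assumption, following the labels used by the official ingress-nginx manifests):

```yaml
# load-balancer-svc.yaml (sketch) - exposes the nginx ingress
# controller on localhost:80/443 via Docker for Desktop's
# built-in LoadBalancer support.
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  ports:
    - name: http
      port: 80
      targetPort: 80
    - name: https
      port: 443
      targetPort: 443
```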

Create the ingress rules

kubectl apply -f ingress.yml
ingress.extensions/hello-ingress created
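The contents of ingress.yml can be read straight back out of the last-applied-configuration annotation shown in the Verification section; reformatted as YAML it is:

```yaml
# ingress.yml - routes localhost to hello-svc and asks cert-manager
# (via the cluster-issuer annotation) for a TLS certificate stored
# in the letsencrypt-staging secret.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: hello-ingress
  namespace: default
  annotations:
    kubernetes.io/ingress.class: nginx
    certmanager.k8s.io/cluster-issuer: letsencrypt-staging
spec:
  tls:
    - hosts:
        - localhost
      secretName: letsencrypt-staging
  rules:
    - host: localhost
      http:
        paths:
          - backend:
              serviceName: hello-svc
              servicePort: 80
```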

Verify the ingress object

kubectl get ingress
NAME            HOSTS       ADDRESS   PORTS     AGE
hello-ingress   localhost             80, 443   20s

And now our app should be available at https://localhost:

Success!

The "your connection is not private" message appears because, with the Let's Encrypt staging issuer, cert-manager serves an untrusted "fake" certificate. If you're seeing this page at all, you're set. Just click Advanced, then Proceed:

Click on Advanced, then Proceed to localhost

If you proceed to your new local hello world app, you should see the message "helloworld", along with your happy fake certificate issued by "cert-manager.local", courtesy of cert-manager.

For instructions on how to get rid of the "Not Secure" message, check my post Using "fake" certificates for development on OSX with Cert Manager, Let's Encrypt Staging, and Kubernetes Helm

From there on, it should be smooth sailing!


Verification

Verify the ingress object we created:

kubectl describe ingress
Name:             hello-ingress
Namespace:        default
Address:          
Default backend:  default-http-backend:80 (<none>)
TLS:
  letsencrypt-staging terminates localhost
Rules:
  Host       Path  Backends
  ----       ----  --------
  localhost  
                hello-svc:80 (10.1.0.42:5678)
Annotations:
  certmanager.k8s.io/cluster-issuer:                 letsencrypt-staging
  kubectl.kubernetes.io/last-applied-configuration:  {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{"certmanager.k8s.io/cluster-issuer":"letsencrypt-staging","kubernetes.io/ingress.class":"nginx"},"name":"hello-ingress","namespace":"default"},"spec":{"rules":[{"host":"localhost","http":{"paths":[{"backend":{"serviceName":"hello-svc","servicePort":80}}]}}],"tls":[{"hosts":["localhost"],"secretName":"letsencrypt-staging"}]}}

  kubernetes.io/ingress.class:  nginx
Events:
  Type    Reason             Age    From          Message
  ----    ------             ----   ----          -------
  Normal  CreateCertificate  4m22s  cert-manager  Successfully created Certificate "letsencrypt-staging"

You can verify in the list of recent events that the certificate got created:

kubectl get events
LAST SEEN   FIRST SEEN   COUNT   NAME                                      KIND          SUBOBJECT                TYPE     REASON                  SOURCE                        MESSAGE
7m          7m           1       hello-ingress.15a6fe2cec9da278            Ingress                                Normal   CreateCertificate       cert-manager                  Successfully created Certificate "letsencrypt-staging"
7m          7m           1       letsencrypt-staging.15a6fe2cf9d83c64      Certificate                            Normal   Generated               cert-manager                  Generated new private key
7m          7m           1       letsencrypt-staging.15a6fe2cfbb61164      Certificate                            Normal   GenerateSelfSigned      cert-manager                  Generated temporary self signed certificate
7m          7m           1       letsencrypt-staging.15a6fe2cfd1b47e0      Certificate                            Normal   OrderCreated            cert-manager                  Created Order resource "letsencrypt-staging-33880631"

Endpoints

kubectl get nodes
NAME                 STATUS   ROLES    AGE   VERSION
docker-for-desktop   Ready    master   47d   v1.10.11

kubectl get deployments
NAME    DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
hello   1         1         1            1           14m

kubectl get services
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
hello-svc    ClusterIP   10.102.51.133   <none>        80/TCP    12m
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP   47d

kubectl get endpoints
NAME         ENDPOINTS           AGE
hello-svc    10.1.0.42:5678      11m
kubernetes   192.168.65.3:6443   47d


Troubleshooting cert-manager installation

When installing cert-manager to the kube-system namespace, I ran into a missing required field "caBundle" webhook validation error:

helm install --name cert-manager --namespace kube-system jetstack/cert-manager --version v0.8.0
Error: validation failed: error validating "": error validating data: [ValidationError(ValidatingWebhookConfiguration.webhooks[0].clientConfig): missing required field "caBundle" in io.k8s.api.admissionregistration.v1beta1.WebhookClientConfig, ValidationError(ValidatingWebhookConfiguration.webhooks[1].clientConfig): missing required field "caBundle" in io.k8s.api.admissionregistration.v1beta1.WebhookClientConfig, ValidationError(ValidatingWebhookConfiguration.webhooks[2].clientConfig): missing required field "caBundle" in io.k8s.api.admissionregistration.v1beta1.WebhookClientConfig]

This (closed) Github issue provided some pointers:

https://github.com/jetstack/cert-manager/issues/1143

Namely, following the now-updated official installation instructions from cert-manager itself: https://docs.cert-manager.io/en/latest/getting-started/install/kubernetes.html.

Following those instructions, I skipped kubectl's client-side schema validation, and the install went through. The key is the --validate=false flag on the kubectl apply -f command:

kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v0.8.0/cert-manager.yaml --validate=false

Inspiration for this post

Credits: