Working with Kubernetes Ingress Nginx
Previously this week ...
While working on DigitalOcean Kubernetes, I set up a native load balancer Service to serve the Ghost CMS. Whenever a Kubernetes Service has
`spec.type: LoadBalancer`, DigitalOcean automatically provisions an actual virtual Load Balancer for it, like this:
Here's the manifest:
```yaml
# manifest.yml
kind: Service
apiVersion: v1
metadata:
  name: ghost
spec:
  type: LoadBalancer
  selector:
    app: ghost
  ports:
    - name: ghost-backend
      protocol: TCP
      port: 80
      targetPort: 2368
```
It worked. I could visit the Ghost CMS on its IP, which
`kubectl get svc` shows once you run
`kubectl apply -f manifest.yml`.
That's all good and dandy, except God forbid you ever have to change or recreate the load balancer (defined in
`manifest.yml` above). If you do, the external IP address changes and you have to rejigger your DNS records to point to the new IP. That can take time to propagate, which is bad for availability.
I also knew that there were other types of Load Balancers for Kubernetes out there, with more flexibility.
This post confirmed some of my suspicions:
Just pray you don't have to rebuild the load balancer, or the DNS will have to be updated. This is where I wish there was a dynamic IP for the load balancer.
And as it happens, DigitalOcean published a guide on How to Set Up an Nginx Ingress with Cert-Manager on DigitalOcean Kubernetes.
And today's goal was to set up a "real" K8s (Kubernetes) load balancer. Now here's where a soup of technical terms comes up, so let me explain the different kinds of resources involved:
- Ingress Resources: These are routing rules that determine which HTTP and HTTPS traffic goes to which service.
- Ingress Controllers: They're in charge of performing the actual routing from the Internet to the Services.
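To make the first of those concrete, here's a minimal sketch of an Ingress Resource routing traffic to the Ghost Service from earlier. The hostname is a placeholder, and the `apiVersion` may differ on older clusters (earlier Kubernetes versions used `extensions/v1beta1`):

```yaml
# ghost-ingress.yml — a sketch of an Ingress Resource; hostname is a placeholder
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ghost-ingress
spec:
  ingressClassName: nginx
  rules:
    - host: blog.example.com        # placeholder domain
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: ghost         # the Service defined in manifest.yml
                port:
                  number: 80
```

The Ingress Controller watches for resources like this one and configures its Nginx accordingly.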
For today I chose the Kubernetes Ingress. Both the Kubernetes and Nginx ingress controllers use Nginx, but the former is maintained by the Kubernetes organization, and the latter by the Nginx company.
It took about two hours to set up the Kubernetes Ingress, serving only unencrypted (HTTP) traffic. Then it was time to set up the HTTPS part.
For HTTPS, which requires creating TLS/SSL certificates, I used a project called Cert Manager. Cert Manager automatically provisions and manages TLS certificates in Kubernetes. You can read about how to set it up in the DigitalOcean tutorial as well. Cert Manager is installed in the cluster via another project called Helm.
I messed up some of the TLS configuration in the process and had to tear down the resources created by Cert Manager, fix the error, and re-run the certificate provisioning process. Cert Manager uses Let's Encrypt, which provides free TLS certificates. Let's Encrypt throttles the number of requests you can make to its servers to about 5 requests for production certificates, and allows some more for test certificates (fake/test TLS certificates). Since I already used my allotted requests on the first round of cert provisioning (which had errors), now I'm waiting for Let's Encrypt to un-throttle me and provide the "real" certificates. Once that's done, the Cert Manager running in Kubernetes should take care of automatically assigning them to the appropriate Services. Here's the "waiting" status:
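For reference, the Let's Encrypt account that Cert Manager uses is defined by an issuer resource along these lines. This is a sketch: the email is a placeholder, and the `apiVersion` depends on your Cert Manager version (older releases used `certmanager.k8s.io/v1alpha1`). Pointing `server` at the staging endpoint is what gets you the more generous test-certificate limits:

```yaml
# staging-issuer.yml — a sketch of a Cert Manager ClusterIssuer (placeholders marked)
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    # Staging endpoint: higher rate limits, but issues untrusted test certificates.
    # Swap in https://acme-v02.api.letsencrypt.org/directory for production.
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    email: you@example.com          # placeholder email
    privateKeySecretRef:
      name: letsencrypt-staging     # Secret where the ACME account key is stored
    solvers:
      - http01:
          ingress:
            class: nginx            # solve challenges through the Nginx Ingress
```

Once an Ingress references an issuer like this, Cert Manager handles the ACME challenge and stores the resulting certificate in a Secret for the Ingress to serve.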
In the meantime, Cert Manager is cool enough to "issue" a temporary certificate that, while not actually validated, shows that the site is using HTTPS:
As you can see on the certificate, it says "Kubernetes Ingress Controller Fake Certificate". That's us! That's our Ingress Controller issuing the "fake" certificate. It should be replaced by the real one automatically, and soon, I hope.
That's it for today's Kubernetes lesson. Hopefully I'll get to write another post detailing the troubleshooting and teardown of the Ingress Controller and the certificate provisioning process. That's more useful information to write about. This was just my first "real" post on this new blog.