This post explains how to set up Ingress for Kubernetes on Amazon EKS and make your Kubernetes services available to the internet.
What is Ingress?
Services you deploy in your Kubernetes cluster are not, by default, visible to anyone outside the cluster.
An Ingress is a special type of Kubernetes object that exposes one or more Services to the internet. It’s an abstraction that covers load balancing, HTTP routing, and SSL termination.
While many Kubernetes resources are “write once, run anywhere” on any cloud platform, Ingress behaves quite differently on different platforms. This is because each platform has its own way of hooking up Kubernetes services to the outside internet.
Most Kubernetes platforms implement some basic Ingress functionality out of the box. You can also add a third-party ingress controller to augment or replace the platform’s basic features.
In Kubernetes, an ingress controller is not the same as an Ingress resource. An ingress controller runs on your cluster, waits for you to create Ingress resources, and manages controller-specific handling for HTTP requests based on those Ingress specs.
When the controller notices an Ingress that is marked with certain special annotations, it comes to life and creates new resources to implement the ingress flow. This might include Kubernetes pods containing reverse proxies, or an external load balancer. Most controllers provide some configuration parameters in the form of controller-specific annotations you can apply to Ingress resources.
Currently, Amazon EKS ships with only a very basic ingress system. For practical purposes, you will almost certainly want to install a more powerful ingress controller.
Here are some possible choices:
Option #1: Kubernetes Service with type: LoadBalancer
This is the “native” ingress option supported by EKS. It does not actually use the Ingress resource at all. Just create a Kubernetes Service and set its type to “LoadBalancer”, and then EKS will deploy an ELB to receive traffic on your behalf.
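For example, a minimal Service of this kind might look like the following sketch (the name, labels, and ports are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-web-app        # hypothetical service name
spec:
  type: LoadBalancer      # tells EKS to provision an ELB for this Service
  selector:
    app: my-web-app       # assumes pods labeled app=my-web-app
  ports:
    - port: 80            # port the ELB listens on
      targetPort: 8080    # port the pods listen on
```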
This approach has a few drawbacks:
- Each Service spawns its own ELB, incurring extra cost.
- You cannot link more than one Service to a DNS hostname, because ELBs offer no ability to route to different targets based on HTTP Host headers or request paths.
- Most recent Kubernetes applications and Helm charts have already switched to the newer Ingress system and do not expose themselves through externally-visible Services.
- You miss the flexibility of modern ingress controllers like Nginx, including features like automatic TLS certificate management and OAuth security mix-ins.
Option #2: alb-ingress-controller
This is a third-party project (also available as a Helm chart) that spawns ALBs to correspond to specially-marked Ingress resources in Kubernetes. It tries to automatically manage ALB target groups and routing rules to match the specs it sees on each Ingress resource.
Advantages:
- Works with the new Ingress resources rather than raw Services.
- Multiple Services can share one Ingress using host-based or path-based routing, if you set up the Ingress specs manually; see the sketch after this list. (Note that this differs from the one-Ingress-per-Service model you will find in most Kubernetes documentation and public Helm charts.)
- Includes support for lots of tweakable options on ALBs and target groups, like security groups and health checks.
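To illustrate the shared-Ingress pattern, here is a minimal sketch of one alb-ingress-controller Ingress routing to two Services by path. The hostname, service names, and apiVersion are illustrative (older clusters use extensions/v1beta1 for Ingress):

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: shared-ingress
  annotations:
    kubernetes.io/ingress.class: alb                   # handled by alb-ingress-controller
    alb.ingress.kubernetes.io/scheme: internet-facing  # ALB reachable from the internet
spec:
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /api/*       # path-based routing to the API service
            backend:
              serviceName: api-service
              servicePort: 80
          - path: /*           # everything else goes to the web frontend
            backend:
              serviceName: web-service
              servicePort: 80
```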
But there are some drawbacks too:
- Creates a new ALB for each Ingress resource. So if you follow the common pattern where each Kubernetes Service has its own dedicated Ingress, then you will end up with multiple ALBs instead of one shared ALB.
- Doesn’t always maintain the links between target groups and worker nodes properly. Sometimes it fails to get a target group into a “healthy” state on start-up, or drops live nodes from an active target group for no apparent reason. (Anecdotally, I have found the ALB ingress controller to be more reliable when its target type is set to “instance”, routing to worker nodes, rather than “ip”, routing directly to pods.)
A brief note about health check settings on ALB target groups: by default, ALBs want to see a “200 OK” response on HTTP requests to “/” before enabling a target group. If you are just setting up your cluster, you might not yet have a service set up to respond to “/” requests. This will prevent any target group from registering as “healthy”, even if you have working endpoints on other paths. As a temporary fix, you could configure the ALB to accept “404 Not Found” as a healthy response.
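With alb-ingress-controller, for example, this can be expressed as annotations on the Ingress. This fragment is a sketch using that controller's annotation names; adjust the codes to match your services:

```yaml
metadata:
  annotations:
    alb.ingress.kubernetes.io/healthcheck-path: /       # path the ALB probes
    alb.ingress.kubernetes.io/success-codes: '200,404'  # treat 404 as healthy too
```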
Option #3: nginx-ingress-controller with ELB support
Recent versions of the standard Nginx ingress controller (also available as a Helm chart) can create AWS ELBs to accept traffic. I have not tried this approach, because it doesn’t offer as much flexibility as alb-ingress-controller.
Note, however, that you will find many Kubernetes guides on the web that assume you are using the Nginx ingress controller, because it is platform-neutral and offers flexible routing and manipulation of the traffic passing through it.
A working compromise: alb-ingress-controller + Nginx
For a practical Kubernetes cluster on Amazon, I recommend a combination of two ingress controllers: alb-ingress-controller to serve as the first hop, plus nginx-ingress-controller for final routing.
The advantage of this configuration is that you can use one ALB for the whole cluster, and still benefit from the standardized and flexible Nginx-based configuration for individual Services.
With a single ALB, you minimize the ongoing cost, plus have the ability to run multiple services on a single DNS CNAME.
To set this up, deploy both ingress controllers into Kubernetes. The standard Helm charts work fine. Then manually create a single ALB Ingress resource that deploys one ALB for the whole cluster, with all the AWS-specific settings like health checks and ACM-managed TLS certificates. This main Ingress will have only one route, which forwards all traffic to the nginx-ingress-controller Service.
Here is a sketch of what the Kubernetes manifest for this singleton Ingress might look like. The names are illustrative, the certificate ARN is a placeholder, and the annotations follow alb-ingress-controller’s conventions:
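```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: main-ingress    # the single, cluster-wide ALB Ingress
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: instance    # see reliability note above
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS": 443}]'
    # placeholder ARN -- substitute your own ACM certificate
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-east-1:123456789012:certificate/REPLACE-ME
    alb.ingress.kubernetes.io/healthcheck-path: /
    alb.ingress.kubernetes.io/success-codes: '200,404'
spec:
  rules:
    - http:
        paths:
          - path: /*    # single catch-all route
            backend:
              serviceName: nginx-ingress-controller  # the Nginx controller's Service
              servicePort: 80
```

Note that this Ingress must live in the same namespace as the nginx-ingress-controller Service it forwards to.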
The Nginx ingress controller, when used in this mode, does not create any AWS resources and is not visible to the internet. It just sits there functioning as a reverse proxy, passing requests from the ALB inward to other services in the cluster that register with it.
All Nginx configuration comes from Kubernetes Ingress resources. You are free to set up multiple independent services, each with its own Ingress resource, and the Nginx controller will cleverly merge their routing rules into a single Nginx configuration for its reverse-proxy pods.
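As an illustration, an individual application’s Ingress handled by the Nginx controller might look like this sketch (the hostname and service name are hypothetical):

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: blog-ingress
  annotations:
    kubernetes.io/ingress.class: nginx   # handled by nginx-ingress-controller, not the ALB
spec:
  rules:
    - host: blog.example.com
      http:
        paths:
          - path: /
            backend:
              serviceName: blog-service
              servicePort: 80
```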
So far I have only noticed a couple of minor issues with this approach:
- alb-ingress-controller sometimes drops healthy targets out of a target group, as mentioned above.
- Because there is an extra layer of proxying along the request path, you have to be careful about which HTTP headers to pass inward and outward, and how to correctly track client IP addresses and HTTP vs. HTTPS in the X-Forwarded-* headers (see the ConfigMap sketch below).
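One setting that helps with the header issue: the standard nginx-ingress-controller reads a ConfigMap, and its use-forwarded-headers option tells Nginx to trust the X-Forwarded-* headers set by an upstream proxy such as the ALB. A minimal sketch follows; the ConfigMap’s name and namespace depend on how your controller was deployed:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-ingress-controller   # must match the name your controller expects
data:
  use-forwarded-headers: "true"    # trust X-Forwarded-For/-Proto set by the ALB
```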
Important note on IAM Roles & Garbage Collection
All of the above options involve pods within the Kubernetes cluster creating AWS resources like ELBs, ALBs, and Target Groups. This has two important implications:
First, you need to give Kubernetes workers permission to manage ELBs/ALBs, for example by attaching the necessary load-balancer permissions to the IAM role used by the worker nodes.
Second, beware that these resources often don’t get cleaned up automatically. You will have to do some manual garbage collection from time to time. ELBs/ALBs are of particular concern because you pay every hour they are running, even if they are not receiving traffic.
Conclusion
I hope you have found this information helpful! Feel free to contact me at dmaas@maasdigital.com.