Exposing Kubernetes applications on-premise vs. public cloud vs. bare-metal
Compared to Docker Compose deployments, exposing a Kubernetes application might be overwhelming at first. There are multiple seemingly similar options for which Kubernetes resources to use, and not all features might be available depending on the underlying infrastructure.
Basics of exposing a Kubernetes application
When you create a pod (directly or, e.g., via a Deployment), all of its services are available only on an internal IP address reachable from within the Kubernetes cluster. This means you cannot access it from the outside, and since the IP address is assigned dynamically, you cannot reliably access it even from the inside (e.g., accessing the database pod from the application pod).
Luckily, Kubernetes supports exposing a pod's services via the Service object, which should cover all the use cases with the following types.
ClusterIP
ClusterIP is the most basic type and the default. It creates an internal Load Balancer in front of your pods (you can imagine it as HAProxy in front of multiple VMs). This Load Balancer has its own IP address, and more importantly, Kubernetes creates a DNS record for this IP address that other pods can use as a single point of communication.
There is a special case of a ClusterIP object called a headless service that creates just a DNS record pointing directly to the IP address of your pod. This is useful for single-pod deployments and particularly useful with StatefulSet controllers.
Creates:
- ClusterIP object that routes traffic to the targetPort of the pods.
Primary use case:
- Exposing pod services to other pods.
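To make this concrete, here is a minimal sketch of a ClusterIP Service; the name `my-app`, the label selector, and the ports are hypothetical placeholders rather than values from a real deployment.

```yaml
# Minimal ClusterIP Service sketch; `my-app` and the ports are
# hypothetical placeholders.
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: ClusterIP        # the default type, shown here for clarity
  # clusterIP: None      # uncomment to get a headless service instead
  selector:
    app: my-app          # matches pods labeled app=my-app
  ports:
    - port: 80           # port the Service listens on
      targetPort: 8080   # port the pods actually listen on
```

Other pods can then reach the application at the DNS name `my-app` (within the same namespace) or `my-app.<namespace>.svc.cluster.local`.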
NodePort
NodePort exposes ports on the hosting node, similarly to the ports section in Docker Compose. A port from the 30000–32767 range is auto-allocated (unless explicitly specified) and bound to the host's external IP address.
In most cases, you'll probably want to avoid NodePort. The restricted port range makes it useless for interfacing with users (some people have trouble remembering the four-digit PIN of their credit card, so good luck with a five-digit port). In a multi-node Kubernetes cluster, NodePort also won't give you a single point of communication for users or external services (you would need to create and configure an external Load Balancer yourself).
Creates:
- ClusterIP object that routes traffic to the targetPort of the pods.
- NodePort object. Traffic is routed from the external IP address and port to the ClusterIP object, which then routes it to the targetPort of the pods.
Primary use case:
- Exposing pod services to the external Load Balancer.
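As a sketch, a NodePort Service differs from the ClusterIP one above only in its type and the optional nodePort field; all names and ports below are again hypothetical.

```yaml
# Minimal NodePort Service sketch; names and ports are hypothetical.
apiVersion: v1
kind: Service
metadata:
  name: my-app-nodeport
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
    - port: 80           # ClusterIP port, still reachable inside the cluster
      targetPort: 8080   # port on the pods
      nodePort: 30080    # optional; omit to auto-allocate from 30000-32767
```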
LoadBalancer
LoadBalancer is the most advanced type of Service object. It creates an external Load Balancer bound to an external IP address and port (the implementation heavily depends on the underlying infrastructure, but more about that later). It solves both issues of NodePort: it can be bound to ports below 30000, and it creates a single point of access for users, making the deployment highly available. The main downside of the LoadBalancer service is that some Kubernetes clusters don't implement it.
Creates:
- ClusterIP object that routes traffic to the targetPort of the pods.
- NodePort object. Traffic is routed from the external IP address and port to the ClusterIP object, which then routes it to the targetPort of the pods.
- External Load Balancer. Traffic is routed from the external Load Balancer to the NodePort object, from the NodePort object to the ClusterIP object, and finally from the ClusterIP object to the targetPort of the pods.
Primary use case:
- Exposing pod services to the users or external services.
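A LoadBalancer Service manifest is almost identical; the infrastructure-specific part happens behind the scenes after you apply it. Names and ports below are hypothetical.

```yaml
# Minimal LoadBalancer Service sketch; names and ports are hypothetical.
apiVersion: v1
kind: Service
metadata:
  name: my-app-lb
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - port: 80           # port exposed on the external Load Balancer
      targetPort: 8080   # port on the pods
```

After applying it, `kubectl get service my-app-lb` will show the external IP once the underlying infrastructure provisions the Load Balancer, or `<pending>` forever if your cluster has no LoadBalancer implementation, which is exactly the problem discussed later in this article.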
Ingress
LoadBalancer seems to be a clear choice when exposing your pods to the users. There is, however, one crucial downside that makes using it directly impractical: with the LoadBalancer service, you can serve only one Service per port. This becomes an issue for web services because you probably won't dedicate the whole cluster to a single web application (unless it belongs to HR; you want to keep those guys happy).
Specifically for exposing web applications, Kubernetes contains an object called Ingress, which represents Reverse Proxy rules that support routing HTTP requests based on hostname and path. Ingress is implemented by an Ingress Controller, which is basically a pod running a specific Reverse Proxy (e.g., Nginx or Traefik).
Creates:
- Deployment running Ingress Controller (Reverse Proxy).
- LoadBalancer service (by default) bound to ports 80 and 443, routing all traffic to Ingress Controller pods.
- Reverse Proxy rules that route traffic from Ingress Controller pods to ClusterIP objects targeting pods with web applications.
Primary use case:
- Exposing web applications to the users.
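Here is a sketch of an Ingress object routing two hostnames to two ClusterIP Services; the hostnames, Service names, and ingress class are hypothetical and assume an NGINX Ingress Controller is installed.

```yaml
# Ingress sketch routing by hostname; hostnames, Service names, and the
# ingress class are hypothetical placeholders.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-apps
spec:
  ingressClassName: nginx        # assumes an NGINX Ingress Controller
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app     # ClusterIP Service from the earlier sketch
                port:
                  number: 80
    - host: blog.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-blog    # another hypothetical ClusterIP Service
                port:
                  number: 80
```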
As you can see, even with Ingress, the LoadBalancer service is crucial for exposing your services to users. Let's have a look at how it's implemented on various kinds of infrastructure.
Public cloud
If you are deploying an application to a public cloud Kubernetes provider, you are out of the woods in most cases. Creating LoadBalancer or Ingress Controller resources will automatically create a Load Balancer service in the cloud for you and point the requested ports to your Kubernetes cluster. There is a slight chance that your provider doesn't support this auto-creation feature. In that case, you have two options: create a Load Balancer in the cloud manually and point it to the NodePort of your Ingress Controller (you can set it up with just NodePorts), or switch to a better provider.
Bare-metal/Virtualization
The situation with clusters running on bare metal or virtualization (without user-defined networking) is more complicated than in the public cloud but much merrier than in the private cloud. At least two projects implement the LoadBalancer service for these deployments — MetalLB¹ and OpenELB². Both of these projects support two techniques to achieve HA (a configuration sketch follows the list below):
- BGP — needs to be supported by a physical router
- L2 Mode
OpenELB additionally supports virtual IP failover in VIP Mode (based on Keepalived).
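As an illustration of the L2 Mode referenced above, here is a sketch of a MetalLB configuration using the IPAddressPool and L2Advertisement CRDs introduced in MetalLB 0.13; the address range is a hypothetical chunk of your LAN reserved for Load Balancers.

```yaml
# MetalLB L2 Mode configuration sketch (MetalLB >= 0.13, CRD-based);
# the address range is a hypothetical reservation on your LAN.
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.168.1.240-192.168.1.250  # hand these out to LoadBalancer Services
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - default-pool                 # announce the pool above via ARP/NDP
```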
Private cloud
The situation in private clouds is usually the most complicated because it heavily depends on cloud capabilities. You can face two situations:
- The LoadBalancer service is supported, or at least the cloud provides Load Balancer as a Service (LBaaS) functionality. In this case, it's the same situation as with the public cloud.
- No LBaaS is supported at all (most private clouds I've seen). In this situation, you can create an additional networking port resource alongside your Kubernetes cluster in your private cloud and use this port's IP address for VIP Mode with OpenELB.
Local development
In the case of running a single-worker local Kubernetes cluster for development, you probably don't want to set up MetalLB or OpenELB services. Here you have at least two options:
- Run the Ingress Controller on HostNetwork³ — this runs Ingress Controller pods directly on the host network, binding to host ports 80 and 443 (see the sketch after this list).
- Use a lightweight distribution like k3s, which ships with the Klipper Service Load Balancer⁴. Klipper has one significant disadvantage compared to other LoadBalancer implementations: it won't create an actual Load Balancer, only forwarding iptables rules that redirect traffic from host ports to ClusterIP ports. This behavior is fine for single-node clusters, making it a great solution for local development that needs to stay portable with public cloud deployments.
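For the first option, the key is the hostNetwork flag in the controller's pod spec. Below is a trimmed-down, hypothetical excerpt of such a Deployment; real controllers such as ingress-nginx usually expose this as a setting in their Helm chart rather than requiring you to write it by hand.

```yaml
# Excerpt of a Deployment running an Ingress Controller on the host
# network; the image and names are hypothetical placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ingress-controller
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ingress-controller
  template:
    metadata:
      labels:
        app: ingress-controller
    spec:
      hostNetwork: true                   # bind directly to the node's network
      dnsPolicy: ClusterFirstWithHostNet  # keep cluster DNS working with hostNetwork
      containers:
        - name: controller
          image: example/ingress-controller:latest  # placeholder image
          ports:
            - containerPort: 80           # ends up on host port 80
            - containerPort: 443          # ends up on host port 443
```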
Conclusion
Exposing applications to external users is one of the most platform-dependent parts of Kubernetes. This article aimed to show you which Kubernetes objects you'll need for your use case and to point you in the right direction in case your platform doesn't support everything out of the box.