A few ways to expose your Kubernetes service

I have a few dozen services in my homelab Kubernetes cluster. These include this blog, the Harbor container registry, the Silverbullet journalling / knowledge platform, Jellyfin, Changedetection, and many more. As I kept adding applications to my cluster, different groups of services started to emerge and I had to think about the best ways to expose them. Some of these services I only need when I troubleshoot or run tofu apply from my laptop, some only when I am at home, some are needed by other services in the cluster, some need to be exposed to my friends around the world, and some to the public - like this blog.

Port-forwarding via Kubernetes API

Let’s start with the simplest one - an internal service that you need access to intermittently. Vault is a good example. I have terraform code that runs against Vault, and a make job that uses the kubectl port-forward command to get access to it.

It looks like this:

.PHONY: vault-port-forward
vault-port-forward: ## Forward local port 8200 to Vault service
	@$(KUBECTL) port-forward -n vault svc/vault 8200:8200 > /dev/null 2>&1 & \
	PID=$$!; \
	echo $$PID > .vault-port-forward.pid; \
	echo "Started Vault port forwarding (PID: $$PID)"

Running this make command exposes Vault on my localhost via the Kubernetes API. It gets the job done when I need to run a command against the API, or access the UI.
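A matching cleanup target is handy too. Here is a sketch that assumes the .vault-port-forward.pid file written by the job above:

```makefile
.PHONY: vault-port-forward-stop
vault-port-forward-stop: ## Stop the Vault port forwarding started above
	@if [ -f .vault-port-forward.pid ]; then \
		kill $$(cat .vault-port-forward.pid); \
		rm .vault-port-forward.pid; \
		echo "Stopped Vault port forwarding"; \
	fi
```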

Home network defined DNS records

An internal DNS zone is great for accessing services. I use Cilium with an L2Announcement and Gateway combination that allocates IP addresses from a pool, allowing access to cluster services. I use the .internal DNS zone, as it’s reserved for internal use. You may have an external machine with DNS capabilities (dnsmasq or Unbound are both a great fit for this). I have used OPNsense Unbound as well as Unifi DNS policies. You only need one wildcard record - for example, *.svc.internal - pointed at your L2-allocated IP address that routes traffic to the cluster.
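With dnsmasq, for example, that wildcard is a single line (the IP address here is a placeholder for your L2-announced address):

```
# dnsmasq.conf: resolve every *.svc.internal name to the cluster's ingress IP
address=/svc.internal/192.168.1.240
```

In Unbound, the equivalent is a local-zone: "svc.internal." redirect combined with a matching local-data A record.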

This approach does not offer any TLS functionality out of the box, and relies on cluster internals (some PKI-capable deployment, like Vault) to issue certificates.
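For reference, the Cilium side of this setup can be sketched roughly like this - the CRD versions and pool CIDR are assumptions based on recent Cilium releases, so check the docs for your version:

```yaml
apiVersion: cilium.io/v2alpha1
kind: CiliumLoadBalancerIPPool
metadata:
  name: home-pool
spec:
  blocks:
    # small range of LAN IPs handed out to LoadBalancer services
    - cidr: 192.168.1.240/30
---
apiVersion: cilium.io/v2alpha1
kind: CiliumL2AnnouncementPolicy
metadata:
  name: announce-lb
spec:
  # answer ARP for LoadBalancer IPs on the LAN
  loadBalancerIPs: true
```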

Cloudflare tunnel

Cloudflare tunnel is a service that runs inside your cluster. It connects to the Cloudflare infrastructure and authenticates itself via a tunnel key and tunnel ID - that’s how it’s mapped to your account. What I really like about tunnels is that they connect to the Cloudflare server closest to your location. Cloudflare is distributed geographically - there’s a high likelihood an edge location is very close to you. When a service is exposed via Cloudflare, you connect to the Cloudflare edge closest to you (e.g. on your laptop in another country). The request then flows in the following way:

Request (client) -> Cloudflare edge -> Cloudflare backbone -> Cloudflare edge (server side) -> your home server

with response following:

Response (your home server) -> Cloudflare edge (server side) -> Cloudflare backbone -> Cloudflare edge (client side) -> your laptop

This is really fast, because traffic rides the high-speed Cloudflare backbone - which is also the reason Cloudflare does not tolerate streaming services (Plex / Jellyfin) over tunnels. Use it sparingly. I have seen no mentions of Cloudflare blocking accounts because of streaming volumes, but it’s better to stay under the radar. I use it for accessing the Longhorn UI, Vault, Jellyseerr and Argo - services I may need to access remotely and that are essential for my infrastructure, so they have an additional authentication layer via Cloudflare.

There are a few caveats with this setup. The Cloudflare tunnel sits inside your cluster and accesses the internal endpoints of your services. It’s essentially a backdoor, so a proper risk assessment is required. Additionally, it’s easy to shoot yourself in the foot with improper access policies. However, the convenience of this setup is hard to beat. It also has a terraform provider, so setting up services and policies is a breeze.
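For illustration, the in-cluster cloudflared config could look roughly like this - the tunnel ID, hostnames and service addresses are all placeholders:

```yaml
# config.yaml mounted into the cloudflared deployment
tunnel: <tunnel-id>
credentials-file: /etc/cloudflared/creds/credentials.json
ingress:
  # hostname -> in-cluster service mapping
  - hostname: vault.example.com
    service: http://vault.vault.svc:8200
  - hostname: longhorn.example.com
    service: http://longhorn-frontend.longhorn-system.svc:80
  # cloudflared requires a catch-all rule at the end
  - service: http_status:404
```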

Edge instance with reverse proxy

I run a VPS instance in a well-connected datacenter from a known provider (the cheapest instance available) that is close to my home. Having an edge service running in a data center helps with latency and speed when you’re (far) away from home. Data centers are better connected than your home internet. Instead of:

your location -> home connection

you have:

your location -> datacenter -> your home

This way you are more likely to achieve higher speeds. For services that are not running behind Cloudflare (e.g. streaming) I connect via the cloud VPS. The VPS is connected to my home server via tailscale, so I don’t have to open ports on my router.
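Since the VPS sits on the tailnet, Tailscale ACLs can restrict it to reach only the home load balancer. A sketch, where the tag name and IP are illustrative:

```json
{
  "acls": [
    {
      // only the edge VPS may reach the home load balancer, on HTTP(S) ports
      "action": "accept",
      "src": ["tag:edge"],
      "dst": ["192.168.1.240:80,443"]
    }
  ]
}
```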

The edge instance runs a Caddy server. I have a load balancer with a fixed LAN IP address (exposed via Cilium L2Announcement), which my home server makes reachable over tailscale. With ACLs I restrict the edge to access only this IP address, allowing it to route traffic to and from the home load balancer while keeping the attack surface as small as possible.
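The edge Caddyfile stays small. Here is a sketch with a placeholder domain and load balancer IP; Caddy obtains TLS certificates for the domain automatically:

```
# Caddyfile on the edge VPS
jellyfin.example.com {
    # proxy over the tailscale link to the home load balancer
    reverse_proxy 192.168.1.240:80
}
```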

Conclusion

These are some ways to expose your Kubernetes services both internally (at home) and publicly. Pangolin is a self-hosted Cloudflare alternative: fully owned infrastructure for accessing services alleviates most of the worries around Cloudflare, but adds quite a bit of overhead. I haven’t had time to check it out yet.