Hands-On: Kubernetes Ingress

What I Practiced and Learned
In this hands-on, I implemented Kubernetes Ingress using Minikube and NGINX Ingress Controller, debugged common issues, and understood how traffic flows from a domain to pods inside a cluster.
This blog is based purely on practical execution, not theory.
1. Application & Pod Setup
I started with a simple Node.js backend application, deployed via a Kubernetes Deployment with 2 replicas.
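A minimal sketch of that setup — a Deployment plus a ClusterIP Service for the Ingress to target. The names, the image, and container port 3000 are placeholders, not the exact values from my cluster:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: node-backend          # hypothetical name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: node-backend
  template:
    metadata:
      labels:
        app: node-backend
    spec:
      containers:
        - name: node-backend
          image: my-node-app:1.0   # placeholder image
          ports:
            - containerPort: 3000  # assumed app port
---
apiVersion: v1
kind: Service
metadata:
  name: node-backend
spec:
  selector:
    app: node-backend
  ports:
    - port: 80
      targetPort: 3000
```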

2. Creating the Ingress Resource
Next, I created an Ingress resource to expose the application using a custom domain.
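Roughly, the Ingress looked like this — assuming the Service above is named node-backend on port 80, and using the foo.bar.com host that appears later in this post:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: node-backend-ingress   # hypothetical name
spec:
  ingressClassName: nginx
  rules:
    - host: foo.bar.com
      http:
        paths:
          - path: /bar
            pathType: Prefix
            backend:
              service:
                name: node-backend   # assumed Service name
                port:
                  number: 80
```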

Initially, the ingress was created but had no address assigned.
Ingress alone does nothing unless an Ingress Controller is running.
3. Enabling NGINX Ingress Controller in Minikube
Since this was a Minikube cluster, I enabled the NGINX Ingress addon:
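The addon is enabled with a single command, after which you can watch the controller pods come up in the ingress-nginx namespace:

```shell
# Enable the NGINX ingress addon in Minikube
minikube addons enable ingress

# Watch the controller and admission-webhook pods start
kubectl get pods -n ingress-nginx --watch
```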
Minikube deployed:
the ingress-nginx-controller pod
the admission webhook jobs

4. Ingress Gets Address
After enabling ingress, I checked again:
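The re-check is the same kubectl command as before; this time the ADDRESS column should be populated:

```shell
kubectl get ingress
# The ADDRESS column should now show the node IP; compare with:
minikube ip
```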

This time, the ingress received the Minikube node IP.
In cloud clusters → external LoadBalancer
In Minikube → node IP is used
5. Mapping Domain Using /etc/hosts
Since this is a local Kubernetes cluster, DNS must be mapped manually.
I added the entry:
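One way to append the mapping (the IP must match your own `minikube ip` output, so the snippet below resolves it dynamically rather than hard-coding it):

```shell
# Map the demo host to the Minikube node IP in /etc/hosts
echo "$(minikube ip) foo.bar.com" | sudo tee -a /etc/hosts
```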

This allows requests like foo.bar.com to reach the Minikube cluster.
6. Initial Routing Error
When I first tested: curl foo.bar.com/bar/users
I received: {"error":"Route not found"}

Reason: the Ingress forwarded the path /bar/users to the backend unchanged, but the backend only served /users.
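The usual fix (and what the lessons at the end of this post point to) is a regex path plus the rewrite-target annotation, so NGINX strips the /bar prefix before proxying to the backend. The Service name and port below are assumptions:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: node-backend-ingress   # hypothetical name
  annotations:
    nginx.ingress.kubernetes.io/use-regex: "true"
    # $2 is the second capture group below, i.e. everything after /bar
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  ingressClassName: nginx
  rules:
    - host: foo.bar.com
      http:
        paths:
          - path: /bar(/|$)(.*)
            pathType: ImplementationSpecific   # Prefix does not allow regex
            backend:
              service:
                name: node-backend   # assumed Service name
                port:
                  number: 80
```

With this in place, a request to foo.bar.com/bar/users reaches the backend as /users.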
7. Verifying Ingress Controller Health
Finally, I verified the ingress controller pod:
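The check uses standard kubectl commands against the addon's namespace; ingress-nginx-controller is the deployment name the Minikube addon creates:

```shell
kubectl get pods -n ingress-nginx
# Tail the controller logs to see config reloads and webhook/cert activity
kubectl logs -n ingress-nginx deploy/ingress-nginx-controller --tail=50
```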

This confirmed:
Controller is running
NGINX reloads on config change
Webhook and certs working correctly
Final Working Flow
At the end:
curl foo.bar.com/bar
curl foo.bar.com/bar/users
Both worked successfully through Ingress → Service → Pod.
From this I understood:
Ingress does not expose apps by itself
An Ingress Controller is mandatory
pathType: Prefix ❌ does NOT support regex
Use pathType: ImplementationSpecific with the use-regex annotation
Minikube ingress uses the node IP, not a LoadBalancer
/etc/hosts is required for local domains