A Deep Dive into Kubernetes Traffic Flow

Omnath Ganapure

Today I focused on the practical application of Kubernetes Services. While Deployments manage our pods, Services are the "secret sauce" that makes those pods reachable, stable, and scalable.

1. The Core Problem: Ephemeral Pods

In Kubernetes, pods are ephemeral. When a pod is deleted and recreated (auto-healing), it receives a new, dynamically assigned IP address. If you connect directly to a pod's IP, your connection breaks as soon as that pod restarts.
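
You can watch this happen directly with kubectl; a quick sketch (the pod name is whatever your Deployment generated):

```
kubectl get pods -o wide        # note the pod's current IP
kubectl delete pod <pod-name>   # the Deployment recreates it immediately
kubectl get pods -o wide        # the replacement pod has a new IP
```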

2. The Solution: Service Discovery & Labels

Kubernetes Services solve this by not relying on IP addresses. Instead, they use Labels and Selectors, as the sketch after this list shows.

  • Labels: A "tag" or "stamp" you put on a pod (e.g., app: sample-python-app).

  • Selectors: The Service is configured with a selector that matches that label.

  • Discovery: Even if a pod’s IP changes, the Service automatically discovers it because the label remains the same.
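
A minimal sketch of that wiring, reusing the app: sample-python-app label from above (the Service name and ports are placeholders):

```
# A Service that selects pods by label, never by IP:
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: sample-python-app
spec:
  selector:
    app: sample-python-app   # matches the pod label, not a pod IP
  ports:
    - port: 80         # port the Service listens on
      targetPort: 8000 # port the container serves on
EOF

# The Service tracks whichever pods currently carry the label:
kubectl get endpoints sample-python-app
```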

3. Three Ways to Expose Your App

  • ClusterIP (Default): The app is only accessible internally within the Kubernetes cluster.

  • NodePort: The app is exposed on the Worker Node’s IP address on a specific port (e.g., 30007). This is ideal for sharing the app within your organization’s internal network.

  • LoadBalancer: This creates a public IP address through a cloud provider (like AWS or Azure). The Cloud Controller Manager provisions an external load balancer so anyone on the internet can access your app (see the sketch below).
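
All three are just different values of the Service's spec.type field; a quick sketch using kubectl patch (the Service name is the placeholder from earlier):

```
# ClusterIP is the default; switch the exposure mode by patching spec.type:
kubectl patch svc sample-python-app -p '{"spec":{"type":"NodePort"}}'
kubectl patch svc sample-python-app -p '{"spec":{"type":"LoadBalancer"}}'

# With LoadBalancer, EXTERNAL-IP stays <pending> until the cloud
# controller manager provisions one:
kubectl get svc sample-python-app
```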

Hands-On: Verifying Kubernetes Service Load Balancing Using Kubeshark

I verified how Kubernetes Services distribute traffic across multiple pod replicas using NodePort, curl, and Kubeshark.
The goal was to understand why some requests appear load-balanced while others do not, using real network traffic instead of assumptions.

1) I deployed a Node.js application with 2 replicas behind a NodePort Service.

Both pods were in Running state and had different Pod IPs:

  • 10.244.0.36

  • 10.244.0.37

The service configuration:

  • Service Type: NodePort

  • NodePort: 30007

  • Target Port: 3000

This confirmed that the Service was configured to route traffic to both pods; the full setup is sketched below.
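
A sketch of that setup (the image and names are assumptions; the replica count and ports match the run above):

```
cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: node-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: node-app
  template:
    metadata:
      labels:
        app: node-app
    spec:
      containers:
        - name: node-app
          image: node-app:latest   # placeholder image
          ports:
            - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: node-app
spec:
  type: NodePort
  selector:
    app: node-app
  ports:
    - port: 3000
      targetPort: 3000
      nodePort: 30007
EOF

# The pod IPs (10.244.0.36 / 10.244.0.37 in this run) are visible with:
kubectl get pods -o wide
```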

2) Generating Traffic using curl

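The traffic itself was plain curl against the NodePort; a sketch of the loop (the node-IP lookup assumes a single-node cluster, and /users is the app's endpoint):

```
NODE_IP=$(kubectl get nodes -o jsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}')

# Fire a burst of requests and print only the HTTP status codes:
for i in $(seq 1 10); do
  curl -s -o /dev/null -w "%{http_code}\n" "http://$NODE_IP:30007/users"
done
```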

All responses returned 200 OK, confirming the application was healthy.

However, this alone does not prove load balancing, so I moved on to traffic inspection.

3) Observing Traffic using Service Map

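Kubeshark taps the cluster's network traffic and renders it as a live Service Map; starting a capture is a one-liner (the pod-name pattern is illustrative):

```
# Capture traffic for the app's pods and open the Kubeshark dashboard:
kubeshark tap "node-app.*"
```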

In the Service Map view, I observed traffic flowing from the Service IP (10.244.0.1) to:

  • CoreDNS

  • Only one of the application pods initially

At first glance, it looked like traffic was not evenly distributed.

But after a few more consecutive requests, traffic began reaching both pods.

4) Why Traffic Appeared Uneven

Kubernetes Services:

  • Use iptables/IPVS rules programmed by kube-proxy

  • Load balance based on connection hashing

  • Do not round-robin every individual request

Since kube-probe (the kubelet's health-check traffic):

  • Uses stable, long-lived connections

  • Always comes from the same source

  • Reuses ports

its traffic sticks to the same pod, as the inspection sketch below shows.
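
You can see the per-connection choice in the rules kube-proxy installs; a sketch for iptables mode (the KUBE-SVC chain name is a per-cluster hash, shown here as a placeholder):

```
# On a node, list the Service rules kube-proxy maintains:
sudo iptables -t nat -L KUBE-SERVICES -n | grep node-app

# The matching KUBE-SVC-... chain spreads *connections* across backends
# using "statistic mode random probability" rules, one per pod:
sudo iptables -t nat -L KUBE-SVC-XXXXXXXXXXXXXXXX -n
```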

Final Conclusions

  • Kubernetes Service load balancing is connection-based, not request-based

  • Health check traffic (/health) comes from kubelet and sticks to one pod

  • External client traffic (/users) distributes correctly

  • Kubeshark provides packet-level visibility, making it an excellent debugging tool
