Securing Kubernetes: The Network Policy Reality
Published in January 2026

TL;DR: How does the Kubernetes community approach network policies? We asked 530 practitioners. 83% use network policies in some form, but 60% struggle with understanding traffic flows. Observability tools lead validation strategies at 42%, while many still discover issues through production incidents or auditor findings.
By default, Kubernetes allows any pod to communicate with any other pod in the cluster—a design decision that simplifies initial setup but creates security challenges at scale.
Network policies offer a solution, acting as firewall rules that control ingress and egress traffic between workloads.
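To make this concrete, here is a minimal sketch of the pattern: a default-deny policy followed by a policy that re-opens one specific path. The namespace, labels, and port are hypothetical placeholders, not from the survey.

```yaml
# Hypothetical example: deny all ingress to every pod in the
# "shop" namespace (an empty podSelector matches all pods).
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: shop
spec:
  podSelector: {}
  policyTypes:
    - Ingress
---
# Re-allow traffic to the checkout pods, but only from frontend pods.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-checkout
  namespace: shop
spec:
  podSelector:
    matchLabels:
      app: checkout
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8080
```

Policies are additive: once any policy selects a pod, only explicitly allowed traffic gets through, which is why the deny-all and allow rules are typically deployed together.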
Disclosure: Tigera sponsored this research. They provided the overarching theme but had no input on the actual questions or analysis.
Despite being available since Kubernetes 1.7, adoption patterns vary significantly across organizations.
This survey aimed to understand how practitioners approach network policies, including who uses them, the challenges they face, and how they validate their configurations before deploying them to production.
The data shows widespread adoption of network policies, yet visibility, debugging, and validation remain persistent challenges.
Do You Use Network Policies in Your Kubernetes Clusters?
The first poll asked a straightforward question about adoption. The options ranged from comprehensive adoption to selective use cases.
The data shows that 83% of respondents use network policies in some form, with the largest group applying them comprehensively across their clusters.
The distinction between “everywhere” and “ingress/egress only” reflects different security postures: some teams prioritize controlling traffic at cluster boundaries while others extend controls to internal service-to-service communication.
The 17% who selected “Other” sparked discussion in the comments.
Gergely R highlighted a gap in the poll: “Missing the option you said many use: avoid entirely. That’s the only valid choice, since Kubernetes isn’t multi-tenant.”
This perspective represents teams who have consciously decided that the operational complexity of network policies outweighs their security benefits, particularly in single-tenant environments where the blast radius of any compromise is already contained.
Guillermo Q highlighted the readability problem: “From a security perspective, it’s a mess. Understanding network policies and what can connect is tough.”
Understanding how policies interact across namespaces was a common challenge throughout the survey.
What this means for you:
Network policies have reached mainstream adoption among security-conscious teams, but the “completeness” of implementation varies significantly.
Organizations often start with ingress/egress controls and expand to east-west traffic as their security posture matures.
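A boundary-first posture might look like the following sketch: lock down egress for a namespace while still permitting DNS and one internal dependency. The namespace names are hypothetical, and the DNS rule assumes cluster DNS runs in kube-system.

```yaml
# Hypothetical boundary-first policy: pods in "payments" may only
# reach cluster DNS and the "billing" namespace; all other egress,
# including to the public internet, is denied.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restrict-egress
  namespace: payments
spec:
  podSelector: {}
  policyTypes:
    - Egress
  egress:
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
      ports:
        - protocol: UDP
          port: 53
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: billing
```

Expanding to east-west control then means adding pod-level selectors and per-service ingress rules on top of this namespace-wide baseline.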
If you’re not using network policies, you’re in a shrinking minority, although the decision to avoid them entirely can be valid depending on your threat model and operational capacity.

What’s Your Biggest Network Policy Challenge?
The second poll explored what practitioners find most difficult about network policies.
On the KubeFM podcast, Ori Shoshan described a key design issue: “Network policy direction is inverted. It’s easier to say ‘I will call these services’ than as a server to list all services that will call me.”
Sixty percent of respondents identified traffic flow understanding as their biggest challenge.
This aligns with the inverted model Ori described: when you need to connect to another team’s service, you cannot simply update your own configuration.
You need them to modify their network policy to allow you in.
At scale, this creates coordination overhead that compounds with each new service dependency.
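The inversion Ori describes is visible in the policy itself: the allow-list lives on the server side, so every new caller requires an edit by the serving team. A hypothetical sketch (namespace and label names are illustrative):

```yaml
# The "orders" team cannot grant itself access to "inventory":
# this ingress policy lives in the inventory namespace and must be
# edited by the inventory team each time a new caller appears.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: inventory-allowed-callers
  namespace: inventory
spec:
  podSelector:
    matchLabels:
      app: inventory-api
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: orders
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: shipping
      ports:
        - protocol: TCP
          port: 8080
```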
The remaining 40% split evenly between visibility (“can’t see what’s blocked”) and testing (“no safe testing method”).
Without traffic visibility, testing policies becomes a matter of guesswork.
And without safe testing environments, teams resort to trial and error in production—or avoid network policies altogether.
What this means for you:
If your team struggles with network policies, you’re experiencing a well-documented design limitation rather than a skills gap.
The challenge isn’t writing policies; it’s understanding the cumulative effect of policies across services owned by different teams.
Tools that visualize traffic flows and policy interactions can be helpful, but the fundamental coordination problem persists.
How Do You Validate Network Policies?
The third poll asked how teams validate network policies before trusting them in production.
During a KubeFM episode, Jen shared: "A customer spent weeks debugging policies. They’d opened all the obvious ports, like 5432 for Postgres and 6379 for Redis. The application still wasn’t working."
The issue turned out to be a documented but easily missed port pool for inter-pod communication.
So how does this work for the rest of the community?
Observability tools lead as the primary validation method, used by 42% of respondents.
This reflects a shift toward understanding what traffic actually flows before defining what should be allowed, which reduces the risk of blocking legitimate traffic.
Manual testing remains common at 29%, suggesting many teams still rely on deploying policies and verifying application behavior through direct testing.
While time-consuming, this approach provides high confidence when done thoroughly.
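Manual testing often amounts to launching a short-lived probe pod and checking whether a connection succeeds. A minimal sketch, with a hypothetical service name and port; note the labels, since policies select on them:

```yaml
# Throwaway probe pod: the container exits 0 if the connection is
# allowed and non-zero if it is blocked (by a policy or anything else).
apiVersion: v1
kind: Pod
metadata:
  name: connectivity-probe
  namespace: orders
  labels:
    app: frontend   # match the labels your policies select on
spec:
  restartPolicy: Never
  containers:
    - name: probe
      image: busybox:1.36
      command: ["sh", "-c", "nc -z -w 2 checkout.shop.svc.cluster.local 8080"]
```

After applying the manifest, the pod’s terminal phase (Succeeded vs. Failed) tells you whether the path is open; repeating this for every service pair is exactly the time-consuming part respondents describe.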
Separate test clusters, used by 22.5%, offer isolation for policy experimentation but come with their own challenges.
As one Twitter respondent noted, “Maintaining a separate cluster is hard due to cost. Tools with automated testing will be preferred.”
The comments revealed a more candid reality.
Vincent said, "My auditors let me know," a comment that drew multiple laugh reactions.
His follow-up was telling: "My CTO often asks if the site is down after I apply network policies."
These comments reflect a common pattern in which network policy validation occurs reactively, through incident reports or compliance audits, rather than proactively, through systematic testing.
Douglas H mentioned attempting to use illuminatio for validation, only to discover it was archived in 2023.
The tool was originally designed to test CNI plugin conformance against the Network Policy API, highlighting how even purpose-built validation tools struggle to maintain relevance in this space.
What this means for you:
While observability-first approaches lower risk, many organizations still find policy issues through audits or incidents.
If you’re relying on “deploy and see what breaks,” you’re not alone—but investing in traffic flow visibility before policy enforcement can prevent outages that erode trust in the technology.
Platform Distribution and Engagement
LinkedIn generated the majority of responses at 70.8% (375 responses), followed by Telegram at 20.6% (109 responses), Twitter at 6.8% (36 responses), and Mastodon at 1.9% (10 responses).
Engagement varied significantly across polls.
The adoption question attracted 305 responses, while the challenges poll drew only 65, a pattern that suggests practitioners are more comfortable sharing what they use than discussing operational difficulties.
The validation poll recovered to 160 responses, indicating stronger interest in practical how-to guidance.
The substantive discussion primarily took place on Telegram, where Vincent’s comments about auditors and CTO reactions sparked recognition from others facing similar dynamics.
LinkedIn comments tended toward tool recommendations and technical clarifications, while Twitter and Mastodon had minimal discussion despite their combined 46 responses.
Summary
Network policies are widely adopted among security-focused Kubernetes teams: 83% of respondents use them in some form, though the completeness of implementation varies significantly.
Operational challenges remain: 60% struggle to understand traffic flows and often validate reactively through audits or incidents, rather than through proactive testing.
The community’s responses reveal a technology that sits uncomfortably between security requirements and operational practicality.
Teams adopt network policies because compliance demands it or because defense-in-depth principles recommend it, but the day-to-day experience involves coordination overhead, visibility gaps, and validation uncertainty.
Observability tools offer the most promising path forward, allowing teams to understand actual traffic patterns before defining restrictive policies.
For organizations beginning their network policy journey, the key takeaway is to start with visibility only—rather than immediate enforcement—to avoid triggering “site is down” incidents that could undermine confidence in the technology.
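One way to get visibility before enforcement is a staged policy. Calico (from Tigera, the research sponsor) offers a StagedNetworkPolicy resource that reports which flows a policy would deny without actually blocking them; this is a Calico CRD rather than core Kubernetes, and its availability may depend on your Calico edition, so treat this as an illustrative sketch:

```yaml
# A staged (non-enforcing) default-deny: Calico logs which flows
# *would* be denied, letting you review impact before promoting
# the policy to an enforced NetworkPolicy.
apiVersion: projectcalico.org/v3
kind: StagedNetworkPolicy
metadata:
  name: staged-default-deny
  namespace: shop
spec:
  selector: all()
  types:
    - Ingress
```

Once the reported denials match expectations, the same rules can be promoted to an enforced policy with far less risk of a "site is down" moment.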
