
Network Policies Deep Dive

Network policies are one of the tools we can use for controlling traffic within Kubernetes clusters. They allow us to allow or block traffic using pod and namespace selectors together with L3 and L4 identifiers (IP blocks, protocols, and ports). To enforce network policies, a Kubernetes cluster requires a CNI that supports network policies to be installed. Some popular options are Calico and Cilium.

Closer look at a network policy

Let's take a look at an example showing a network policy allowing traffic:

  • From pods labeled app:backend in namespaces labeled env:production.
  • To pods labeled app:db in the namespace production-db.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-production-backend
  namespace: production-db # [Target filter] applies to pods in this namespace
spec:
  podSelector:
    matchLabels:
      app: db # [Target filter] applies to pods with this label
  policyTypes:
    - Ingress # [Direction] implemented as a filter on incoming connections
  ingress:
    - from:
        # namespaceSelector and podSelector in the same entry: both must match
        - namespaceSelector:
            matchLabels:
              env: production # [Source filter] applies to namespaces with this label
          podSelector:
            matchLabels:
              app: backend # [Source filter] applies to pods with this label
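
Assuming the manifest above is saved locally as allow-production-backend.yaml (the filename is just an example), you could apply it and then inspect how Kubernetes interprets it with kubectl:

kubectl apply -f allow-production-backend.yaml
kubectl describe networkpolicy allow-production-backend -n production-db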

Setting security scope via default network policies

Two common approaches for working with network policies are:

  • Allow all traffic between pods by default, and protect specific pods by applying ingress network policies to them.
  • Block all traffic between pods, except traffic explicitly allowed by network policies.

You can combine both approaches within the same cluster, for example by choosing a default (allow or block) per namespace and applying network policies accordingly.

Default deny network policy

To block all traffic within a namespace, you can apply a default deny network policy to that namespace. After applying this policy, only traffic allowed by network policies, such as those generated automatically from ClientIntents, will be allowed within that namespace.

Such a policy only blocks ingress traffic, so it does not prevent pods from connecting to the Internet. Enforcement is performed at the ingress to pods. This is appropriate for achieving in-cluster service-to-service zero trust and network segmentation within a namespace.

Here is such a policy that you can apply to any namespace:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
spec:
  podSelector: {}
  policyTypes:
    - Ingress
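
Because the manifest does not specify a namespace, you can apply the same file to whichever namespace you want to protect. For example, assuming it is saved as default-deny-ingress.yaml (filename chosen here for illustration):

kubectl apply -n <your-namespace> -f default-deny-ingress.yaml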

Global ingress default deny policy with Calico

If using Calico as your CNI for network policies, you can deploy a global (cluster-wide, across multiple namespaces), ingress-type default deny network policy. After applying this policy, only traffic allowed by network policies, such as those generated automatically from ClientIntents, will be allowed throughout the cluster.

It's important to exempt Otterize, Calico, and the Kubernetes infrastructure itself from this policy, so they can function correctly by receiving calls from the Kubernetes API server.

Such a policy only blocks ingress traffic, so it does not prevent pods from connecting to the Internet. Enforcement is performed at the ingress to pods. This is appropriate for achieving in-cluster service-to-service zero trust and network segmentation across an entire cluster.

apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: global-deny-ingress
spec:
  # Exclude kube-system, calico-system, calico-apiserver and otterize-system
  namespaceSelector: has(projectcalico.org/name) && projectcalico.org/name not in {"kube-system", "calico-system", "calico-apiserver", "otterize-system"}
  types:
    - Ingress # Ingress-only
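
Note that GlobalNetworkPolicy is a projectcalico.org/v3 resource, so it is applied with calicoctl (or with kubectl, if the Calico API server is installed in your cluster). Assuming the manifest above is saved as global-deny-ingress.yaml:

calicoctl apply -f global-deny-ingress.yaml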

Tip: Using a global egress default deny policy with Calico

If using Calico for network policies, you might choose to deploy a global (cluster-wide, across multiple namespaces) default egress deny network policy, in order to restrict egress traffic going out of the cluster. If so, it is important to exempt Otterize and Calico from this policy so that they are able to function correctly. Also, note that Otterize-managed network policies only apply to traffic within the cluster.

To work well with an ingress default deny policy, an egress policy should not block traffic within the cluster: the global default deny ingress policy will already block all in-cluster traffic by default, and ClientIntents will then allow the intended traffic.

Here is an example of how you can achieve this by explicitly allowlisting the in-cluster pod and VPC CIDR ranges. You will need to substitute the CIDRs used by your cluster; these are usually part of your CNI or cluster configuration.

apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: default-deny-egress
spec:
  namespaceSelector: has(projectcalico.org/name) && projectcalico.org/name not in {"kube-system", "calico-system", "calico-apiserver"}
  types:
    - Egress
  egress:
    # Allow all namespaces to communicate to DNS pods
    - action: Allow
      protocol: UDP
      destination:
        selector: 'k8s-app == "kube-dns"'
        ports: ["53"]
    # Allow all namespaces to talk to the internet on port 443
    - action: Allow
      protocol: TCP
      destination:
        nets: ["0.0.0.0/0"]
        ports: ["443"]
    # Allow all namespaces to talk on egress to any port inside the cluster or in the VPC.
    # Effectively, ingress default deny would still block any intra-cluster traffic
    # not explicitly allowed by a dedicated network policy.
    - action: Allow
      protocol: TCP
      destination:
        # update these IP addresses to match your pods
        nets: ["10.0.0.0/16", "172.20.0.0/16"]

Working with third-party Helm charts and a default deny policy

After applying a default deny policy, you will need network policies for intended traffic to go through. The Otterize network mapper generates ClientIntents for your cluster, but we recommend you use ClientIntents for services that are functional clients, rather than purely operational infrastructure (e.g. checkoutservice is a client, while Prometheus is purely operational infrastructure). So you will need to enable traffic to/from such infrastructure outside of Otterize.
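For example, here is a minimal sketch of an explicit ingress policy that allows a Prometheus instance to scrape all pods in a namespace. The monitoring namespace name and the Prometheus pod label used below are assumptions and will vary between installations; check the labels used by your own monitoring stack:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-prometheus-scraping
spec:
  podSelector: {} # all pods in whichever namespace this policy is applied to
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: monitoring # assumption: Prometheus runs in "monitoring"
          podSelector:
            matchLabels:
              app.kubernetes.io/name: prometheus # assumption: the label on your Prometheus pods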

Many third-party Helm charts come with a ready-made network policy to use when deployed into a cluster that uses a default deny policy. Simply enable them using the values found in the chart.
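The exact value differs per chart, but many charts expose something along the lines of the following override (the key name here is an assumption; consult the chart's own values.yaml):

# values override for a third-party chart; key name varies per chart
networkPolicy:
  enabled: true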

tip

We recommend using these explicit network policies rather than ClientIntents, as this relieves the infrastructure or platform team in your organization from maintaining ClientIntents for infrastructure services: instead, the Helm chart will always have the most accurate infrastructural network policies.

Auto-generating network policies for external traffic

By default, the intents operator automatically generates network policies for Kubernetes Services of type LoadBalancer and NodePort, as well as for Ingress traffic, whenever an intent would generate a network policy that could block that external traffic. To disable this feature, consult the documentation for the intents operator.

Let's look at an example from our demo. We have a frontend service being accessed from multiple sources:

  • loadgenerator calls it from within the cluster to generate traffic, by accessing the frontend ClusterIP Service.
  • frontend-external is a Service with type LoadBalancer directing traffic from outside the cluster to the frontend pods. The LoadBalancer type means that a cloud provider load balancer will be created and point traffic from the Internet to these pods. A sketch of such a Service is shown below.
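
For reference, here is a minimal sketch of what such a frontend-external Service could look like; the port numbers and selector label are assumptions based on the description above, not the demo's exact manifest:

apiVersion: v1
kind: Service
metadata:
  name: frontend-external
spec:
  type: LoadBalancer # a cloud provider load balancer is provisioned for this Service
  selector:
    app: frontend # assumption: the label carried by the frontend pods
  ports:
    - port: 80 # assumption: external HTTP port
      targetPort: 8080 # assumption: the frontend container port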

By applying the following intents file:

apiVersion: k8s.otterize.com/v1alpha3
kind: ClientIntents
metadata:
  name: loadgenerator
spec:
  service:
    name: loadgenerator
  calls:
    - name: frontend

Otterize will generate a network policy allowing access from the loadgenerator service to the frontend service. Once any network policy selects a pod, all traffic to that pod not explicitly allowed by some network policy gets blocked. In our case, that means the frontend-external LoadBalancer would no longer be able to communicate with frontend.

To overcome this, Otterize will automatically generate a network policy to allow traffic from frontend-external to frontend by relying on the existence of the LoadBalancer Service as an indicator of intent between the two.

Why doesn't Otterize always generate network policies for ingress types? Because if no network policies exist, automatically generating a network policy to allow frontend-external -> frontend would block existing traffic like loadgenerator -> frontend.

How intents translate to network policies

Let's follow an example scenario and track how Otterize configures network policies when we apply intents.

Deploy example

Our example consists of two pods: an HTTP server and a client that calls it.

The example manifests create the otterize-tutorial-npol namespace along with the client and server deployments (a rough sketch of those workloads follows below). The namespace itself is declared as:

apiVersion: v1
kind: Namespace
metadata:
  name: otterize-tutorial-npol
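
Here is a minimal sketch of what the client and server workloads could look like. The images and the client command are illustrative placeholders, not the tutorial's actual manifests; the authoritative definitions are in the all.yaml file applied in step 1 below:

# Sketch only: images and commands are illustrative assumptions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: server
  namespace: otterize-tutorial-npol
spec:
  replicas: 1
  selector:
    matchLabels:
      app: server
  template:
    metadata:
      labels:
        app: server
    spec:
      containers:
        - name: server
          image: nginx # any HTTP server image works for the sketch
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: server
  namespace: otterize-tutorial-npol
spec:
  selector:
    app: server
  ports:
    - port: 80
      targetPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: client
  namespace: otterize-tutorial-npol
spec:
  replicas: 1
  selector:
    matchLabels:
      app: client
  template:
    metadata:
      labels:
        app: client
    spec:
      containers:
        - name: client
          image: curlimages/curl # assumption: a simple curl-based client
          command: ["sh", "-c", "while true; do curl -s http://server; sleep 2; done"]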
  1. Deploy the client and server using kubectl:

    kubectl apply -f https://docs.otterize.com/code-examples/automate-network-policies/all.yaml
  2. Check that the client and server pods were deployed:

    kubectl get pods -n otterize-tutorial-npol

    You should see:

    NAME                      READY   STATUS    RESTARTS   AGE
    client-5689997b5c-grlnt   1/1     Running   0          35s
    server-6698c58cbc-v9n9b   1/1     Running   0          34s
  3. The client intents to call the server are declared with this intents.yaml file:

apiVersion: k8s.otterize.com/v1alpha3
kind: ClientIntents
metadata:
  name: client
  namespace: otterize-tutorial-npol
spec:
  service:
    name: client
  calls:
    - name: server

Let's apply it:

kubectl apply -f https://docs.otterize.com/code-examples/automate-network-policies/intents.yaml

Track artifacts

After applying the intents file, Otterize performs multiple actions to allow access from the client to the server using network policies:

  • Create a network policy allowing traffic from pods carrying the client access label, in namespaces carrying the namespace-name label, to pods carrying the server label
  • Label the client pods
  • Label the client pod namespaces
  • Label the server pods
  1. Let's look at the generated network policy:

    kubectl describe networkpolicies -n otterize-tutorial-npol access-to-server-from-otterize-tutorial-npol

    You should see (without the comments):

    Name:         access-to-server-from-otterize-tutorial-npol
    # [Target filter] namespace
    Namespace:    otterize-tutorial-npol
    Created on:   2022-09-08 19:12:24 +0300 IDT
    Labels:       intents.otterize.com/network-policy=true
    Annotations:  <none>
    Spec:
      # [Target filter] pods with this label
      PodSelector:     intents.otterize.com/server=server-otterize-tutorial-np-7e16db
      Allowing ingress traffic:
        To Port: <any> (traffic allowed to all ports)
        From:
          # [Source filter] namespaces with this label
          NamespaceSelector: intents.otterize.com/namespace-name=otterize-tutorial-npol
          # [Source filter] pods with this label
          PodSelector: intents.otterize.com/access-server-otterize-tutorial-np-7e16db=true
      Not affecting egress traffic
      # [Direction]
      Policy Types: Ingress
  2. And we can also see that the client and server pods are now labeled:

    kubectl get pods -n otterize-tutorial-npol --show-labels

    You should see:

    NAME                      READY   STATUS    RESTARTS   AGE     LABELS
    client-5cb67b748-l25vg    1/1     Running   0          7m57s   intents.otterize.com/access-server-otterize-tutorial-np-7e16db=true,intents.otterize.com/client=true,intents.otterize.com/server=client-otterize-tutorial-np-699302,pod-template-hash=5cb67b748,credentials-operator.otterize.com/service-name=client
    server-564b56f596-54str   1/1     Running   0          7m56s   intents.otterize.com/server=server-otterize-tutorial-np-7e16db,pod-template-hash=564b56f596,credentials-operator.otterize.com/service-name=server

    When we break down the label structure we can see:

  • For the server - intents.otterize.com/server=server-otterize-tutorial-np-7e16db
    • intents.otterize.com/server - Label prefix for servers
    • server - Server pod name
    • otterize-tutorial-np - Server pod namespace (might be truncated)
    • 7e16db - Hash of the server pod name and namespace
  • For the client - intents.otterize.com/access-server-otterize-tutorial-np-7e16db=true
    • intents.otterize.com/access - Label prefix for clients
    • server - Server pod name
    • otterize-tutorial-np - Server pod namespace (might be truncated)
    • 7e16db - Hash of the server pod name and namespace
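
To double-check which pods carry a given Otterize-managed label, you can query by that label directly, using the label value from the output above:

    kubectl get pods -n otterize-tutorial-npol -l intents.otterize.com/server=server-otterize-tutorial-np-7e16db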
  3. Finally, let's look at the namespace label with:

    kubectl get namespace otterize-tutorial-npol --show-labels

    You should see:

    NAME                     STATUS   AGE   LABELS
    otterize-tutorial-npol   Active   36s   intents.otterize.com/namespace-name=otterize-tutorial-npol,kubernetes.io/metadata.name=otterize-tutorial-npol

    With the new label added by Otterize: intents.otterize.com/namespace-name=otterize-tutorial-npol