Network policies deep dive

Network policies are one of the tools we can use for traffic shaping within Kubernetes clusters. They allow us to shape traffic using selectors, policies, and L3 and L4 identifiers. To enforce network policies, the cluster must have a CNI plugin installed that supports them; popular options include Calico and Cilium.
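
If you are not sure whether your cluster's CNI enforces network policies, one quick check is to look for the pods of a policy-capable CNI. The label selectors below are the defaults used by common Calico and Cilium installations and may differ depending on how the CNI was installed:

kubectl get pods -n kube-system -l k8s-app=calico-node   # Calico node agents, if installed
kubectl get pods -n kube-system -l k8s-app=cilium        # Cilium agents, if installed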

Closer look at a network policy

Let's take a look at an example showing a network policy allowing traffic:

  • From pods labeled app:backend in namespaces labeled env:production.
  • To pods labeled app:db in the namespace production-db.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-production-backend
  namespace: production-db # [Target filter] applies to pods in this namespace
spec:
  podSelector:
    matchLabels:
      app: db # [Target filter] applies to pods with this label
  policyTypes:
    - Ingress # [Direction] implemented as a filter on incoming connections
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              env: production # [Source filter] applies to namespaces with this label
          podSelector:
            matchLabels:
              app: backend # [Source filter] applies to pods with this label
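
Assuming the policy above is saved to a file (the filename below is just for illustration), you can apply it and inspect the result with:

kubectl apply -f allow-production-backend.yaml
kubectl describe networkpolicy allow-production-backend -n production-db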

Setting security scope via default network policies

Two common approaches for working with network policies are:

  • Allow all traffic between pods by default, and protect specific pods by applying ingress network policies to them.
  • Block all traffic between pods by default, and allow only the traffic explicitly permitted by network policies.

You can combine both approaches within the same cluster, e.g. by applying default-deny network policies only in certain namespaces.

Default deny network policy

To block all traffic within a namespace (e.g. production), you can apply a default deny network policy like the following example:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: production
spec:
  podSelector: {}
  policyTypes:
    - Ingress
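
The empty podSelector matches every pod in the production namespace, so all ingress traffic to those pods is denied unless another policy explicitly allows it. If you also want to deny outgoing traffic by default, a variant (a sketch, not part of the original example) adds Egress to policyTypes:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: production
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress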

Auto-generating network policies for external traffic

By default, the intents operator automatically generates network policies for Kubernetes Services (of type LoadBalancer and NodePort) and for Ingress traffic, whenever an intent would generate a network policy that could block that external traffic. To disable this feature, consult the documentation for the intents operator.

Let's look at an example from our demo. We have a frontend service being accessed from multiple sources:

  • loadgenerator calls it from within the cluster to generate traffic, by accessing the frontend ClusterIP Service.
  • frontend-external is a Service of type LoadBalancer directing traffic from outside the cluster to the frontend pods. The LoadBalancer type means that a cloud provider load balancer will be created to direct traffic from the Internet to these pods.

By applying the following intents file:

apiVersion: k8s.otterize.com/v1alpha2
kind: ClientIntents
metadata:
  name: loadgenerator
spec:
  service:
    name: loadgenerator
  calls:
    - name: frontend

Otterize will generate a network policy allowing access from the loadgenerator service to the frontend service. Once at least one network policy selects a pod, any traffic to that pod not allowed by an existing network policy is blocked. In our case, that means the frontend-external LoadBalancer would no longer be able to communicate with frontend.

To overcome this, Otterize automatically generates a network policy allowing traffic from frontend-external to frontend, relying on the existence of the LoadBalancer Service as an indicator of intent between the two.

Why doesn't Otterize always generate network policies for ingress types? Because if no network policies exist yet, automatically generating a policy to allow frontend-external -> frontend would select the frontend pods and, as a side effect, block existing traffic such as loadgenerator -> frontend.
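
For intuition, a policy that allows this external traffic has roughly the following shape. This is an illustrative sketch only, not the exact policy Otterize generates: the name, namespace (otterize-ecom-demo), and pod label (app: frontend) are assumptions made for the example:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: external-access-to-frontend # hypothetical name
  namespace: otterize-ecom-demo # hypothetical demo namespace
spec:
  podSelector:
    matchLabels:
      app: frontend # hypothetical label on the frontend pods
  policyTypes:
    - Ingress
  ingress:
    - {} # an empty rule allows ingress from any source, so load balancer traffic can reach the pods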

How intents translate to network policies

Let's follow an example scenario and track how Otterize configures network policies when we apply intents.

Deploy example

Our example consists of two pods: an HTTP server and a client that calls it.

The example YAML files (applied in the next step) include the following namespace definition:

apiVersion: v1
kind: Namespace
metadata:
  name: otterize-tutorial-npol
  1. Deploy the client and server using kubectl:

    kubectl apply -f https://docs.otterize.com/code-examples/automate-network-policies/all.yaml
  2. Check that the client and server pods were deployed:

    kubectl get pods -n otterize-tutorial-npol

    You should see:

    NAME                      READY   STATUS    RESTARTS   AGE
    client-5689997b5c-grlnt   1/1     Running   0          35s
    server-6698c58cbc-v9n9b   1/1     Running   0          34s
  3. The client's intents to call the server are declared in this intents.yaml file:

apiVersion: k8s.otterize.com/v1alpha2
kind: ClientIntents
metadata:
  name: client
  namespace: otterize-tutorial-npol
spec:
  service:
    name: client
  calls:
    - name: server
      type: http

Let's apply it:

kubectl apply -f https://docs.otterize.com/code-examples/automate-network-policies/intents.yaml
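
To verify that the client can still reach the server now that a network policy is in place, you can follow the client's logs (this assumes, as in this tutorial's demo, that the client logs each call it makes and is managed by a Deployment named client):

kubectl logs -f -n otterize-tutorial-npol deploy/client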

Track artifacts

After applying the intents file, Otterize performed multiple actions to allow access from the client to the server using network policies:

  • Create a network policy allowing traffic from pods carrying the client label, in namespaces carrying the namespace label, to pods carrying the server label
  • Label the client pods
  • Label the client pod namespaces
  • Label the server pods
  1. Let's look at the generated network policy:

    kubectl describe networkpolicies -n otterize-tutorial-npol access-to-server-from-otterize-tutorial-npol

    You should see (without the comments):

    Name:         access-to-server-from-otterize-tutorial-npol
    # [Target filter] namespace
    Namespace:    otterize-tutorial-npol
    Created on:   2022-09-08 19:12:24 +0300 IDT
    Labels:       intents.otterize.com/network-policy=true
    Annotations:  <none>
    Spec:
      # [Target filter] pods with this label
      PodSelector:     intents.otterize.com/server=server-otterize-tutorial-np-7e16db
      Allowing ingress traffic:
        To Port: <any> (traffic allowed to all ports)
        From:
          # [Source filter] namespaces with this label
          NamespaceSelector: intents.otterize.com/namespace-name=otterize-tutorial-npol
          # [Source filter] pods with this label
          PodSelector: intents.otterize.com/access-server-otterize-tutorial-np-7e16db=true
      Not affecting egress traffic
      # [Direction]
      Policy Types: Ingress
  2. And we can also see that the client and server pods are now labeled:

    kubectl get pods -n otterize-tutorial-npol --show-labels

    You should see:

    NAME                      READY   STATUS    RESTARTS   AGE     LABELS
    client-5cb67b748-l25vg    1/1     Running   0          7m57s   intents.otterize.com/access-server-otterize-tutorial-np-7e16db=true,intents.otterize.com/client=true,intents.otterize.com/server=client-otterize-tutorial-np-699302,pod-template-hash=5cb67b748,credentials-operator.otterize.com/service-name=client
    server-564b56f596-54str   1/1     Running   0          7m56s   intents.otterize.com/server=server-otterize-tutorial-np-7e16db,pod-template-hash=564b56f596,credentials-operator.otterize.com/service-name=server

    When we break down the label structure we can see:

  • For the server - intents.otterize.com/server=server-otterize-tutorial-np-7e16db
    • intents.otterize.com/server - Label prefix for servers
    • server - Server pod name
    • otterize-tutorial-np - Server pod namespace (might be truncated)
    • 7e16db - Hash of the server pod name and namespace
  • For the client - intents.otterize.com/access-server-otterize-tutorial-np-7e16db=true
    • intents.otterize.com/access - Label prefix for clients
    • server - Server pod name
    • otterize-tutorial-np - Server pod namespace (might be truncated)
    • 7e16db - Hash of the server pod name and namespace
  3. Finally, let's look at the namespace label:

    kubectl get namespace otterize-tutorial-npol --show-labels

    You should see:

    NAME                     STATUS   AGE   LABELS
    otterize-tutorial-npol   Active   36s   intents.otterize.com/namespace-name=otterize-tutorial-npol,kubernetes.io/metadata.name=otterize-tutorial-npol

    Note the new label added by Otterize: intents.otterize.com/namespace-name=otterize-tutorial-npol
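
As a quick check of this label-based selection, you can list the pods matched by the source podSelector of the generated network policy, using the access label we saw above:

    kubectl get pods -n otterize-tutorial-npol -l intents.otterize.com/access-server-otterize-tutorial-np-7e16db=true

This should return only the client pod, i.e. exactly the set of pods the policy allows to reach the server.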