NetworkPolicy automation
Otterize automates pod-to-pod access control within your cluster, using network policies.
Instead of managing pod identities, labeling clients, servers and namespaces, and manually authoring individual network policies, Otterize implements intent-based access control (IBAC). You just declare what calls the client pods intend to make, and everything is automatically wired together so only intended calls are allowed.
In this tutorial, we will:
- Deploy a server pod, and two client pods calling it.
- Declare that the first client intends to call the server.
- See that a network policy was autogenerated to allow just that, and block the (undeclared) calls from the other client.
Prerequisites
Prepare a Kubernetes cluster that supports network policies
Before you start, you'll need a Kubernetes cluster with a CNI that supports NetworkPolicies.
Below are instructions for setting up a Kubernetes cluster with network policies. If you don't have a cluster already, we recommend starting out with a Minikube cluster.
- Minikube
- Google GKE
- AWS EKS
- Azure AKS
If you don't have the Minikube CLI, first install it.
Then start your Minikube cluster with Calico, in order to enforce network policies.
minikube start --cpus=4 --memory 4096 --disk-size 32g --cni=calico
The increased CPU, memory, and disk allocations are required to successfully deploy the ecommerce app used in the visual tutorials.
- gcloud CLI
- Console
To use the gcloud CLI for this tutorial, first install and then initialize it.
To enable network policy enforcement when creating a new cluster:
Run the following command:
gcloud container clusters create CLUSTER_NAME --enable-network-policy --zone=ZONE
(Replace CLUSTER_NAME with the name of the new cluster and ZONE with your zone.)
To enable network policy enforcement for an existing cluster, perform the following tasks:
Run the following command to enable the add-on:
gcloud container clusters update CLUSTER_NAME --update-addons=NetworkPolicy=ENABLED
(Replace CLUSTER_NAME with the name of the cluster.)
Then enable network policy enforcement on your cluster; this re-creates your cluster's node pools with network policy enforcement enabled:
gcloud container clusters update CLUSTER_NAME --enable-network-policy
(Replace CLUSTER_NAME with the name of the cluster.)
To enable network policy enforcement when creating a new cluster:
Go to the Google Kubernetes Engine page in the Google Cloud console.
On the Google Kubernetes Engine page, click Create.
Configure your cluster as desired.
From the navigation pane, under Cluster, click Networking.
Select the checkbox to Enable network policy.
Click Create.
To enable network policy enforcement for an existing cluster:
Go to the Google Kubernetes Engine page in the Google Cloud console.
In the cluster list, click the name of the cluster you want to modify.
Under Networking, in the Network policy field, click Edit network policy.
Select the checkbox to Enable network policy for master and click Save Changes.
Wait for your changes to apply, and then click Edit network policy again.
Select the checkbox to Enable network policy for nodes.
Click Save Changes.
Starting August 29, 2023, you can configure the built-in VPC CNI add-on to enable network policy support.
To spin up a new cluster, use the following eksctl ClusterConfig, saved to a file called cluster.yaml.
Spin up the cluster using eksctl create cluster -f cluster.yaml. This will create a cluster called network-policy-demo in us-west-2.
The important bit is the configuration for the VPC CNI add-on:
configurationValues: |-
  enableNetworkPolicy: "true"
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: network-policy-demo
  version: "1.27"
  region: us-west-2

iam:
  withOIDC: true

vpc:
  clusterEndpoints:
    publicAccess: true
    privateAccess: true

addons:
  - name: vpc-cni
    version: 1.14.0
    attachPolicyARNs: # optional
      - arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy
    configurationValues: |-
      enableNetworkPolicy: "true"
  - name: coredns
  - name: kube-proxy

managedNodeGroups:
  - name: x86-al2-on-demand
    amiFamily: AmazonLinux2
    instanceTypes: [ "m6i.xlarge", "m6a.xlarge" ]
    minSize: 0
    desiredCapacity: 2
    maxSize: 6
    privateNetworking: true
    disableIMDSv1: true
    volumeSize: 100
    volumeType: gp3
    volumeEncrypted: true
    tags:
      team: "eks"
For guides that deploy the larger set of services, Kafka and ZooKeeper are also deployed, and you will need the EBS CSI driver to accommodate their storage needs. Follow the AWS guide for the EBS CSI add-on to do so. If you're not using the VPC CNI, you can set up the Calico network policy controller using the following instructions:
Visit the official documentation, or follow the instructions below:
- Spin up an EKS cluster using the console, AWS CLI or eksctl.
- Install Calico for network policy enforcement, without replacing the CNI:
kubectl apply -f https://raw.githubusercontent.com/aws/amazon-vpc-cni-k8s/v1.12.6/config/master/calico-operator.yaml
kubectl apply -f https://raw.githubusercontent.com/aws/amazon-vpc-cni-k8s/v1.12.6/config/master/calico-crs.yaml
You can set up an AKS cluster using this guide.
For network policy support, no setup is required: Azure AKS comes with a built-in network policy implementation called Azure Network Policy Manager. You can choose whether you'd like to use this option or Calico when you create a cluster.
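For example, assuming the az CLI is installed and logged in, a new AKS cluster with Calico selected as the network policy engine can be created as follows (the resource group and cluster names below are placeholders):

```shell
# Create an AKS cluster with Calico as the network policy engine.
# myResourceGroup and myAKSCluster are placeholder names.
az aks create \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --network-plugin azure \
  --network-policy calico
```

Passing azure instead of calico selects Azure Network Policy Manager; omitting the flag leaves network policy enforcement disabled.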
Read more at the official documentation site.

You can now install (or reinstall) Otterize in your cluster, and optionally connect to Otterize Cloud. Connecting to Cloud lets you:
- See what's happening visually in your browser, through the "access graph";
- Avoid using SPIRE (which can be installed with Otterize) for issuing certificates, as Otterize Cloud provides a certificate service.
So either forego browser visualization and:
Install Otterize in your cluster, without Otterize Cloud
You'll need Helm installed on your machine to install Otterize as follows:
helm repo add otterize https://helm.otterize.com
helm repo update
helm install otterize otterize/otterize-kubernetes -n otterize-system --create-namespace
This chart is a bundle of the Otterize intents operator, Otterize credentials operator, Otterize network mapper, and SPIRE.
Initial deployment may take a couple of minutes.
You can add the --wait flag for Helm to wait for deployment to complete and all pods to be Ready, or manually watch for all pods to be Ready using kubectl get pods -n otterize-system -w.
After all the pods are ready, you should see the following (or similar) when you run kubectl get pods -n otterize-system:
NAME                                                       READY   STATUS    RESTARTS   AGE
credentials-operator-controller-manager-6c56fcfcfb-vg6m9   2/2     Running   0          9s
intents-operator-controller-manager-65bb6d4b88-bp9pf       2/2     Running   0          9s
otterize-network-mapper-779fffd959-twjqd                   1/1     Running   0          9s
otterize-network-sniffer-65mjt                             1/1     Running   0          9s
otterize-spire-agent-lcbq2                                 1/1     Running   0          9s
otterize-spire-server-0                                    2/2     Running   0          9s
otterize-watcher-b9bf87bcd-276nt                           1/1     Running   0          9s
Or choose to include browser visualization and:
Install Otterize in your cluster, with Otterize Cloud
Create an Otterize Cloud account
If you don't already have an account, browse to https://app.otterize.com to set one up.
If someone in your team has already created an org in Otterize Cloud, and invited you (using your email address), you may see an invitation to accept.
Otherwise, you'll create a new org, which you can later rename, and invite your teammates to join you there.
Install Otterize OSS, connected to Otterize Cloud
If no Kubernetes clusters are connected to your account, click the "connect your cluster" button to:
- Create a Cloud cluster object, specifying its name and the name of an environment to which all namespaces in that cluster will belong, by default.
- Connect it with your actual Kubernetes cluster, by clicking on the "Connection guide →" link and running the Helm commands shown there.
- Follow the instructions to install Otterize with enforcement on (use the toggle to set Enforcement mode: active).
More details, if you're curious
Connecting your cluster simply entails installing Otterize OSS via Helm, using credentials from your account so Otterize OSS can report information needed to visualize the cluster.
The credentials will already be inlined into the Helm command shown in the Cloud UI, so you just need to copy that line and run it from your shell. If you don't give it the Cloud credentials, Otterize OSS will run fully standalone in your cluster — you just won't have the visualization in Otterize Cloud.
The Helm command shown in the Cloud UI also includes flags to turn off enforcement: Otterize OSS will be running in "shadow mode," meaning that it will show you what would happen if it were to create/update your access controls (Kubernetes network policies, Kafka ACLs, Istio authorization policies, etc.). While that's useful for gradually rolling out IBAC, for this tutorial we go straight to active enforcement.
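If you're installing without the Cloud UI and want to start in shadow mode yourself, the chart exposes a value for this. The key below is an assumption based on the Otterize OSS chart and may differ between chart versions, so verify it with helm show values otterize/otterize-kubernetes before relying on it:

```shell
# Sketch: install Otterize with enforcement turned off (shadow mode).
# The enableEnforcement key is an assumption -- confirm it against your chart version.
helm install otterize otterize/otterize-kubernetes -n otterize-system --create-namespace \
  --set intentsOperator.operator.enableEnforcement=false
```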
Deploy the server and the two clients
Our simple example consists of three pods: an HTTP server and two clients that call it.
Expand to see the example YAML files
- namespace.yaml
- server.yaml
- client.yaml
- client-other.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: otterize-tutorial-npol

apiVersion: apps/v1
kind: Deployment
metadata:
  name: server
  namespace: otterize-tutorial-npol
spec:
  selector:
    matchLabels:
      app: server
  template:
    metadata:
      labels:
        app: server
    spec:
      containers:
        - name: server
          image: node:19
          command: [ "/bin/sh","-c" ]
          args: [ "echo \"Hi, I am the server, you called, may I help you?\" > index.html; npx --yes http-server -p 80 " ]
---
apiVersion: v1
kind: Service
metadata:
  name: server
  namespace: otterize-tutorial-npol
spec:
  selector:
    app: server
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80

apiVersion: apps/v1
kind: Deployment
metadata:
  name: client
  namespace: otterize-tutorial-npol
spec:
  selector:
    matchLabels:
      app: client
  template:
    metadata:
      labels:
        app: client
    spec:
      containers:
        - name: client
          image: alpine/curl
          command: [ "/bin/sh", "-c", "--" ]
          args: [ "while true; do echo \"Calling server...\"; if ! timeout 2 curl -si server 2>/dev/null; then echo \"curl timed out\"; fi; sleep 2; done" ]

apiVersion: apps/v1
kind: Deployment
metadata:
  name: client-other
  namespace: otterize-tutorial-npol
spec:
  selector:
    matchLabels:
      app: client-other
  template:
    metadata:
      labels:
        app: client-other
    spec:
      containers:
        - name: client-other
          image: alpine/curl
          command: [ "/bin/sh", "-c", "--" ]
          args: [ "while true; do echo \"Calling server...\"; if ! timeout 2 curl -si server 2>/dev/null; then echo \"curl timed out\"; fi; sleep 2; done" ]
- Deploy the two clients and the server in their namespace using kubectl:
kubectl apply -f https://docs.otterize.com/code-examples/automate-network-policies/all.yaml
Optional: check deployment status
kubectl get pods -n otterize-tutorial-npol
You should see
NAME                           READY   STATUS    RESTARTS   AGE
client-596bcb48d5-pnjxc        1/1     Running   0          8s
client-other-f56d65d7f-z2wg2   1/1     Running   0          8s
server-6bb4784ccc-wtz7f        1/1     Running   0          8s
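If you'd rather block until everything is up than poll the pod list, kubectl wait can do it (plain kubectl, nothing Otterize-specific):

```shell
# Wait up to two minutes for all pods in the tutorial namespace to become Ready.
kubectl wait pods --all --for=condition=Ready -n otterize-tutorial-npol --timeout=120s
```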
Let's monitor both clients' attempts to call the server from additional terminal windows, so we can see the effects of our changes in real time.
- Open a new terminal window [client] and tail the client log:
kubectl logs -f --tail 1 -n otterize-tutorial-npol deploy/client
Expected output
At this point the client should be able to communicate with the server:
Calling server...
HTTP/1.1 200 OK
...
Hi, I am the server, you called, may I help you?
- Open another terminal window [client-other] and tail the client-other log:
kubectl logs -f --tail 1 -n otterize-tutorial-npol deploy/client-other
Expected output
At this point client-other should also be able to communicate with the server:
Calling server...
HTTP/1.1 200 OK
...
Hi, I am the server, you called, may I help you?
If you've attached Otterize OSS to Otterize Cloud, you can now browse to your account at https://app.otterize.com and see the access graph for your cluster:
Apply intents
We will now declare that the client intends to call the server.
When the intents YAML is applied, creating a custom resource of type ClientIntents, Otterize will add a network policy to allow the intended calls (client → server) and block all undeclared calls (e.g., client-other → server).
You can click on the services or the lines connecting them to see which ClientIntents you need to apply to make the connection go green!
- Here is the intents.yaml declaration of the client, which we will apply below:
apiVersion: k8s.otterize.com/v1alpha3
kind: ClientIntents
metadata:
  name: client
  namespace: otterize-tutorial-npol
spec:
  service:
    name: client
  calls:
    - name: server
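As an aside, intents can also target a server in another namespace, by suffixing the target with its namespace in the form name: <server>.<namespace>. A hedged sketch (the other-namespace namespace is hypothetical and does not exist in this tutorial):

```yaml
apiVersion: k8s.otterize.com/v1alpha3
kind: ClientIntents
metadata:
  name: client
  namespace: otterize-tutorial-npol
spec:
  service:
    name: client
  calls:
    # <server>.<namespace> targets a server outside the client's namespace;
    # "other-namespace" is hypothetical, for illustration only.
    - name: server.other-namespace
```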
See it in action
Keep an eye on the logs being tailed in the [client-other] terminal window, and apply this intents.yaml file in your main terminal window using:
kubectl apply -f https://docs.otterize.com/code-examples/automate-network-policies/intents.yaml
Client intents are the cornerstone of intent-based access control (IBAC).
- In the [client-other] terminal window you should see that calls now time out, as expected since it didn't declare its intents:
Calling server...
HTTP/1.1 200 OK
...
Hi, I am the server, you called, may I help you? # <- before applying the intents file
Calling server... # <- after applying the intents file
curl timed out
Calling server...
curl timed out
Not seeing the timeout?
If client-other isn't timing out, the installed CNI plugin likely does not support network policies. Consult the docs for your Kubernetes distribution, or head back to the Calico installation section to install one. For example, Minikube does not start with a network-policy-capable CNI by default, but you can ask it to start with one that does, such as Calico.
- And in the [client] terminal you should see that calls go through, as expected since they were declared:
Calling server...
HTTP/1.1 200 OK
...
Hi, I am the server, you called, may I help you?
- You should also see that a new network policy was created:
kubectl get NetworkPolicies -n otterize-tutorial-npol
This should return:
NAME                                           POD-SELECTOR                                         AGE
access-to-server-from-otterize-tutorial-npol   otterize/server=server-otterize-tutorial-np-7e16db   6s
If you've attached Otterize OSS to Otterize Cloud, go back to see the access graph in your browser:
It's now clear what happened:
- The server is now protected, and is also blocking some of its clients. Click on it to see what to do about it.
- Calls from [client] are declared and therefore allowed (green line).
- Calls from [client-other] are not declared and therefore blocked (red line). Click on the line to see what to do about it.
Otterize did its job of both protecting the server and allowing intended access.
What did we accomplish?
Controlling access through network policies no longer means touching network policies at all.
Clients simply declare what they need to access with their intents files.
The next kubectl apply ensures that network policies automatically reflect the intended pod-to-pod access.
Expand to see what happened behind the scenes
Otterize generated a specific network policy on the ingress of the server's pods, allowing them to be accessed by the client's pods. Otterize uses labels to define the network policy and associate it with the server and the client, each in their namespace, as follows:
- The server's pods are given a label intents.otterize.com/server whose value uniquely represents that server. The network policy stipulates that it applies to the ingress of server pods with this label.
- The client's pods are given a label intents.otterize.com/access-... derived from the server's unique intents.otterize.com/server value. The network policy stipulates that only client pods with this matching label can access the server.
- The client's namespace is given a label intents.otterize.com/namespace-name whose value is the namespace of the client. The network policy stipulates that only client pods whose namespaces have this label can access the server. This is used to allow cross-namespace intents.
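Putting those three labels together, the generated policy looks roughly like the sketch below. The label values are illustrative placeholders (Otterize generates unique values per server, and the exact label keys can vary by operator version):

```yaml
# Illustrative sketch of the autogenerated policy -- not the literal output.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: access-to-server-from-otterize-tutorial-npol
  namespace: otterize-tutorial-npol
spec:
  podSelector:
    matchLabels:
      # applies to the ingress of the server's pods
      intents.otterize.com/server: server-otterize-tutorial-npol-<unique>
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              # only client pods carrying the matching access label...
              intents.otterize.com/access-server-otterize-tutorial-npol-<unique>: "true"
          namespaceSelector:
            matchLabels:
              # ...running in a namespace labeled with its own name
              intents.otterize.com/namespace-name: otterize-tutorial-npol
```

Combining podSelector and namespaceSelector in a single from entry means both must match, which is what restricts access to labeled clients in labeled namespaces.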
Otterize saved us all of this work: we simply declared the client's intents in intents.yaml, and the appropriate network policies were managed automatically behind the scenes.
Further information about network policies and Otterize can be found here.
Try to create an intents file yourself for client-other, and apply it to allow this other client to call the server.
What's next
- Get started with the Otterize network mapper to help you bootstrap intents files for use in intent-based access control (IBAC).
Teardown
To remove the deployed examples run:
kubectl delete namespace otterize-tutorial-npol