Network policies on AWS EKS with the VPC CNI

This tutorial will walk you through deploying an AWS EKS cluster with the AWS VPC CNI add-on, while enabling the new network policy support on EKS with Otterize.


Step one: Create an AWS EKS cluster with the AWS VPC CNI plugin

Before you start, you'll need an AWS Kubernetes cluster. Having a cluster with a CNI that supports NetworkPolicies is required for this tutorial.
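
As a reminder, Kubernetes NetworkPolicy resources only take effect when the cluster's CNI enforces them. For illustration, a minimal policy that allows ingress to a server pod only from a labeled client (all label names here are hypothetical) looks like:

```yaml
# Illustrative only: allows ingress to pods labeled app=server
# solely from pods labeled app=client; all other ingress to those
# pods is denied once this policy selects them.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-client-to-server
spec:
  podSelector:
    matchLabels:
      app: server
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: client
```

Otterize generates policies like this for you from declared intents, so you won't write them by hand in this tutorial.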

Save this YAML as cluster-config.yaml:

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: np-ipv4-127
  region: us-west-2
  version: "1.27"

iam:
  withOIDC: true

vpc:
  clusterEndpoints:
    publicAccess: true
    privateAccess: true

addons:
  - name: vpc-cni
    version: 1.14.0
    attachPolicyARNs: # optional
      - arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy
    configurationValues: |-
      enableNetworkPolicy: "true"
  - name: coredns
  - name: kube-proxy

managedNodeGroups:
  - name: small-on-demand
    amiFamily: AmazonLinux2
    instanceTypes: ["t3.large"]
    minSize: 0
    desiredCapacity: 2
    maxSize: 6
    privateNetworking: true
    disableIMDSv1: true
    volumeSize: 100
    volumeType: gp3
    volumeEncrypted: true
    tags:
      team: "eks"
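
Once the cluster is up, you can sanity-check that the network policy agent was actually enabled. A sketch, assuming the VPC CNI's default DaemonSet name (aws-node in kube-system); container names may vary by CNI version:

```shell
# The VPC CNI runs as the aws-node DaemonSet in kube-system. With
# enableNetworkPolicy: "true", a network policy agent container runs
# alongside the CNI container, so listing the pod template's container
# names should show more than just the aws-node container.
kubectl get daemonset aws-node -n kube-system \
  -o jsonpath='{.spec.template.spec.containers[*].name}'
```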

Then run the following command to create your AWS cluster. Don't have eksctl? Install it now.

eksctl create cluster -f cluster-config.yaml

Once your AWS EKS cluster has finished deploying the control plane and node group, the next step is deploying Otterize, as well as a couple of clients and a server, to see how they are affected by network policies.

Step two: Install the Otterize agents

Install Otterize on your cluster

You can now install Otterize in your cluster, and optionally connect to Otterize Cloud. Connecting to Cloud lets you see what's happening visually in your browser, through the "access graph".

So either forego browser visualization and:

Install Otterize in your cluster, without Otterize Cloud

You'll need Helm installed on your machine to install Otterize as follows:

helm repo add otterize https://helm.otterize.com
helm repo update
helm install otterize otterize/otterize-kubernetes -n otterize-system --create-namespace

This chart is a bundle of the Otterize intents operator, Otterize credentials operator, Otterize network mapper, and SPIRE. Initial deployment may take a couple of minutes. You can add the --wait flag for Helm to wait for deployment to complete and all pods to be Ready, or manually watch for all pods to be Ready using kubectl get pods -n otterize-system -w.
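
Putting the steps above together with the wait behavior just described, the install could look like this (assuming the standard Otterize chart repository URL):

```shell
# Add the Otterize chart repository and install the bundle, blocking
# until all pods in the otterize-system namespace are Ready
# (initial deployment may take a couple of minutes).
helm repo add otterize https://helm.otterize.com
helm repo update
helm install otterize otterize/otterize-kubernetes \
  -n otterize-system --create-namespace --wait
```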

After all the pods are ready you should see the following (or similar) in your terminal when you run kubectl get pods -n otterize-system:

NAME                                                      READY   STATUS    RESTARTS   AGE
credentials-operator-controller-manager-6c56fcfcfb-vg6m9  2/2     Running   0          9s
intents-operator-controller-manager-65bb6d4b88-bp9pf      2/2     Running   0          9s
otterize-network-mapper-779fffd959-twjqd                  1/1     Running   0          9s
otterize-network-sniffer-65mjt                            1/1     Running   0          9s
otterize-spire-agent-lcbq2                                1/1     Running   0          9s
otterize-spire-server-0                                   2/2     Running   0          9s
otterize-watcher-b9bf87bcd-276nt                          1/1     Running   0          9s

Or choose to include browser visualization and:

Install Otterize in your cluster, with Otterize Cloud

Create an Otterize Cloud account

If you don't already have an account, browse to the Otterize Cloud app to set one up.

If someone in your team has already created an org in Otterize Cloud, and invited you (using your email address), you may see an invitation to accept.

Otherwise, you'll create a new org, which you can later rename, and invite your teammates to join you there.

Install Otterize OSS, connected to Otterize Cloud

If no Kubernetes clusters are connected to your account, click the "Connect your cluster" button to:

  1. Create a Cloud cluster object, specifying its name and the name of an environment to which all namespaces in that cluster will belong, by default.
  2. Connect it with your actual Kubernetes cluster, by clicking on the "Connection guide" link and running the Helm commands shown there. Choose Enforcement mode: disabled to apply shadow mode on every server until you're ready to protect it.
More details, if you're curious

Connecting your cluster simply entails installing Otterize OSS via Helm, using credentials from your account so Otterize OSS can report information needed to visualize the cluster.

The credentials will already be inlined into the Helm command shown in the Cloud UI, so you just need to copy that line and run it from your shell. If you don't give it the Cloud credentials, Otterize OSS will run fully standalone in your cluster; you just won't have the visualization in Otterize Cloud.

The Helm command shown in the Cloud UI also includes flags to turn off enforcement: Otterize OSS will be running in "shadow mode," meaning that it will not create network policies to restrict pod-to-pod traffic, or create Kafka ACLs to control access to Kafka topics. Instead, it will report to Otterize Cloud what would happen if enforcement were to be enabled, guiding you to implement IBAC without blocking intended access.

Finally, you'll need to install the Otterize CLI (if you haven't already) to interact with the network mapper:

Install the Otterize CLI
brew install otterize/otterize/otterize-cli

More variants are available at the GitHub Releases page.

Deploy a server and two clients

So that we have some pods to look at (and protect), you can install our simple demo app, which deploys a server and two clients.

kubectl apply -f

Once you have that installed and running your Otterize access graph should look something like this:

Access Graph

Step three: Create an intent

Now that you have Otterize installed, the next step is to create an intent which will enable access to the server from the client. If you enable protection on the server without declaring an intent, the client will be blocked.


You can click on the services or the lines connecting them to see which ClientIntents you need to apply to make the connection go green!

otterize network-mapper export --server server.otterize-tutorial-eks | kubectl apply -f -

Running this command will generate a ClientIntents resource for each client connected to server and apply them to your cluster. You could also place them in a Helm chart or apply them some other way, instead of piping them directly to kubectl.
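
To see the effect in the cluster itself, and not just the access graph, you can inspect the network policies the intents operator creates. The policy names are generated by Otterize and will be cluster-specific, so this just lists whatever exists in the demo namespace:

```shell
# List the network policies created in the demo namespace
kubectl get networkpolicies -n otterize-tutorial-eks

# Describe them (no name given, so all are shown) to see the
# generated pod selectors and ingress rules
kubectl describe networkpolicy -n otterize-tutorial-eks
```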

apiVersion: k8s.otterize.com/v1alpha3
kind: ClientIntents
metadata:
  name: client
  namespace: otterize-tutorial-eks
spec:
  service:
    name: client
  calls:
    - name: server
---
apiVersion: k8s.otterize.com/v1alpha3
kind: ClientIntents
metadata:
  name: client-other
  namespace: otterize-tutorial-eks
spec:
  service:
    name: client-other
  calls:
    - name: server

At which point you should see that the server service is ready to be protected:

One intent applied

And you can then protect the server service by saving the following YAML as protect-server.yaml:

apiVersion: k8s.otterize.com/v1alpha3
kind: ProtectedService
metadata:
  name: server
  namespace: otterize-tutorial-eks
spec:
  name: server

Protect the server by applying the resource:

kubectl apply -f protect-server.yaml

And you should see your access graph showing the service as protected:

Protected Service
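
With the server protected, traffic from any pod without a declared intent should now be blocked. A quick sketch of how you might verify this; the service name (server) and port (80) are assumptions here, so adjust them to the demo's actual values:

```shell
# Launch a throwaway pod with no declared intents and try to reach
# the server; the request should time out because no ClientIntents
# exists for this pod.
kubectl run np-test --rm -it --restart=Never -n otterize-tutorial-eks \
  --image=busybox -- wget -qO- --timeout=5 http://server:80 \
  || echo "blocked, as expected"
```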

What's next

Have a look at the guide on how to deploy protection to a larger, more complex application one step at a time.


To remove the deployed examples run:

kubectl delete -f protect-server.yaml
otterize network-mapper export --server server.otterize-tutorial-eks | kubectl delete -f -
kubectl delete -f
helm uninstall otterize -n otterize-system
eksctl delete cluster -f cluster-config.yaml