Protecting one service with network policies
Otterize enables intent-based access control (IBAC). In this guide, we'll roll out IBAC gradually, protecting just one service, and taking it all the way to production. We'll show how this can be done quickly, safely, and reproducibly:
- Choose one service to protect. Until you ensure its intended clients will have access, you'll run in "shadow mode": no network policies will actually be created to restrict access to this server.
- Declare all its clients' intents to call it — which may be done automatically using the network mapper. See that it would now allow those clients if protection (using network policies) were turned on.
- Turn on protection for this one service: it is now secure against unintended access.
- Take it to production by understanding how this would also not break other production-relevant access such as ingress and policy management (e.g. Kyverno), and by putting this into your CI/CD process.
The goal is to show you how to realize zero trust, in production, in a matter of hours or days, even if it's just for one or a few services at first. It is that easy.
This guide uses the Google microservices demo (a simple e-commerce application), deployed to a Kubernetes cluster, for illustration.
Note: all the capabilities of IBAC are within Otterize OSS, while the access graph in Otterize Cloud will guide us visually at each step.
Prerequisites
Prepare a cluster
Before you start, you'll need a Kubernetes cluster.
Below are instructions for setting up a Kubernetes cluster with network policies. If you don't have a cluster already, we recommend starting out with a Minikube cluster.
- Minikube
- Google GKE
- AWS EKS
- Azure AKS
If you don't have the Minikube CLI, first install it.
Then start your Minikube cluster with Calico, in order to enforce network policies.
minikube start --cpus=4 --memory 4096 --disk-size 32g --cni=calico
The increased CPU, memory, and disk allocations are required to successfully deploy the e-commerce demo app used in the visual tutorials.
- gcloud CLI
- Console
To use the gcloud CLI for this tutorial, first install and then initialize it.
To enable network policy enforcement when creating a new cluster:
Run the following command:
gcloud container clusters create CLUSTER_NAME --enable-network-policy --zone=ZONE
(Replace CLUSTER_NAME with the name of the new cluster and ZONE with your zone.)
To enable network policy enforcement for an existing cluster, perform the following tasks:
Run the following command to enable the add-on:
gcloud container clusters update CLUSTER_NAME --update-addons=NetworkPolicy=ENABLED
(Replace CLUSTER_NAME with the name of the cluster.)
Then enable network policy enforcement on your cluster; this re-creates your cluster's node pools with network policy enforcement enabled:
gcloud container clusters update CLUSTER_NAME --enable-network-policy
(Replace CLUSTER_NAME with the name of the cluster.)
To enable network policy enforcement when creating a new cluster:
Go to the Google Kubernetes Engine page in the Google Cloud console.
On the Google Kubernetes Engine page, click Create.
Configure your cluster as desired.
From the navigation pane, under Cluster, click Networking.
Select the checkbox to Enable network policy.
Click Create.
To enable network policy enforcement for an existing cluster:
Go to the Google Kubernetes Engine page in the Google Cloud console.
In the cluster list, click the name of the cluster you want to modify.
Under Networking, in the Network policy field, click Edit network policy.
Select the checkbox to Enable network policy for master and click Save Changes.
Wait for your changes to apply, and then click Edit network policy again.
Select the checkbox to Enable network policy for nodes.
Click Save Changes.
Starting August 29, 2023, you can configure the built-in VPC CNI add-on to enable network policy support.
To spin up a new cluster, use the following eksctl ClusterConfig and save it to a file called cluster.yaml.
Spin up the cluster using eksctl create cluster -f cluster.yaml. This will spin up a cluster called network-policy-demo in us-west-2.
The important bit is the configuration for the VPC CNI addon:
configurationValues: |-
  enableNetworkPolicy: "true"
Here is the full cluster.yaml:
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: network-policy-demo
  version: "1.27"
  region: us-west-2
iam:
  withOIDC: true
vpc:
  clusterEndpoints:
    publicAccess: true
    privateAccess: true
addons:
  - name: vpc-cni
    version: 1.14.0
    attachPolicyARNs: # optional
      - arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy
    configurationValues: |-
      enableNetworkPolicy: "true"
  - name: coredns
  - name: kube-proxy
managedNodeGroups:
  - name: x86-al2-on-demand
    amiFamily: AmazonLinux2
    instanceTypes: [ "m6i.xlarge", "m6a.xlarge" ]
    minSize: 0
    desiredCapacity: 2
    maxSize: 6
    privateNetworking: true
    disableIMDSv1: true
    volumeSize: 100
    volumeType: gp3
    volumeEncrypted: true
    tags:
      team: "eks"
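Once the cluster is up, you can optionally confirm that the network policy setting was applied to the vpc-cni add-on. This check is illustrative and assumes the cluster name and region from the config above:
aws eks describe-addon \
  --cluster-name network-policy-demo \
  --addon-name vpc-cni \
  --region us-west-2 \
  --query 'addon.configurationValues'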
For guides that deploy the larger set of services, Kafka and ZooKeeper are also deployed, and you will need the EBS CSI driver to accommodate their storage needs. Follow the AWS guide for the EBS CSI add-on to do so. If you're not using the VPC CNI, you can set up the Calico network policy controller instead:
Visit the official documentation, or follow the instructions below:
- Spin up an EKS cluster using the console, AWS CLI or eksctl.
- Install Calico for network policy enforcement, without replacing the CNI:
kubectl apply -f https://raw.githubusercontent.com/aws/amazon-vpc-cni-k8s/v1.12.6/config/master/calico-operator.yaml
kubectl apply -f https://raw.githubusercontent.com/aws/amazon-vpc-cni-k8s/v1.12.6/config/master/calico-crs.yaml
You can set up an AKS cluster using this guide.
For network policy support, no setup is required: Azure AKS comes with a built-in network policy implementation called Azure Network Policy Manager. You can choose whether you'd like to use this option or Calico when you create a cluster.
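For example, if you create the cluster with the Azure CLI, you can choose the implementation at creation time. The resource group and cluster names below are placeholders:
az aks create \
  --resource-group <RESOURCE_GROUP> \
  --name <CLUSTER_NAME> \
  --node-count 2 \
  --network-plugin azure \
  --network-policy azure
(Use --network-policy calico instead if you prefer Calico.)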
Read more at the official documentation site.

Deploy the demo set of services
To deploy the demo services into your cluster:
kubectl create namespace otterize-ecom-demo
kubectl apply -n otterize-ecom-demo -f https://docs.otterize.com/code-examples/shadow-mode/ecom-demo.yaml
Create an Otterize Cloud account
If you don't already have an account, browse to https://app.otterize.com to set one up.
If someone in your team has already created an org in Otterize Cloud, and invited you (using your email address), you may see an invitation to accept.
Otherwise, you'll create a new org, which you can later rename, and invite your teammates to join you there.
Install Otterize OSS
If no Kubernetes clusters are connected to your account, click the "Connect your cluster" button to:
- Create a Cloud cluster object, specifying its name and the name of an environment to which all namespaces in that cluster will belong, by default.
- Connect it with your actual Kubernetes cluster, by clicking on the "Connection guide →" link and running the Helm commands shown there. You'll want to use mode=defaultShadow so you're in shadow mode on every server until you're ready to protect it.
More details, if you're curious
Connecting your cluster simply entails installing Otterize OSS via Helm, using credentials from your account so Otterize OSS can report information needed to visualize the cluster.
The credentials will already be inlined into the Helm command shown in the Cloud UI, so you just need to copy that line and run it from your shell. If you don't give it the Cloud credentials, Otterize OSS will run fully standalone in your cluster — you just won't have the visualization in Otterize Cloud.
The Helm command shown in the Cloud UI also includes flags to turn off enforcement: Otterize OSS will be running in "shadow mode," meaning that it will not create network policies to restrict pod-to-pod traffic, or create Kafka ACLs to control access to Kafka topics. Instead, it will report to Otterize Cloud what would happen if enforcement were to be enabled, guiding you to implement IBAC without blocking intended access.
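For reference, the install typically looks roughly like the sketch below. The exact chart values, and especially the credential flags, come from the connection guide in the Cloud UI, and the intentsOperator.operator.mode value path shown here is an assumption; always prefer the command your own connection guide gives you.
helm repo add otterize https://helm.otterize.com
helm repo update
# Illustrative sketch only: placeholders and value paths may differ from your connection guide.
helm install otterize otterize/otterize-kubernetes -n otterize-system --create-namespace \
  --set intentsOperator.operator.mode=defaultShadow \
  --set global.otterizeCloud.credentials.clientId=<CLIENT_ID> \
  --set global.otterizeCloud.credentials.clientSecret=<CLIENT_SECRET>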
Optional: check that the demo was deployed.
To see all the pods in the demo:
kubectl get pods -n otterize-ecom-demo
The pods should all be ready and running:
NAME READY STATUS RESTARTS AGE
adservice-65494cbb9d-5lrv6 1/1 Running 0 115s
cartservice-6d84fc45bb-hdtwn 1/1 Running 0 115s
checkoutservice-5599486df-dvj9n 1/1 Running 3 (79s ago) 115s
currencyservice-6d64686d74-lxb7x 1/1 Running 0 115s
emailservice-7c6cbfbbd7-xjxlt 1/1 Running 0 115s
frontend-f9448d7d4-6dmnr 1/1 Running 0 115s
kafka-0 1/1 Running 2 (83s ago) 115s
loadgenerator-7f6987f59-bchgm 1/1 Running 0 114s
orderservice-7ffdbf6df-wzzfd 1/1 Running 0 115s
otterize-ecom-demo-zookeeper-0 1/1 Running 0 115s
paymentservice-86855d78db-zjjfn 1/1 Running 0 115s
productcatalogservice-5944c7f666-2rjc6 1/1 Running 0 115s
recommendationservice-6c8d848498-zm2rm 1/1 Running 0 114s
redis-cart-6b79c5b497-xpms2 1/1 Running 0 115s
shippingservice-85694cb9bd-v54xp 1/1 Running 0 114s
You can now browse the web app of this demo, if you wish:
- K8s
- Minikube
To get the externally-accessible URL where your demo front end is available, run:
kubectl get service -n otterize-ecom-demo frontend-external | awk '{print $4}'
The result should be similar to (if running on AWS EKS):
a11843075fd254f8099a986467098647-1889474685.us-east-1.elb.amazonaws.com
Go ahead and browse to the URL above to "shop" and get a feel for the demo's behavior. (The URL might take some time to propagate across DNS servers. Note that we are accessing an HTTP and not an HTTPS website.)
To get the externally-accessible URL where your demo front end is available, run:
kubectl port-forward -n otterize-ecom-demo service/frontend-external 8080:80 &
The demo is now accessible at:
http://localhost:8080
Go ahead and browse to the URL above to "shop" and get a feel for the demo's behavior.
Seeing the access graph
In the Otterize Cloud UI, your cluster should now show all 3 Otterize OSS operators — the network mapper, intents operator, and credentials operator — as connected, with a green status.

And when you go back to the access graph (and select your cluster from the dropdown, if needed), you should see the following map for the demo running in your cluster:
The graph shows services (nodes) connected by arrows (edges) indicating that one service (acting as a client) called another service (acting as a server). The arrows may be due to calls discovered by the network mapper, or to declared client intents YAMLs, or both.
In fact the graph shows a lot of interesting insights, such as:
- For each service you can see its namespace and environment. You can also see its state as a server and as a client, if applicable.
  - As a server, you'll see whether it's protected against unauthorized access: unprotected or protected.
  - As a client, you'll see whether it's allowed to call its servers: it would be blocked from making some of its calls if the corresponding servers were protected; it is allowed to make all its calls even when all its servers are protected; or it is blocked now from making some of its calls.
- An arrow (→) indicates an intent by a client to call a server. It's derived from any discovered intent — a call discovered by the network mapper — and any explicitly declared intent that the client declared in its ClientIntents YAML. Its color indicates the blocking status of calls from that client to that server: calls would be blocked if the server were protected; calls are allowed even when the server is protected; or calls are blocked right now.
For our demo, the services and the arrows are all yellow: the servers aren't (yet) protected, and since we haven't declared any intents, calls would be blocked if the servers were protected.
The graph can reveal more information, but this should suffice for the moment.
Otterize can configure several access control mechanisms, such as Istio authorization policies and Kafka ACLs, and the access graph can take into account their combined state. But for this demo, we're only using network policies, so let's adjust the access graph view to only take these network policies into account: at the top right, toggle on "Use in access graph" for network policies, toggle off for the others.

Choose one service to protect
Now let's prepare to protect just one service, but remain in shadow mode: no actual network policies, yet. We'll verify no intended access would be blocked before turning on the network policy protection.
Which service should you protect? That's up to you: maybe you have a particularly sensitive one that's higher priority; maybe you'd rather start with a less important one, until you feel confident.
In our case, we'll choose the productcatalogservice.
Zoom the access graph a bit to enlarge it around the productcatalogservice:
Click on the productcatalogservice to show more details about it:

We can see that:
- As a server, it's currently unprotected (specifically, by network policies); that's expected, as we haven't yet turned on protection.
- It would block its clients if it were protected (because there would be no network policies allowing their access).
- To authorize its clients' access, we're told to declare their intents (which would generate those network policies — this is what IBAC means, after all).
Go ahead and close the productcatalogservice details. It's time to declare its clients' intents.
Optional: Understanding intents from the access graph
The access graph shows three services calling the productcatalogservice: frontend, recommendationservice, and checkoutservice. The access graph shows an arrow between a client and a server if the network mapper discovered calls were happening ("discovered intent"), or if the client explicitly declared an intent to call the server, or both.
Click, for example, on the yellow arrow from frontend → productcatalogservice:

We see that:
- There is a discovered intent, but without a corresponding declared intent.
- Access would therefore be blocked, once the productcatalogservice server is protected.
Declare client intents
The graph visually tells us we'll need to declare all 3 of those clients' intents:
- frontend → productcatalogservice
- recommendationservice → productcatalogservice
- checkoutservice → productcatalogservice
But you don't actually have to look at the graph, nor know in advance the way the demo app is supposed to work. You can auto-generate the intents.
It's likely you'll want the client devs who own the frontend, recommendationservice, and checkoutservice to eventually own those intent declarations, evolve them with their client code as their clients' needs change, review and approve them when they do, etc. They can then serve themselves, and make sure they can access the servers they need, while those servers remain protected.
But if you're just getting started with IBAC, and want to first see it in production before getting client devs involved, you can just auto-generate the needed client intents. In fact, you don't need to know in advance which clients call the server: the network mapper will tell you all you need. Just make sure there is representative traffic (load) in your cluster so that the network mapper will see all the expected call patterns.
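If you'd like to first see everything the network mapper has discovered in the demo namespace, you can list it with the Otterize CLI as an optional sanity check (assuming the CLI is installed and your version supports namespace filtering with -n):
otterize network-mapper list -n otterize-ecom-demo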
Let's ask the network mapper to export all the client intents it discovered for the clients of productcatalogservice:
otterize network-mapper export --server productcatalogservice.otterize-ecom-demo
Here's the output:
apiVersion: k8s.otterize.com/v1alpha2
kind: ClientIntents
metadata:
  name: checkoutservice
  namespace: otterize-ecom-demo
spec:
  service:
    name: checkoutservice
  calls:
    - name: cartservice
    - name: currencyservice
    - name: emailservice
    - name: kafka
    - name: paymentservice
    - name: productcatalogservice
    - name: shippingservice
---
apiVersion: k8s.otterize.com/v1alpha2
kind: ClientIntents
metadata:
  name: frontend
  namespace: otterize-ecom-demo
spec:
  service:
    name: frontend
  calls:
    - name: adservice
    - name: cartservice
    - name: checkoutservice
    - name: currencyservice
    - name: productcatalogservice
    - name: recommendationservice
    - name: shippingservice
---
apiVersion: k8s.otterize.com/v1alpha2
kind: ClientIntents
metadata:
  name: recommendationservice
  namespace: otterize-ecom-demo
spec:
  service:
    name: recommendationservice
  calls:
    - name: productcatalogservice
These are indeed the 3 clients of the productcatalogservice.
The network mapper detected that these clients also call many servers besides the productcatalogservice, as you would expect by looking at the access graph.
Even though we're only looking to protect the productcatalogservice now, it's best to declare all of those calls from those 3 clients: those intents reflect the actual intent of the code, declaring them won't interfere with anything, and it will get us ready to protect those other servers too, in the future.
Let's apply these client intents:
otterize network-mapper export --server productcatalogservice.otterize-ecom-demo \
| kubectl apply -f -
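Optionally, confirm the ClientIntents resources now exist in the cluster before moving on:
kubectl get clientintents -n otterize-ecom-demo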
If we now look at the access graph, lo and behold: the lines and arrows from all 3 clients to productcatalogservice are now solid green: all the necessary intents have been declared. (And a lot of other lines are now green too, since these clients also call many other servers; these other servers may soon be ready to protect too.)
You can look at any of the calls to the productcatalogservice and see they would not be blocked if this server were protected, e.g.:

And you can verify the productcatalogservice would not block any of its discovered clients by clicking on it:

The server is still yellow because it's unprotected — let's fix that.
Protect the productcatalogservice
Now that we've verified no intended clients would be blocked, we can safely protect the server.
To do so, recall that we configured Otterize OSS to be in the defaultShadow mode: by default, it's in shadow mode for all services, not actually managing network policies for them. To protect a service is a simple matter of applying a ProtectedService YAML for it, overriding the default:
apiVersion: k8s.otterize.com/v1alpha2
kind: ProtectedService
metadata:
  name: productcatalogservice
  namespace: otterize-ecom-demo
spec:
  name: productcatalogservice
Let's apply this file to our cluster:
kubectl apply -n otterize-ecom-demo -f https://docs.otterize.com/code-examples/guides/protect-1-service-network-policies/protect-productcatalogservice.yaml
This has two effects:
- Applies a default-deny ingress network policy to the productcatalogservice to protect it against unauthorized (undeclared) access.
- Creates and manages network policies (including managing labels on client pods, this namespace, and this server pod) for all declared access. In other words, it enforces network policies only for this productcatalogservice server. (You can see the resulting policies with the check below.)
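To see this directly in the cluster, and not only in the access graph, list the network policies in the demo namespace. The exact policy names are generated and managed by the intents operator, so don't rely on them:
kubectl get networkpolicies -n otterize-ecom-demo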
Let's look again at the access graph to see what happened in the cluster:
Sure enough, the productcatalogservice is green: it's protected against unauthorized access, and allowing authorized clients since its clients' arrows are green. Clicking on it confirms this:

Ready for production
Will load balancers, ingress, and other external traffic be affected?
The intents operator automatically detects resources of kind Service (with type LoadBalancer or NodePort), or of kind Ingress, and creates network policies to allow external traffic to the relevant pods.
You do not need to configure anything to get this to work. Learn more here.
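In this demo, for example, frontend-external is a LoadBalancer-type Service (which is why you could fetch its external address earlier), so the intents operator allows its external traffic automatically. You can confirm the Service type with:
kubectl get service frontend-external -n otterize-ecom-demo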
Will admission webhook controllers, e.g. policy validators like Kyverno, be affected?
Because Otterize places default-deny network policies only on the individual pods you protect, not a global default-deny policy that would affect controllers in your cluster, it will not affect calls to admission webhook controllers, and they will continue functioning as before.
Working with Otterize in CI/CD
We recommend placing the ClientIntents and ProtectedService resource YAMLs alongside the services that own them, in their respective Git repositories (see the example layout below):
- The ProtectedService YAMLs alongside the servers they are protecting, e.g. in the Helm chart belonging to the server.
- The ClientIntents YAMLs, whether they were generated from the network mapper or created and maintained by the client developer teams, alongside each client, e.g. in the Helm chart belonging to the client.
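As a purely illustrative example (the repository and chart paths below are hypothetical), such a layout might look like:
frontend-repo/
  helm/
    templates/
      deployment.yaml
      client-intents.yaml        # ClientIntents declared by the frontend team
productcatalog-repo/
  helm/
    templates/
      deployment.yaml
      protected-service.yaml     # ProtectedService for productcatalogservice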
In summary
So what have we learned? You can gradually roll out IBAC and drive towards zero trust, service by service, in a safe, predictable, and quick way, by following 4 simple steps:
- Choose a service to protect, say <NAME>.<NAMESPACE>.
- Export its clients' intents: otterize network-mapper export --server <NAME>.<NAMESPACE> > protect-<NAME>.yaml.
- Declare those intents: kubectl apply -f protect-<NAME>.yaml.
- Now protect that server by running kubectl apply -f with the following YAML:
apiVersion: k8s.otterize.com/v1alpha2
kind: ProtectedService
metadata:
  name: <NAME>
  namespace: <NAMESPACE>
spec:
  name: <NAME>
Lather, rinse, repeat, protecting service after service as you grow more comfortable, with the access graph providing visibility at each step of the way.
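If you'd like to script those steps, here's a minimal sketch that wraps them into a shell function. It assumes the otterize CLI and kubectl are configured for your cluster, and it skips the human step of checking the access graph, so treat it as a starting point rather than a finished tool:
protect_service() {
  local NAME="$1" NAMESPACE="$2"

  # Steps 2+3: export the clients' intents discovered by the network mapper and declare them.
  otterize network-mapper export --server "$NAME.$NAMESPACE" | kubectl apply -f -

  # Review the access graph here and confirm no intended client would be blocked.

  # Step 4: protect the server with a ProtectedService resource.
  kubectl apply -f - <<EOF
apiVersion: k8s.otterize.com/v1alpha2
kind: ProtectedService
metadata:
  name: $NAME
  namespace: $NAMESPACE
spec:
  name: $NAME
EOF
}

# Example: protect_service productcatalogservice otterize-ecom-demo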
Teardown
To remove the deployed demo, run:
kubectl delete -n otterize-ecom-demo -f https://docs.otterize.com/code-examples/shadow-mode/all.yaml
kubectl delete -n otterize-ecom-demo -f https://docs.otterize.com/code-examples/shadow-mode/ecom-demo.yaml
kubectl delete namespace otterize-ecom-demo