Visual tutorial: IBAC with network policies
Otterize enables intent-based access control (IBAC). Building on the previous Kubernetes cluster mapping tutorial, we'll roll out IBAC in the cluster using Kubernetes network policies to control access: first in shadow mode, without enforcement, and then with enforcement on.
All the capabilities of IBAC are provided by Otterize OSS, while the access graph in Otterize Cloud will guide us visually through these steps.
In this tutorial, we will:
- Start where the previous tutorial left off: with a demo based on the Google microservices demo (a simple e-commerce application) deployed to a Kubernetes cluster, with Otterize OSS installed in the cluster and integrated with Otterize Cloud.
- Use the access graph and shadow mode along with intents to see what would happen once enforcement is turned on.
- Turn on enforcement and verify that what happened is what was expected.
Prerequisites
The following steps are only needed if you haven't already run through the Kubernetes cluster mapping tutorial.
Prepare a cluster
Before you start, you'll need a Kubernetes cluster.
Below are instructions for setting up a Kubernetes cluster with network policies. If you don't have a cluster already, we recommend starting out with a Minikube cluster.
- Minikube
- Google GKE
- AWS EKS
- Azure AKS
If you don't have the Minikube CLI, first install it.
Then start your Minikube cluster with Calico, in order to enforce network policies.
minikube start --cpus=4 --memory 8192 --disk-size 32g --cni=calico
The increased CPU, memory, and disk allocations are required to successfully deploy the e-commerce app used in the visual tutorials.
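Optionally, confirm that Calico is up before moving on. With Minikube's --cni=calico option, the Calico pods are typically created in kube-system (labels may vary by version):
kubectl get pods -n kube-system -l k8s-app=calico-node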
- gcloud CLI
- Console
To use the gcloud CLI for this tutorial, first install and then initialize it.
To enable network policy enforcement when creating a new cluster:
Run the following command:
gcloud container clusters create CLUSTER_NAME --enable-network-policy --zone=ZONE
(Replace CLUSTER_NAME with the name of the new cluster and ZONE with your zone.)
To enable network policy enforcement for an existing cluster, perform the following tasks:
Run the following command to enable the add-on:
gcloud container clusters update CLUSTER_NAME --update-addons=NetworkPolicy=ENABLED
(Replace CLUSTER_NAME with the name of the cluster.)
Then enable network policy enforcement on your cluster; this re-creates your cluster's node pools with network policy enforcement enabled:
gcloud container clusters update CLUSTER_NAME --enable-network-policy
(Replace CLUSTER_NAME with the name of the cluster.)
To enable network policy enforcement when creating a new cluster:
Go to the Google Kubernetes Engine page in the Google Cloud console.
On the Google Kubernetes Engine page, click Create.
Configure your cluster as desired.
From the navigation pane, under Cluster, click Networking.
Select the checkbox to Enable network policy.
Click Create.
To enable network policy enforcement for an existing cluster:
Go to the Google Kubernetes Engine page in the Google Cloud console.
In the cluster list, click the name of the cluster you want to modify.
Under Networking, in the Network policy field, click Edit network policy.
Select the checkbox to Enable network policy for master and click Save Changes.
Wait for your changes to apply, and then click Edit network policy again.
Select the checkbox to Enable network policy for nodes.
Click Save Changes.
- Spin up an EKS cluster using the console, AWS CLI, or eksctl.
- Install Calico for network policy enforcement, without replacing the CNI:
kubectl apply -f https://raw.githubusercontent.com/aws/amazon-vpc-cni-k8s/v1.12.6/config/master/calico-operator.yaml
kubectl apply -f https://raw.githubusercontent.com/aws/amazon-vpc-cni-k8s/v1.12.6/config/master/calico-crs.yaml
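Optionally, verify that Calico came up before continuing. With the operator-based install above, the Calico pods typically land in the calico-system namespace (the namespace may vary by Calico version):
kubectl get pods -n calico-system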
You can set up an AKS cluster using this guide.
For network policy support, no setup is required: Azure AKS comes with a built-in network policy implementation called Azure Network Policy Manager. You can choose whether you'd like to use this option or Calico when you create a cluster.
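For reference, network policy support is chosen at cluster creation time. A minimal sketch of creating an AKS cluster with Azure Network Policy Manager (the resource group and cluster name below are placeholders):
az aks create --resource-group my-resource-group --name my-aks-cluster --network-plugin azure --network-policy azure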
Read more at the official documentation site.
Deploy the demo set of services
To deploy the demo services into your cluster:
kubectl create namespace otterize-ecom-demo
kubectl apply -n otterize-ecom-demo -f https://docs.otterize.com/code-examples/shadow-mode/ecom-demo.yaml
Create an Otterize Cloud account
If you don't already have an account, browse to https://app.otterize.com to set one up.
If someone in your team has already created an org in Otterize Cloud, and invited you (using your email address), you may see an invitation to accept.
Otherwise, you'll create a new org, which you can later rename, and invite your teammates to join you there.
Install Otterize OSS
If no Kubernetes clusters are connected to your account, click the "connect your cluster" button to:
- Create a Cloud cluster object, specifying its name and the name of an environment to which all namespaces in that cluster will belong, by default.
- Connect it with your actual Kubernetes cluster, by clicking on the "Connection guide →" link and running the Helm commands shown there.
More details, if you're curious
Connecting your cluster simply entails installing Otterize OSS via Helm, using credentials from your account so Otterize OSS can report information needed to visualize the cluster.
The credentials will already be inlined into the Helm command shown in the Cloud UI, so you just need to copy that line and run it from your shell. If you don't give it the Cloud credentials, Otterize OSS will run fully standalone in your cluster — you just won't have the visualization in Otterize Cloud.
The Helm command shown in the Cloud UI also includes flags to turn off enforcement: Otterize OSS will be running in "shadow mode," meaning that it will not create network policies to restrict pod-to-pod traffic, or create Kafka ACLs to control access to Kafka topics. Instead, it will report to Otterize Cloud what would happen if enforcement were to be enabled, guiding you to implement IBAC without blocking intended access.
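For reference only, a shadow-mode install typically looks something like the sketch below; the repo URL and chart name here are assumptions, and the command shown in the Cloud UI (which also inlines your credentials flags) is the one to actually copy and run:
helm repo add otterize https://helm.otterize.com
helm repo update
# Shadow mode: enforcement is explicitly disabled, so no network policies or Kafka ACLs are created yet.
# Add the Cloud credentials flags exactly as shown in the Cloud UI's connection guide.
helm install otterize otterize/otterize-kubernetes -n otterize-system --create-namespace \
  --set intentsOperator.operator.enableEnforcement=false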
Optional: check that the demo was deployed...
To see all the pods in the demo:
kubectl get pods -n otterize-ecom-demo
The pods should all be ready and running:
NAME                                     READY   STATUS    RESTARTS      AGE
adservice-65494cbb9d-5lrv6               1/1     Running   0             115s
cartservice-6d84fc45bb-hdtwn             1/1     Running   0             115s
checkoutservice-5599486df-dvj9n          1/1     Running   3 (79s ago)   115s
currencyservice-6d64686d74-lxb7x         1/1     Running   0             115s
emailservice-7c6cbfbbd7-xjxlt            1/1     Running   0             115s
frontend-f9448d7d4-6dmnr                 1/1     Running   0             115s
kafka-0                                  1/1     Running   2 (83s ago)   115s
loadgenerator-7f6987f59-bchgm            1/1     Running   0             114s
orderservice-7ffdbf6df-wzzfd             1/1     Running   0             115s
otterize-ecom-demo-zookeeper-0           1/1     Running   0             115s
paymentservice-86855d78db-zjjfn          1/1     Running   0             115s
productcatalogservice-5944c7f666-2rjc6   1/1     Running   0             115s
recommendationservice-6c8d848498-zm2rm   1/1     Running   0             114s
redis-cart-6b79c5b497-xpms2              1/1     Running   0             115s
shippingservice-85694cb9bd-v54xp         1/1     Running   0             114s
Optional: Browse the demo
- K8s
- Minikube
To get the externally-accessible URL where your demo front end is available, run:
kubectl get service -n otterize-ecom-demo frontend-external | awk '{print $4}'
The result should be similar to (if running on AWS EKS):
a11843075fd254f8099a986467098647-1889474685.us-east-1.elb.amazonaws.com
Go ahead and browse to the URL above to "shop" and get a feel for the demo's behavior. (The URL might take some time to populate across DNS servers. Note that we are accessing an HTTP and not an HTTPS website.)
To access your demo front end locally, forward a local port to the frontend service:
kubectl port-forward -n otterize-ecom-demo service/frontend-external 8080:80 &
The demo is now accessible at:
http://localhost:8080
Go ahead and browse to the URL above to "shop" and get a feel for the demo's behavior.
Seeing the access graph
In the Otterize Cloud UI, your cluster should now show all 3 Otterize OSS operators — the intents operator, network mapper, and credentials operator — as connected, with a green status.
And when you go back to the access graph (and select your cluster from the dropdown, if needed), you should see the following map for the demo running in your cluster:
Each service is shown as a node in the access graph, while the thick lines (edges) connecting the services show access between them, as detected by the network mapper.
If you haven't already run through the Kubernetes cluster mapping tutorial, you might browse just the section about visualizing the cluster via the access graph as you see it now, before returning here.
Try out IBAC with shadow mode
Now let's start to roll out access controls, but remain in shadow mode: no actual enforcement of controls, yet.
We'll declare that the frontend intends to call the recommendationservice.
apiVersion: k8s.otterize.com/v1alpha2
kind: ClientIntents
metadata:
  name: frontend
spec:
  service:
    name: frontend
  calls:
    - name: recommendationservice
We expect this will provide secure access, allowing the intended access from the frontend while protecting the recommendationservice from unintended access.
Apply this intents file with:
kubectl apply -n otterize-ecom-demo -f https://docs.otterize.com/code-examples/shadow-mode/phase-1.yaml
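Optionally, confirm the declaration landed by listing the ClientIntents resources in the namespace (the resource name below assumes the standard plural form of the ClientIntents kind):
kubectl get clientintents -n otterize-ecom-demo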
Look at the access graph again:
The thick green line from frontend to recommendationservice, representing the discovered intent from the network mapper, no longer has an empty center, but rather a solid black center, representing the explicitly declared intent.
Click on that frontend → recommendationservice line:

- We can see the frontend can call the recommendationservice, and will be guaranteed access even once enforcement is turned on.
Click on the recommendationservice itself:

- We can see it's not protected now (we're in shadow mode, and there are no default-deny network policies in place).
- We can also see it would not block any clients once protection is enabled.
- And there is no warning about it remaining unprotected once enforcement is turned on. All is ready for turning on enforcement and protecting this service from any unintended calls.
Declare more intents
Let's add another intent, this time from recommendationservice to productcatalogservice.
apiVersion: k8s.otterize.com/v1alpha2
kind: ClientIntents
metadata:
  name: recommendationservice
spec:
  service:
    name: recommendationservice
  calls:
    - name: productcatalogservice
Apply this intents file with:
kubectl apply -n otterize-ecom-demo -f https://docs.otterize.com/code-examples/shadow-mode/phase-2.yaml
Look at the access graph again:
As before, the line from recommendationservice → productcatalogservice now has a solid black center, and no warnings. That's what we expected.
But two other lines, frontend → productcatalogservice and checkoutservice → productcatalogservice, have turned orange. And the productcatalogservice lock icon has turned red. Why?
Click on one of those orange lines:

- This access is not blocked now — because we're still in shadow mode (otherwise the line would have been red).
- But access would be blocked once enforcement is turned on. To prevent that, we're told to declare and apply an intent for this call.
Click on the productcatalogservice:

- We can see it's not protected now, as before.
- But we can also see it would block any clients once protection is enabled, which is why the lock is red.
- And there is an explicit warning to apply the missing intents from all its clients before turning on enforcement.
Let's add those intents for the frontend and checkoutservice.
- frontend
- checkoutservice
apiVersion: k8s.otterize.com/v1alpha2
kind: ClientIntents
metadata:
  name: frontend
spec:
  service:
    name: frontend
  calls:
    - name: recommendationservice
    - name: productcatalogservice
apiVersion: k8s.otterize.com/v1alpha2
kind: ClientIntents
metadata:
  name: checkoutservice
spec:
  service:
    name: checkoutservice
  calls:
    - name: productcatalogservice
Apply these intents files with:
kubectl apply -n otterize-ecom-demo -f https://docs.otterize.com/code-examples/shadow-mode/phase-3.yaml
Let's go back to the access graph:
All is well again: the productcatalogservice will be protected, and its 3 clients will still have access, after enforcement is turned on.
That's the pattern for rolling out IBAC gradually:
- Pick a service to protect.
- Make sure all its clients declare (and apply) their intents to call it.
- When you're ready, turn on enforcement.
The access graph and shadow mode allow us to gain confidence by showing what would happen and highlighting any problems.
Protect everything easily
Could we somehow automatically bootstrap this for the whole cluster and protect all services, without breaking any intended calls? Yes!
The network mapper keeps track of all attempted calls, after all: those are the discovered intents. If you are confident that all of those calls are intended and appropriate, you can use that information to automatically generate intent declarations and apply them.
Let's use the Otterize CLI (see its installation and reference docs) to export all discovered intents as YAML declarations:
otterize network-mapper export -n otterize-ecom-demo --output-type dir --output intents
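Before applying them, it's worth skimming the generated files to confirm each discovered intent really is intended, for example:
cat intents/*.yaml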
You can apply them using:
kubectl apply -f intents
Or, equivalently, just apply the pre-generated intents file included with these docs:
kubectl apply -n otterize-ecom-demo -f https://docs.otterize.com/code-examples/shadow-mode/all.yaml
Look at the access graph again:
The graph confirms that all (but two) services would be protected, and no intended calls would be blocked, once we apply protection.
What about those last two?
Note that the two leftmost services would not be protected. That's because they have no discovered clients, and hence did not get intents generated and applied for them.
They may not even be callable. But if they are callable yet not currently being called, you may want to protect them (and all others) with a global default-deny network policy. Check the "Global default deny" checkbox at the top of the access graph to see what would happen in that case. (Note that this only informs Otterize that such a policy is in place; it does not create it, so you'll need to apply one yourself.)
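Such a default-deny policy is plain Kubernetes rather than anything Otterize-specific. A minimal sketch of a default-deny ingress policy for the demo namespace (repeat per namespace you want covered):
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: otterize-ecom-demo
spec:
  podSelector: {}   # selects all pods in the namespace
  policyTypes:
    - Ingress       # no ingress rules are listed, so all ingress is denied
Because network policies are additive, pods selected by a more specific Otterize-generated policy still receive the ingress that policy allows.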
Enable enforcement
With the confidence we gained, let's enable enforcement (via network policies) by upgrading your Otterize installation to remove the intentsOperator.operator.enableEnforcement=false flag.
At the top of the access graph, click the Configure cluster button; or, on the clusters page, click the Connection guide → link for your cluster.
Then run the Helm commands shown there, and specifically follow the instructions to install Otterize with enforcement on (not in shadow mode). Namely, omit the following flag in the Helm command:
--set intentsOperator.operator.enableEnforcement=false
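Alternatively, if you prefer to flip the setting in place rather than re-run the full install command, a sketch (assuming the same chart name as before, and reusing your existing values) is to set the flag back to its default of true:
helm upgrade otterize otterize/otterize-kubernetes -n otterize-system --reuse-values \
  --set intentsOperator.operator.enableEnforcement=true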
Let's look at the access graph again:
Note that all (but two) of the lock icons are locked, indicating the services are protected. And all the locks and edges are green, indicating no call attempts (discovered by the network mapper) are being blocked.
From now on, if a client attempts a server call that wasn't covered by one of the declared intents, that would be discovered by the network mapper and show up as (new) discovered intents. Remember that the network mapper discovers attempted access, not just successful access. In this case, a red line would appear from that client to that server, and the lock on that server would turn red: calls from that client are being blocked.
That may be because:
- The calls didn't happen while the network mapper was building the map from which the intents were bootstrapped, in which case you may choose to generate all the intents again, or just create and apply the new ones manually.
- Or... the client maliciously called this server, but is being blocked by the network policies. IBAC has saved the day!
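If you prefer the command line to the access graph for spotting such new discovered intents, you can re-inspect the network mapper's current view at any time; assuming the CLI's list subcommand (alongside the export command used earlier):
otterize network-mapper list -n otterize-ecom-demo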
Optional: see the generated network policies
To list all generated network policies, run:
kubectl get netpol -n otterize-ecom-demo
Let's inspect one of these network policies with:
kubectl get netpol -n otterize-ecom-demo access-to-recommendationservice-from-otterize-ecom-demo -o yaml
The result should be:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: access-to-recommendationservice-from-otterize-ecom-demo
  namespace: otterize-ecom-demo
  ...
spec:
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              intents.otterize.com/namespace-name: otterize-ecom-demo
          podSelector:
            matchLabels:
              intents.otterize.com/access-recommendationservic-otterize-ecom-demo-850ad9: "true"
  podSelector:
    matchLabels:
      intents.otterize.com/server: recommendationservic-otterize-ecom-demo-850ad9
  policyTypes:
    - Ingress
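In other words, ingress to the recommendationservice pods is allowed only from pods in otterize-ecom-demo that carry the matching access label, which the intents operator applies to client pods (here, the frontend) based on their declared intents. To see which pods were granted access, you can select on that label; the hash suffix is generated per cluster, so copy the exact key from your own policy:
kubectl get pods -n otterize-ecom-demo -l intents.otterize.com/access-recommendationservic-otterize-ecom-demo-850ad9=true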
Optional: browse the demo to verify it still works
You can play with the demo in your browser to see it still works as intended, while everything in it is protected against unintended and potentially malicious access.
- K8s
- Minikube
To get the externally-accessible URL where your demo front end is available, run:
kubectl get service -n otterize-ecom-demo frontend-external | awk '{print $4}'
The result should be similar to (if running on AWS EKS):
a11843075fd254f8099a986467098647-1889474685.us-east-1.elb.amazonaws.com
Go ahead and browse to the URL above to "shop" and get a feel for the demo's behavior. (The URL might take some time to populate across DNS servers. Note that we are accessing an HTTP and not an HTTPS website.)
To access your demo front end locally, forward a local port to the frontend service:
kubectl port-forward -n otterize-ecom-demo service/frontend-external 8080:80 &
The demo is now accessible at:
http://localhost:8080
Go ahead and browse to the URL above to "shop" and get a feel for the demo's behavior.
What's next
- Learn how to manage secure access for Kafka using the demo lab tutorial.
Teardown
To remove the deployed demo, run:
kubectl delete -n otterize-ecom-demo -f https://docs.otterize.com/code-examples/shadow-mode/all.yaml
kubectl delete -n otterize-ecom-demo -f https://docs.otterize.com/code-examples/shadow-mode/ecom-demo.yaml
kubectl delete namespace otterize-ecom-demo