
Istio AuthorizationPolicy automation

Otterize automates mTLS-based, HTTP-level pod-to-pod access control with Istio authorization (authZ) policies, within your Kubernetes cluster.

Implementing this kind of access control with Istio is complicated. For example, authorization policies select servers by label, and clients by service account, so both of those need to be created or updated.

To help you avoid manually managing complicated authorization policies per server, Otterize implements intent-based access control (IBAC). You just declare what calls the client pods intend to make, and everything is automatically wired together so only intended calls are allowed.

In this tutorial, we will:

  • Deploy an Istio demo application with two client pods and one server pod.
  • Declare that the first client intends to call the server with a specific HTTP path and method.
  • See that an Istio authorization policy was autogenerated to allow just that, and to block the (undeclared) calls from the other client.


Prepare a Kubernetes cluster

Before you start, you'll need a Kubernetes cluster. Having a cluster with a CNI that supports NetworkPolicies isn't required for this tutorial, but is recommended so that your cluster works with other tutorials.

Below are instructions for setting up a Kubernetes cluster with network policies. If you don't have a cluster already, we recommend starting out with a Minikube cluster.

If you don't have the Minikube CLI, first install it.

Then start your Minikube cluster with Calico, in order to enforce network policies.

minikube start --cpus=4 --memory 4096 --disk-size 32g --cni=calico

The increased CPU, memory and disk resource allocations are required to be able to deploy the ecommerce app used in the visual tutorials successfully.

You can now install (or reinstall) Otterize in your cluster, and optionally connect to Otterize Cloud. Connecting to Cloud lets you see what's happening visually in your browser, through the "access graph".

So either forego browser visualization and:

Install Otterize in your cluster, without Otterize Cloud

You'll need Helm installed on your machine to install Otterize as follows:

helm repo add otterize https://helm.otterize.com
helm repo update
helm install otterize otterize/otterize-kubernetes -n otterize-system --create-namespace \
--set networkMapper.istiowatcher.enable=true \
--set intentsOperator.operator.enableNetworkPolicyCreation=false
The last flag disables network policy creation; this tutorial relies on Istio authorization policies for enforcement instead.

This chart is a bundle of the Otterize intents operator, the Otterize credentials operator, and the Otterize network mapper. Initial deployment may take a couple of minutes. You can add the --wait flag for Helm to wait for deployment to complete and all pods to be Ready, or manually watch for all pods to be Ready using kubectl get pods -n otterize-system -w.

After all the pods are ready you should see the following (or similar) in your terminal when you run kubectl get pods -n otterize-system:

NAME                                                       READY   STATUS    RESTARTS   AGE
credentials-operator-controller-manager-6c56fcfcfb-vg6m9   2/2     Running   0          9s
intents-operator-controller-manager-65bb6d4b88-bp9pf       2/2     Running   0          9s
otterize-network-mapper-779fffd959-twjqd                   1/1     Running   0          9s
otterize-network-sniffer-65mjt                             1/1     Running   0          9s
otterize-watcher-b9bf87bcd-276nt                           1/1     Running   0          9s

Or choose to include browser visualization and:

Install Otterize in your cluster, with Otterize Cloud

Create an Otterize Cloud account

If you don't already have an account, browse to the Otterize Cloud app to set one up.

If someone in your team has already created an org in Otterize Cloud, and invited you (using your email address), you may see an invitation to accept.

Otherwise, you'll create a new org, which you can later rename, and invite your teammates to join you there.

Install Otterize OSS, connected to Otterize Cloud

If no Kubernetes clusters are connected to your account, click the "connect your cluster" button to:

  1. Create a Cloud cluster object, specifying its name and the name of an environment to which all namespaces in that cluster will belong, by default.
  2. Connect it with your actual Kubernetes cluster, by clicking on the "Connection guide" link and running the Helm commands shown there. Follow the instructions to install Otterize with enforcement on (use the toggle to set Enforcement mode: active).
More details, if you're curious

Connecting your cluster simply entails installing Otterize OSS via Helm, using credentials from your account so Otterize OSS can report information needed to visualize the cluster.

The credentials will already be inlined into the Helm command shown in the Cloud UI, so you just need to copy that line and run it from your shell. If you don't give it the Cloud credentials, Otterize OSS will run fully standalone in your cluster; you just won't have the visualization in Otterize Cloud.

The Helm command shown in the Cloud UI also includes flags to turn off enforcement: Otterize OSS will be running in "shadow mode," meaning that it will show you what would happen if it created network policies to restrict pod-to-pod traffic, and created Kafka ACLs to control access to Kafka topics. While that's useful for gradually rolling out IBAC, for this tutorial we go straight to active enforcement.

Install and configure Istio

Install Istio in the cluster via Helm
helm repo add istio https://istio-release.storage.googleapis.com/charts
helm repo update
helm install istio-base istio/base -n istio-system --create-namespace
helm install istiod istio/istiod -n istio-system --wait
Add HTTP methods and request paths to Istio exported metrics

Apply this configuration in the istio-system namespace, propagating it to all namespaces covered by the mesh.

kubectl apply -n istio-system -f - <<EOF
apiVersion: telemetry.istio.io/v1alpha1
kind: Telemetry
metadata:
  name: mesh-default
  namespace: istio-system
spec:
  accessLogging:
    - providers:
        - name: envoy
  metrics:
    - providers:
        - name: prometheus
      overrides:
        - tagOverrides:
            request_method:
              value: request.method
            request_path:
              value: request.path
EOF

HTTP request paths and methods aren't exported in Envoy's connection metrics by default, but we do want to capture those details when creating the network map. That way we not only have better visibility of the calling patterns, e.g. in the access graph, but we can also use that information to automatically generate fine-grained intents and enforce them with Istio authorization policies.
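With this telemetry configuration in place, the mesh's Prometheus metrics carry the extra HTTP detail. For example, a query along these lines would break traffic down by method and path (the `request_method` and `request_path` label names below assume those are the tagOverrides keys in the applied config; `istio_requests_total` and `destination_app` are standard Istio metric names):

```promql
# Requests reaching nginx, broken down by HTTP method and path
# (label names assume the tagOverrides keys from the telemetry config)
sum by (request_method, request_path) (
  istio_requests_total{destination_app="nginx"}
)
```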

Deploy the two clients and the server

Deploy a simple example consisting of client and other-client calling nginx over HTTP:

kubectl apply -n otterize-tutorial-istio -f

Apply intents

We will now declare that the client intends to call the server at a particular HTTP path using a specific HTTP method.

When the intents YAML is applied, creating a custom resource of type ClientIntents, Otterize will add an Istio authorization policy to allow the intended call (client → server, with the declared path and method) and block all unintended calls (e.g., other-client → server).


You can click on the services or the lines connecting them to see which ClientIntents you need to apply to make the connection go green!

  1. Here is the intents.yaml declaration of the client, which we will apply below:
apiVersion: k8s.otterize.com/v1alpha3
kind: ClientIntents
metadata:
  name: client
  namespace: otterize-tutorial-istio
spec:
  service:
    name: client
  calls:
    - name: nginx
      type: http
      HTTPResources:
        - path: /client-path
          methods: [ GET ]

To apply it, use:

kubectl apply -n otterize-tutorial-istio -f intents.yaml

See it in action

Optional: check deployment status
Check that the client and server pods were deployed
kubectl get pods -n otterize-tutorial-istio

You should see

NAME                            READY   STATUS    RESTARTS   AGE
client-68b775f766-749r4         2/2     Running   0          32s
nginx-c646898-2lq7l             2/2     Running   0          32s
other-client-74cc54f7b5-9rctd   2/2     Running   0          32s

We will now monitor both clients' attempts to call the server, using additional terminal windows, so we can see the effects of our changes in real time.

  1. Open a new terminal window [client] and tail the client log:
kubectl logs -f --tail 1 -n otterize-tutorial-istio deploy/client
Expected output

At this point the client should be able to communicate with the server:

Calling server...
HTTP/1.1 200 OK
hello from /client-path
  2. Open another terminal window [other-client] and tail the other-client log:
kubectl logs -f --tail 1 -n otterize-tutorial-istio deploy/other-client
Expected output

At this point the other client should also be able to communicate with the server:

Calling server...
HTTP/1.1 200 OK
hello from /other-client-path

Keep an eye on the logs being tailed in the [other-client] terminal window, and apply this intents.yaml file in your main terminal window using:

kubectl apply -f intents.yaml

Client intents are the cornerstone of intent-based access control (IBAC).

  3. In the [other-client] terminal, you should now see its calls being blocked, as expected since it didn't declare its intents:

Calling server...
HTTP/1.1 200 OK
hello from /other-client-path # <- before applying the intents file
Calling server... # <- after applying the intents file
curl timed out
  4. And in the [client] terminal you should see that calls go through, as expected since they were declared:
Calling server...
HTTP/1.1 200 OK
hello from /client-path
  5. You should also see that a new Istio authorization policy was created:
kubectl get authorizationpolicies -n otterize-tutorial-istio

This should return:

NAME                                                                AGE
authorization-policy-to-nginx-from-client.otterize-tutorial-istio   6s

If you've attached Otterize OSS to Otterize Cloud, go back to see the access graph in your browser:

Access graph

And upon clicking the green arrow: Access graph

It's now clear what happened:

  1. The server is now protected, and is also blocking some of its clients.
  2. Calls from [client] to [nginx] are declared and therefore allowed (green arrow).
  3. Calls from [other-client] to [nginx] are not declared and therefore blocked (red arrow). Click on the arrow to see what to do about it.

Otterize did its job of both protecting the server and allowing intended access.

What did we accomplish?

  • Controlling access through Istio authorization policies no longer means touching authorization policies at all.

  • The server is now protected, and can be accessed only by clients which declared their intents, authenticated via mTLS connection with specific certificates.

  • Clients simply declare what they need to access with their intents files.

  • The next kubectl apply ensures that authorization policies automatically reflect the most recent intended pod-to-pod access.

Expand to see what happened behind the scenes

Otterize generated a specific Istio authorization policy on the ingress of the pod of the server, allowing the server to be accessed by the pod of the client, based on that client's declared intent. Otterize uses labels to define the authorization policy and associate it with a server in a namespace, and uses service accounts to identify clients, as Istio requires. This happens as follows:

  1. The server's pod is given a label whose value uniquely represents that server. The Istio authorization policy stipulates that it applies to the ingress of server pods with this label.
  2. The client's service account is looked up through its pod, and used in the policy. The authorization policy stipulates that only services with this service account can access the server. In the event that the service account is shared by multiple services, an Event is placed on the ClientIntent to warn about this, which is also picked up as a warning in Otterize Cloud, if connected.
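Putting the two steps above together, the generated policy looks roughly like the sketch below. This is an illustration, not the exact resource Otterize creates: the label key and value are assumptions, and you can inspect the real policy with `kubectl get authorizationpolicies -n otterize-tutorial-istio -o yaml`. The `principals` entry uses Istio's standard SPIFFE-style identity format for the client's service account.

```yaml
# Hypothetical sketch of the generated policy; the actual selector labels
# are managed by Otterize and may differ.
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: authorization-policy-to-nginx-from-client.otterize-tutorial-istio
  namespace: otterize-tutorial-istio
spec:
  selector:
    matchLabels:
      intents.otterize.com/server: nginx-otterize-tutorial-istio  # assumed label
  action: ALLOW
  rules:
    - from:
        - source:
            principals:
              - cluster.local/ns/otterize-tutorial-istio/sa/client  # client's service account
      to:
        - operation:
            methods: ["GET"]
            paths: ["/client-path"]
```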

Otterize saved us from doing all this work: by simply declaring the client's intents in intents.yaml, all the appropriate configuration was managed automatically behind the scenes.

Learn more about Istio authorization policies and Otterize.

Bonus tutorial

Try to create an intents file yourself for client-other, and apply it to allow this other client to call the server.
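If you want to check your work afterwards, a sketch of such an intents file might look like the following, mirroring the schema of the client's intents earlier in this tutorial; the path and method are taken from other-client's logs:

```yaml
apiVersion: k8s.otterize.com/v1alpha3
kind: ClientIntents
metadata:
  name: other-client
  namespace: otterize-tutorial-istio
spec:
  service:
    name: other-client
  calls:
    - name: nginx
      type: http
      HTTPResources:
        - path: /other-client-path
          methods: [ GET ]
```

Once applied, the calls in the [other-client] terminal should start succeeding again.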

What's next


Teardown

To remove Istio and the deployed examples, run:

helm uninstall istio-base -n istio-system
kubectl delete namespace otterize-tutorial-istio