Istio AuthorizationPolicy automation
Otterize automates mTLS-based, HTTP-level pod-to-pod access control with Istio authorization (authZ) policies, within your Kubernetes cluster.
Implementing this kind of access control with Istio is complicated. For example, authorization policies select servers by label, and clients by service account, so both of those need to be created or updated.
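For example, a hand-written policy for a single server might look roughly like the following sketch (the names, namespace, and path here are illustrative and not part of this tutorial):
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: allow-client-to-my-server
  namespace: demo
spec:
  selector:
    matchLabels:
      app: my-server                    # the server is selected by a pod label
  action: ALLOW
  rules:
    - from:
        - source:
            # the client is identified by its service account (via its mTLS identity)
            principals: ["cluster.local/ns/demo/sa/my-client"]
      to:
        - operation:
            methods: ["GET"]
            paths: ["/some-path"]
Keeping such policies, labels, and service accounts in sync by hand for every client-server pair is exactly the work Otterize automates.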
To help you avoid manually managing complicated authorization policies per server, Otterize implements intent-based access control (IBAC). You just declare what calls the client pods intend to make, and everything is automatically wired together so only intended calls are allowed.
In this tutorial, we will:
- Deploy an Istio demo application with two client pods and one server pod.
- Declare that the first client intends to call the server with a specific HTTP path and method.
- See that an Istio authorization policy was autogenerated to allow just that, and to block the (undeclared) calls from the other client.
Prerequisites
Prepare a Kubernetes cluster
Before you start, you'll need a Kubernetes cluster. Having a cluster with a CNI that supports NetworkPolicies isn't required for this tutorial, but is recommended so that your cluster works with other tutorials.
Below are instructions for setting up a Kubernetes cluster with network policies. If you don't have a cluster already, we recommend starting out with a Minikube cluster.
- Minikube
- Google GKE
- AWS EKS
- Azure AKS
If you don't have the Minikube CLI, first install it.
Then start your Minikube cluster with Calico, in order to enforce network policies.
minikube start --cpus=4 --memory 4096 --disk-size 32g --cni=calico
The increased CPU, memory and disk resource allocations are required to be able to deploy the ecommerce app used in the visual tutorials successfully.
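Optionally, before moving on, you can check that the Calico pods are up. The namespace and label below are the standard Calico manifest defaults, so they may differ if your setup customizes them:
kubectl get pods -n kube-system -l k8s-app=calico-node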
- gcloud CLI
- Console
To use the gcloud CLI for this tutorial, first install and then initialize it.
To enable network policy enforcement when creating a new cluster:
Run the following command:
gcloud container clusters create CLUSTER_NAME --enable-network-policy --zone=ZONE
(Replace CLUSTER_NAME with the name of the new cluster and ZONE with your zone.)
To enable network policy enforcement for an existing cluster, perform the following tasks:
Run the following command to enable the add-on:
gcloud container clusters update CLUSTER_NAME --update-addons=NetworkPolicy=ENABLED
(Replace CLUSTER_NAME with the name of the cluster.)
Then enable network policy enforcement on your cluster, re-creating your cluster's node pools with network policy enforcement enabled:
gcloud container clusters update CLUSTER_NAME --enable-network-policy
(Replace CLUSTER_NAME with the name of the cluster.)
To enable network policy enforcement when creating a new cluster:
Go to the Google Kubernetes Engine page in the Google Cloud console.
On the Google Kubernetes Engine page, click Create.
Configure your cluster as desired.
From the navigation pane, under Cluster, click Networking.
Select the checkbox to Enable network policy.
Click Create.
To enable network policy enforcement for an existing cluster:
Go to the Google Kubernetes Engine page in the Google Cloud console.
In the cluster list, click the name of the cluster you want to modify.
Under Networking, in the Network policy field, click Edit network policy.
Select the checkbox to Enable network policy for master and click Save Changes.
Wait for your changes to apply, and then click Edit network policy again.
Select the checkbox to Enable network policy for nodes.
Click Save Changes.
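Either way, you can optionally confirm that network policy enforcement is enabled on the cluster; this sketch assumes the networkPolicy field exposed by gcloud container clusters describe:
gcloud container clusters describe CLUSTER_NAME --zone=ZONE --format="value(networkPolicy.enabled)"
A value of True indicates the add-on is enabled.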
Starting August 29, 2023, you can configure the built-in VPC CNI add-on to enable network policy support.
To spin up a new cluster, use the following eksctl ClusterConfig, and save it to a file called cluster.yaml.
Spin up the cluster using eksctl create cluster -f cluster.yaml. This will spin up a cluster called network-policy-demo in us-west-2.
The important bit is the configuration for the VPC CNI addon:
configurationValues: |-
  enableNetworkPolicy: "true"
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: network-policy-demo
  version: "1.27"
  region: us-west-2
iam:
  withOIDC: true
vpc:
  clusterEndpoints:
    publicAccess: true
    privateAccess: true
addons:
  - name: vpc-cni
    version: 1.14.0
    attachPolicyARNs: #optional
      - arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy
    configurationValues: |-
      enableNetworkPolicy: "true"
  - name: coredns
  - name: kube-proxy
managedNodeGroups:
  - name: x86-al2-on-demand
    amiFamily: AmazonLinux2
    instanceTypes: [ "m6i.xlarge", "m6a.xlarge" ]
    minSize: 0
    desiredCapacity: 2
    maxSize: 6
    privateNetworking: true
    disableIMDSv1: true
    volumeSize: 100
    volumeType: gp3
    volumeEncrypted: true
    tags:
      team: "eks"
For guides that deploy the larger set of services, Kafka and ZooKeeper are also deployed, and you will also need the EBS CSI driver to accommodate their storage needs. Follow the AWS guide for the EBS CSI add-on to do so. If you're not using the VPC CNI, you can set up the Calico network policy controller using the following instructions:
Visit the official documentation, or follow the instructions below:
- Spin up an EKS cluster using the console, AWS CLI or eksctl.
- Install Calico for network policy enforcement, without replacing the CNI:
kubectl apply -f https://raw.githubusercontent.com/aws/amazon-vpc-cni-k8s/v1.12.6/config/master/calico-operator.yaml
kubectl apply -f https://raw.githubusercontent.com/aws/amazon-vpc-cni-k8s/v1.12.6/config/master/calico-crs.yaml
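Once both manifests are applied, the Calico components should come up in the calico-system namespace (namespace name assumed from the operator-based install):
kubectl get pods -n calico-system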
You can set up an AKS cluster using this guide.
For network policy support, no setup is required: Azure AKS comes with a built-in network policy implementation called Azure Network Policy Manager. You can choose whether you'd like to use this option or Calico when you create a cluster.
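For example, if you create the cluster with the Azure CLI, you might enable Azure Network Policy Manager like this (resource group and cluster name are placeholders; use --network-policy calico to pick Calico instead):
az aks create --resource-group RESOURCE_GROUP --name CLUSTER_NAME --network-plugin azure --network-policy azure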
Read more at the official documentation site.
You can now install (or reinstall) Otterize in your cluster, and optionally connect to Otterize Cloud. Connecting to Cloud lets you:
- See what's happening visually in your browser, through the "access graph";
So either forego browser visualization and:
Install Otterize in your cluster, without Otterize Cloud
You'll need Helm installed on your machine to install Otterize as follows:
helm repo add otterize https://helm.otterize.com
helm repo update
helm install otterize otterize/otterize-kubernetes -n otterize-system --create-namespace \
--set networkMapper.istiowatcher.enable=true \
--set intentsOperator.operator.enableNetworkPolicyCreation=false
This chart is a bundle of the Otterize intents operator, the Otterize credentials operator, and the Otterize network mapper.
Initial deployment may take a couple of minutes.
You can add the --wait flag for Helm to wait for deployment to complete and all pods to be Ready, or manually watch for all pods to be Ready using kubectl get pods -n otterize-system -w.
After all the pods are ready you should see the following (or similar) in your terminal when you run kubectl get pods -n otterize-system:
NAME READY STATUS RESTARTS AGE
credentials-operator-controller-manager-6c56fcfcfb-vg6m9 2/2 Running 0 9s
intents-operator-controller-manager-65bb6d4b88-bp9pf 2/2 Running 0 9s
otterize-network-mapper-779fffd959-twjqd 1/1 Running 0 9s
otterize-network-sniffer-65mjt 1/1 Running 0 9s
otterize-watcher-b9bf87bcd-276nt 1/1 Running 0 9s
Or choose to include browser visualization and:
Install Otterize in your cluster, with Otterize Cloud
Create an Otterize Cloud account
If you don't already have an account, browse to https://app.otterize.com to set one up.
If someone in your team has already created an org in Otterize Cloud, and invited you (using your email address), you may see an invitation to accept.
Otherwise, you'll create a new org, which you can later rename, and invite your teammates to join you there.
Install Otterize OSS, connected to Otterize Cloud
If no Kubernetes clusters are connected to your account, click the "connect your cluster" button to:
- Create a Cloud cluster object, specifying its name and the name of an environment to which all namespaces in that cluster will belong, by default.
- Connect it with your actual Kubernetes cluster, by clicking on the "Connection guide →" link and running the Helm commands shown there.
- Follow the instructions to install Otterize with enforcement on (use the toggle to make Enforcement mode: active).
More details, if you're curious
Connecting your cluster simply entails installing Otterize OSS via Helm, using credentials from your account so Otterize OSS can report information needed to visualize the cluster.
The credentials will already be inlined into the Helm command shown in the Cloud UI, so you just need to copy that line and run it from your shell. If you don't give it the Cloud credentials, Otterize OSS will run fully standalone in your cluster — you just won't have the visualization in Otterize Cloud.
The Helm command shown in the Cloud UI also includes flags to turn off enforcement: Otterize OSS will be running in "shadow mode," meaning that it will show you what would happen if it created network policies to restrict pod-to-pod traffic, and created Kafka ACLs to control access to Kafka topics. While that's useful for gradually rolling out IBAC, for this tutorial we go straight to active enforcement.
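For reference only (not needed for this tutorial), shadow mode is toggled through the chart's enforcement values. The key below is an assumption based on the Otterize Helm chart and may differ between chart versions, so check the chart's values.yaml before relying on it:
# Assumed values key for shadow mode (no policies created); verify against the chart's values.yaml
helm install otterize otterize/otterize-kubernetes -n otterize-system --create-namespace \
  --set intentsOperator.operator.enableEnforcement=false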
Install and configure Istio
Install Istio in the cluster via Helm
helm repo add istio https://istio-release.storage.googleapis.com/charts
helm repo update
helm install istio-base istio/base -n istio-system --create-namespace
helm install istiod istio/istiod -n istio-system --wait
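Before continuing, you can verify that istiod is up and running:
kubectl get pods -n istio-system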
Add HTTP methods and request paths to Istio exported metrics
Apply this configuration in the istio-system namespace, propagating it to all namespaces covered by the mesh.
kubectl apply -f https://docs.otterize.com/code-examples/network-mapper/istio-telemetry-enablement.yaml -n istio-system
apiVersion: telemetry.istio.io/v1alpha1
kind: Telemetry
metadata:
  name: mesh-default
  namespace: istio-system
spec:
  accessLogging:
    - providers:
        - name: envoy
  metrics:
    - providers:
        - name: prometheus
      overrides:
        - tagOverrides:
            request_method:
              value: request.method
            request_path:
              value: request.path
HTTP request paths and methods aren't exported in Envoy's connection metrics by default, but we do want to capture those details when creating the network map. That way we not only have better visibility of the calling patterns, e.g. in the access graph, but we can also use that information to automatically generate fine-grained intents and enforce them with Istio authorization policies.
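You can confirm the mesh-wide telemetry configuration is in place with:
kubectl get telemetries.telemetry.istio.io -n istio-system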
Deploy the two clients and the server
Deploy a simple example consisting of client and other-client calling nginx over HTTP:
kubectl apply -n otterize-tutorial-istio -f https://docs.otterize.com/code-examples/istio-authorization-policies/all.yaml
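If the apply fails because the otterize-tutorial-istio namespace doesn't exist yet, or if the pods come up without Envoy sidecars (1/1 instead of 2/2 READY), create the namespace and enable sidecar injection before re-applying. Whether the manifest already does this is an assumption you should verify:
kubectl create namespace otterize-tutorial-istio
kubectl label namespace otterize-tutorial-istio istio-injection=enabled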
Apply intents
We will now declare that the client intends to call the server at a particular HTTP path using a specific HTTP method.
When the intents YAML is applied, creating a custom resource of type ClientIntents, Otterize will add an Istio authorization policy to allow the intended call (client → server with the declared path and method) and block all unintended calls (e.g., client-other → server).
You can click on the services or the lines connecting them to see which ClientIntents you need to apply to make the connection go green!
- Here is the intents.yaml declaration of the client, which we will apply below:
apiVersion: k8s.otterize.com/v1alpha3
kind: ClientIntents
metadata:
  name: client
  namespace: otterize-tutorial-istio
spec:
  service:
    name: client
  calls:
    - name: nginx
      type: http
      HTTPResources:
        - path: /client-path
          methods: [ GET ]
To apply it, use:
kubectl apply -n otterize-tutorial-istio -f https://docs.otterize.com/code-examples/istio-authorization-policies/intents.yaml
See it in action
Optional: check deployment status
kubectl get pods -n otterize-tutorial-istio
You should see:
NAME READY STATUS RESTARTS AGE
client-68b775f766-749r4 2/2 Running 0 32s
nginx-c646898-2lq7l 2/2 Running 0 32s
other-client-74cc54f7b5-9rctd 2/2 Running 0 32s
Let's monitor both clients' attempts to call the server with additional terminal windows, so we can see the effects of our changes in real time.
- Open a new terminal window [client] and tail the client log:
kubectl logs -f --tail 1 -n otterize-tutorial-istio deploy/client
Expected output
At this point the client should be able to communicate with the server:
Calling server...
HTTP/1.1 200 OK
...
hello from /client-path
- Open another terminal window [client-other] and tail the other-client log:
kubectl logs -f --tail 1 -n otterize-tutorial-istio deploy/other-client
Expected output
At this point the other client should also be able to communicate with the server:
Calling server...
HTTP/1.1 200 OK
...
hello from /other-client-path
Keep an eye on the logs being tailed in the [client-other] terminal window, and apply this intents.yaml file in your main terminal window using:
kubectl apply -f https://docs.otterize.com/code-examples/istio-authorization-policies/intents.yaml
Client intents are the cornerstone of intent-based access control (IBAC).
- In the [client-other] terminal you should see that calls now time out, as expected since it didn't declare its intents:
Calling server...
HTTP/1.1 200 OK
...
hello from /other-client-path # <- before applying the intents file
Calling server... # <- after applying the intents file
curl timed out
Terminated
- And in the [client] terminal you should see that calls go through, as expected since they were declared:
Calling server...
HTTP/1.1 200 OK
...
hello from /client-path
- You should also see that a new Istio authorization policy was created:
kubectl get authorizationpolicies.security.istio.io -n otterize-tutorial-istio
This should return:
NAME AGE
authorization-policy-to-nginx-from-client.otterize-tutorial-istio 6s
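You can also confirm that the ClientIntents resource itself was created:
kubectl get clientintents -n otterize-tutorial-istio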
If you've attached Otterize OSS to Otterize Cloud, go back to see the access graph in your browser:
And upon clicking the green arrow:
It's now clear what happened:
- The server is now protected, and is also blocking some of its clients.
- Calls from [client] → [nginx] are declared and therefore allowed (green arrow).
- Calls from [client-other] → [nginx] are not declared and therefore blocked (red arrow). Click on the arrow to see what to do about it.
Otterize did its job of both protecting the server and allowing intended access.
What did we accomplish?
Controlling access through Istio authorization policies no longer means touching authorization policies at all.
The server is now protected, and can be accessed only by clients which declared their intents, authenticated via mTLS connection with specific certificates.
Clients simply declare what they need to access with their intents files.
The next kubectl apply ensures that authorization policies automatically reflect the most recent intended pod-to-pod access.
Expand to see what happened behind the scenes
Otterize generated a specific Istio authorization policy on the ingress of the pod of the server, allowing the server to be accessed by the pod of the client, based on that client's declared intent. Otterize uses labels to define the authorization policy and associate it with a server in a namespace, and uses service accounts to identify clients, as Istio requires. This happens as follows:
- The server's pod is given a label intents.otterize.com/server whose value uniquely represents that server. The Istio authorization policy stipulates that it applies to the ingress of server pods with this label.
- The client's service account is looked up through its pod, and used in the policy. The authorization policy stipulates that only services with this service account can access the server. In the event that the service account is shared by multiple services, an Event is placed on the ClientIntents to warn about this, which is also picked up as a warning in Otterize Cloud, if connected.
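To make this concrete, the generated policy takes roughly the following shape. The label value, service account, and exact fields below are illustrative assumptions; inspect the real object with kubectl get authorizationpolicies.security.istio.io -n otterize-tutorial-istio -o yaml to see what Otterize actually created:
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: authorization-policy-to-nginx-from-client.otterize-tutorial-istio
  namespace: otterize-tutorial-istio
spec:
  selector:
    matchLabels:
      intents.otterize.com/server: <value-uniquely-representing-nginx>   # illustrative placeholder
  action: ALLOW
  rules:
    - from:
        - source:
            principals: ["cluster.local/ns/otterize-tutorial-istio/sa/<client-service-account>"]   # illustrative placeholder
      to:
        - operation:
            methods: ["GET"]
            paths: ["/client-path"]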
Otterize saved us from doing all this work: by simply declaring the client's intents in intents.yaml, all the appropriate configuration was managed automatically behind the scenes.
Try to create an intents file yourself for client-other, and apply it to allow this other client to call the server.
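One possible solution sketch, assuming other-client should be allowed to keep making the GET calls to /other-client-path seen in its log output earlier:
apiVersion: k8s.otterize.com/v1alpha3
kind: ClientIntents
metadata:
  name: other-client
  namespace: otterize-tutorial-istio
spec:
  service:
    name: other-client
  calls:
    - name: nginx
      type: http
      HTTPResources:
        - path: /other-client-path
          methods: [ GET ]
Save it to a file and apply it with kubectl apply, then watch the [client-other] terminal: its calls should start succeeding again.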
What's next
- Get started with the Otterize network mapper for Istio to help you bootstrap intents files with HTTP resources for use in intent-based access control (IBAC).
Teardown
To remove Istio and the deployed examples run:
helm uninstall istiod istio-base -n istio-system
kubectl delete namespace otterize-tutorial-istio