Istio HTTP-level access mapping
With its Istio watcher enabled, the network mapper allows you to map pod-to-pod Istio traffic within your K8s cluster.
In this tutorial, we will:
- Install the Istio service mesh in our cluster.
- Deploy 2 clients calling a server (in this case, an nginx reverse-proxy) over HTTP using different paths.
- Map their calls using the network mapper and its Istio watcher component.
Prerequisites
Prepare a Kubernetes cluster
Before you start, you'll need a Kubernetes cluster. Having a cluster with a CNI that supports NetworkPolicies isn't required for this tutorial, but is recommended so that your cluster works with other tutorials.
Below are instructions for setting up a Kubernetes cluster with network policies. If you don't have a cluster already, we recommend starting out with a Minikube cluster.
- Minikube
- Google GKE
- AWS EKS
- Azure AKS
If you don't have the Minikube CLI, first install it.
Then start your Minikube cluster with Calico, in order to enforce network policies.
minikube start --cpus=4 --memory 4096 --disk-size 32g --cni=calico
The increased CPU, memory and disk resource allocations are required to successfully deploy the ecommerce app used in the visual tutorials.
- gcloud CLI
- Console
To use the gcloud CLI for this tutorial, first install and then initialize it.
To enable network policy enforcement when creating a new cluster:
Run the following command:
gcloud container clusters create CLUSTER_NAME --enable-network-policy --zone=ZONE
(Replace CLUSTER_NAME with the name of the new cluster and ZONE with your zone.)
To enable network policy enforcement for an existing cluster, perform the following tasks:
Run the following command to enable the add-on:
gcloud container clusters update CLUSTER_NAME --update-addons=NetworkPolicy=ENABLED
(Replace CLUSTER_NAME with the name of the cluster.)
Then run the following command to enable network policy enforcement on your cluster, which re-creates your cluster's node pools with network policy enforcement enabled:
gcloud container clusters update CLUSTER_NAME --enable-network-policy
(Replace CLUSTER_NAME with the name of the cluster.)
To enable network policy enforcement when creating a new cluster:
Go to the Google Kubernetes Engine page in the Google Cloud console.
On the Google Kubernetes Engine page, click Create.
Configure your cluster as desired.
From the navigation pane, under Cluster, click Networking.
Select the checkbox to Enable network policy.
Click Create.
To enable network policy enforcement for an existing cluster:
Go to the Google Kubernetes Engine page in the Google Cloud console.
In the cluster list, click the name of the cluster you want to modify.
Under Networking, in the Network policy field, click Edit network policy.
Select the checkbox to Enable network policy for master and click Save Changes.
Wait for your changes to apply, and then click Edit network policy again.
Select the checkbox to Enable network policy for nodes.
Click Save Changes.
Starting August 29, 2023, you can configure the built-in VPC CNI add-on to enable network policy support.
To spin up a new cluster, use the following eksctl ClusterConfig, and save it to a file called cluster.yaml.
Spin up the cluster using eksctl create cluster -f cluster.yaml. This will spin up a cluster called network-policy-demo in us-west-2.
The important bit is the configuration for the VPC CNI add-on:
configurationValues: |-
  enableNetworkPolicy: "true"
The full cluster.yaml:
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: network-policy-demo
  version: "1.27"
  region: us-west-2
iam:
  withOIDC: true
vpc:
  clusterEndpoints:
    publicAccess: true
    privateAccess: true
addons:
  - name: vpc-cni
    version: 1.14.0
    attachPolicyARNs: #optional
      - arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy
    configurationValues: |-
      enableNetworkPolicy: "true"
  - name: coredns
  - name: kube-proxy
managedNodeGroups:
  - name: x86-al2-on-demand
    amiFamily: AmazonLinux2
    instanceTypes: [ "m6i.xlarge", "m6a.xlarge" ]
    minSize: 0
    desiredCapacity: 2
    maxSize: 6
    privateNetworking: true
    disableIMDSv1: true
    volumeSize: 100
    volumeType: gp3
    volumeEncrypted: true
    tags:
      team: "eks"
Guides that deploy the larger set of services also deploy Kafka and ZooKeeper, so you will need the EBS CSI driver to accommodate their storage needs; follow the AWS guide for the EBS CSI add-on to set it up. If you're not using the VPC CNI, you can instead set up the Calico network policy controller using the following instructions:
Visit the official documentation, or follow the instructions below:
- Spin up an EKS cluster using the console, AWS CLI or eksctl.
- Install Calico for network policy enforcement, without replacing the CNI:
kubectl apply -f https://raw.githubusercontent.com/aws/amazon-vpc-cni-k8s/v1.12.6/config/master/calico-operator.yaml
kubectl apply -f https://raw.githubusercontent.com/aws/amazon-vpc-cni-k8s/v1.12.6/config/master/calico-crs.yaml
You can set up an AKS cluster using this guide.
For network policy support, no setup is required: Azure AKS comes with a built-in network policy implementation called Azure Network Policy Manager. You can choose whether you'd like to use this option or Calico when you create a cluster.
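For example, if you create the cluster with the Azure CLI, a minimal sketch of enabling the built-in Azure Network Policy Manager looks like the following (the resource group and cluster name are placeholders, and other options such as node count and size are omitted):
az aks create --resource-group <RESOURCE_GROUP> --name <CLUSTER_NAME> --network-plugin azure --network-policy azure
To use Calico instead, pass --network-policy calico.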
Read more at the official documentation site.
You can now install Otterize in your cluster, and optionally connect to Otterize Cloud. Connecting to Cloud lets you:
- See what's happening visually in your browser, through the "access graph";
So either forego browser visualization and:
Install the Otterize network mapper in your cluster with the Istio watcher component enabled, and without Otterize Cloud
You'll need Helm installed on your machine to install Otterize as follows:
helm repo add otterize https://helm.otterize.com
helm repo update
helm install otterize otterize/otterize-kubernetes -n otterize-system --create-namespace --set networkMapper.istiowatcher.enable=true
This chart is a bundle of the Otterize intents operator, the Otterize credentials operator, and the Otterize network mapper with the Istio watcher component enabled.
Initial deployment may take a couple of minutes.
You can add the --wait flag for Helm to wait for deployment to complete and all pods to be Ready, or manually watch for all pods to be Ready using kubectl get pods -n otterize-system -w.
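For example, a sketch of the same install command with --wait added (the timeout value here is illustrative):
helm install otterize otterize/otterize-kubernetes -n otterize-system --create-namespace --set networkMapper.istiowatcher.enable=true --wait --timeout 10m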
After all the pods are ready, you should see the following (or similar) in your terminal when you run kubectl get pods -n otterize-system:
NAME                                                        READY   STATUS    RESTARTS   AGE
credentials-operator-controller-manager-6c56fcfcfb-vg6m9    2/2     Running   0          9s
intents-operator-controller-manager-65bb6d4b88-bp9pf        2/2     Running   0          9s
otterize-istio-watcher-5c664987d-2mvw9                      1/1     Running   0          9s
otterize-network-mapper-779fffd959-twjqd                    1/1     Running   0          9s
otterize-network-sniffer-65mjt                              1/1     Running   0          9s
otterize-watcher-b9bf87bcd-276nt                            1/1     Running   0          9s
Or choose to include browser visualization and:
Install Otterize in your cluster, with Otterize Cloud
Create an Otterize Cloud account
If you don't already have an account, browse to https://app.otterize.com to set one up.
If someone in your team has already created an org in Otterize Cloud, and invited you (using your email address), you may see an invitation to accept.
Otherwise, you'll create a new org, which you can later rename, and invite your teammates to join you there.
Install Otterize OSS, connected to Otterize Cloud
If no Kubernetes clusters are connected to your account, click the "connect your cluster" button to:
- Create a Cloud cluster object, specifying its name and the name of an environment to which all namespaces in that cluster will belong, by default.
- Connect it with your actual Kubernetes cluster, by clicking on the "Connection guide →" link and running the Helm commands shown there.
- Follow the instructions to install Otterize with enforcement on (use the toggle to make Enforcement mode: active).
- Add the following flag to the Helm command:
--set networkMapper.istiowatcher.enable=true
More details, if you're curious
Connecting your cluster simply entails installing Otterize OSS via Helm, using credentials from your account so Otterize OSS can report information needed to visualize the cluster.
The credentials will already be inlined into the Helm command shown in the Cloud UI, so you just need to copy that line and run it from your shell. If you don't give it the Cloud credentials, Otterize OSS will run fully standalone in your cluster — you just won't have the visualization in Otterize Cloud.
The Helm command shown in the Cloud UI also includes flags to turn off enforcement: Otterize OSS will be running in "shadow mode," meaning that it will show you what would happen if it were to create/update your access controls (Kubernetes network policies, Kafka ACLs, Istio authorization policies, etc.). While that's useful for gradually rolling out IBAC, for this tutorial we go straight to active enforcement.
Finally, you'll need to install the Otterize CLI (if you haven't already) to interact with the network mapper:
Install the Otterize CLI
- Mac
- Windows
- Linux
- Brew
- Apple Silicon
- Intel 64-bit
brew install otterize/otterize/otterize-cli
curl -LJO https://get.otterize.com/otterize-cli/v1.0.2/otterize_macOS_arm64_notarized.zip
tar xf otterize_macOS_arm64_notarized.zip
sudo cp otterize /usr/local/bin # optionally move to PATH
curl -LJO https://get.otterize.com/otterize-cli/v1.0.2/otterize_macOS_x86_64_notarized.zip
tar xf otterize_macOS_x86_64_notarized.zip
sudo cp otterize /usr/local/bin # optionally move to PATH
- Scoop
- 64-bit
scoop bucket add otterize-cli https://github.com/otterize/scoop-otterize-cli
scoop update
scoop install otterize-cli
Invoke-WebRequest -Uri https://get.otterize.com/otterize-cli/v1.0.2/otterize_windows_x86_64.zip -OutFile otterize_Windows_x86_64.zip
Expand-Archive otterize_Windows_x86_64.zip -DestinationPath .
# optionally move to PATH
- 64-bit
wget https://get.otterize.com/otterize-cli/v1.0.2/otterize_linux_x86_64.tar.gz
tar xf otterize_linux_x86_64.tar.gz
sudo cp otterize /usr/local/bin # optionally move to PATH
More variants are available at the GitHub Releases page.
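To confirm the CLI is installed and on your PATH, you can print its version (if your CLI build differs, otterize --help lists the available commands):
otterize version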
Install and configure Istio
Install Istio in the cluster via Helm
helm repo add istio https://istio-release.storage.googleapis.com/charts
helm repo update
helm install istio-base istio/base -n istio-system --create-namespace
helm install istiod istio/istiod -n istio-system --wait
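Before continuing, you can verify that the Istio control plane is up; the istiod pod should be Running:
kubectl get pods -n istio-system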
Create a namespace for our demo application and label it for Istio injection
kubectl create namespace otterize-tutorial-istio-mapping
kubectl label namespace otterize-tutorial-istio-mapping istio-injection=enabled
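To double-check that the injection label was applied, you can list the namespace's labels:
kubectl get namespace otterize-tutorial-istio-mapping --show-labels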
Add HTTP methods and request paths to Istio exported metrics
Apply this configuration in the istio-system namespace, propagating it to all namespaces covered by the mesh.
kubectl apply -f https://docs.otterize.com/code-examples/network-mapper/istio-telemetry-enablement.yaml -n istio-system
apiVersion: telemetry.istio.io/v1alpha1
kind: Telemetry
metadata:
  name: mesh-default
  namespace: istio-system
spec:
  accessLogging:
    - providers:
        - name: envoy
  metrics:
    - providers:
        - name: prometheus
      overrides:
        - tagOverrides:
            request_method:
              value: request.method
            request_path:
              value: request.path
HTTP request paths and methods aren't exported in Envoy's connection metrics by default, but we do want to capture those details when creating the network map. That way we not only have better visibility of the calling patterns, e.g. in the access graph, but we can also use that information to automatically generate fine-grained intents and enforce them with Istio authorization policies.
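If you'd like to spot-check that these tags are actually recorded, one option (once a workload with an injected sidecar is running, e.g. the demo deployed in the next section) is to query a sidecar's Envoy Prometheus endpoint on port 15090 and look for request_method and request_path labels on istio_requests_total. A sketch, assuming the demo's client Deployment is named client and that the (non-distroless) istio-proxy image includes curl:
kubectl exec -n otterize-tutorial-istio-mapping deploy/client -c istio-proxy -- curl -s localhost:15090/stats/prometheus | grep istio_requests_total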
Deploy demo to simulate traffic
Let's add services and traffic to the cluster and see how the network mapper builds the map.
Deploy the following simple example — client, client2 and nginx, communicating over HTTP:
kubectl apply -n otterize-tutorial-istio-mapping -f https://docs.otterize.com/code-examples/network-mapper/istio.yaml
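Once the pods come up, each one should report two ready containers (2/2): the application container plus the injected Envoy sidecar. You can confirm this with:
kubectl get pods -n otterize-tutorial-istio-mapping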
Map the cluster
The Istio watcher component of the network mapper starts querying Envoy sidecars for HTTP connections and builds an in-memory network map as soon as it's installed. The Otterize CLI allows you to interact with the network mapper to grab a snapshot of current mapped traffic, reset its state, and more.
For a complete list of the CLI capabilities read the CLI command reference.
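For instance, if you want to discard what has been mapped so far and start building a fresh map (say, after redeploying the demo), you can reset the network mapper's state:
otterize network-mapper reset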
Extract and see the network map
You can get the network map by calling the CLI list or export commands.
The export output format can be either yaml (Kubernetes client intents files) or json.
The following shows the CLI output filtered for the namespace (otterize-tutorial-istio-mapping) of the example above.
Note the HTTP-level details in the list and export results. For example, the exported client intents YAML files contain specific path and method information for each intended call.
- Image
- List
- Export as intents
- Export as JSON
Visualize the overall pod-to-pod network map built up so far, as an image. Note that this image is actually built from information from the network mapper's sniffer (based on DNS requests and open TCP connections), and does not require the Istio watcher (which only supplies fine-grained, HTTP-level information). To retrieve HTTP-level information, use the list or export commands.
otterize network-mapper visualize -n otterize-tutorial-istio-mapping -o otterize-tutorial-istio-map.png
For the simple example above, you should get an image file that looks like:
List the pod-to-pod network map built up so far:
otterize network-mapper list -n otterize-tutorial-istio-mapping
For the simple example above, you should see:
client in namespace otterize-tutorial-istio-mapping calls:
  - nginx in namespace otterize-tutorial-istio-mapping
    - path /client-path, methods: [GET]
client2 in namespace otterize-tutorial-istio-mapping calls:
  - nginx in namespace otterize-tutorial-istio-mapping
    - path /client2-path, methods: [POST]
Repeating lines showing calls to common services like prometheus or jaeger were omitted for simplicity.
Export as YAML client intents (the default format) the pod-to-pod network map built up so far:
otterize network-mapper export -n otterize-tutorial-istio-mapping
For the simple example above, you should see (concatenated into one YAML file):
apiVersion: k8s.otterize.com/v1alpha3
kind: ClientIntents
metadata:
  name: client
  namespace: otterize-tutorial-istio-mapping
spec:
  service:
    name: client
  calls:
    - name: nginx
      type: http
      HTTPResources:
        - path: /client-path
          methods: [GET]
---
apiVersion: k8s.otterize.com/v1alpha3
kind: ClientIntents
metadata:
  name: client2
  namespace: otterize-tutorial-istio-mapping
spec:
  service:
    name: client2
  calls:
    - name: nginx
      type: http
      HTTPResources:
        - path: /client2-path
          methods: [POST]
Export as JSON the pod-to-pod network map built up so far:
otterize network-mapper export -n otterize-tutorial-istio-mapping --format json
For the simple example above, you should see:
[
  {
    "kind": "ClientIntents",
    "apiVersion": "k8s.otterize.com/v1alpha3",
    "metadata": {
      "name": "client",
      "namespace": "otterize-tutorial-istio-mapping"
    },
    "spec": {
      "service": {
        "name": "client"
      },
      "calls": [
        {
          "name": "nginx",
          "type": "http",
          "HTTPResources": [
            {
              "path": "/client-path",
              "methods": ["GET"]
            }
          ]
        }
      ]
    }
  },
  {
    "kind": "ClientIntents",
    "apiVersion": "k8s.otterize.com/v1alpha3",
    "metadata": {
      "name": "client2",
      "namespace": "otterize-tutorial-istio-mapping"
    },
    "spec": {
      "service": {
        "name": "client2"
      },
      "calls": [
        {
          "name": "nginx",
          "type": "http",
          "HTTPResources": [
            {
              "path": "/client2-path",
              "methods": ["POST"]
            }
          ]
        }
      ]
    }
  }
]
Show the access graph in Otterize Cloud
If you've attached Otterize OSS to Otterize Cloud, you can now also see the access graph in your browser:
Note, for example, that the client → nginx arrow is yellow. Clicking on it shows:

The access graph reveals several types of information and insights, such as:
- Seeing the network map for different clusters, seeing the subset of the map for a given namespace, or even — according to how you've mapped namespaces to environments — seeing the subset of the map for a specific environment.
- Revealing detailed HTTP-level information about the calls being made (e.g. GET to /client-path), as reported by the Istio watcher.
- Filtering the map to include only traffic seen since some date in the past. That way you can eliminate calls that are no longer relevant, without having to reset the network mapper and start building a new map.
- Showing more specifics about access, if the intents operator is also connected: understand which services are protected or would be protected, and which client calls are being blocked or would be blocked. We'll see more of that in the Istio AuthorizationPolicy tutorial.
What's next
The network mapper is a great way to bootstrap IBAC. It generates client intents files that reflect the current topology of your services; each client team can then use those files to get easy and secure access to the services they need, and as their needs evolve, they need only evolve the intents files. We'll see more of that below.
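For example, one way to bootstrap is to pipe the exported client intents straight into the cluster, since the intents operator (part of the otterize-kubernetes bundle installed above) acts on ClientIntents resources. A sketch for this tutorial's namespace:
otterize network-mapper export -n otterize-tutorial-istio-mapping | kubectl apply -f -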
Where to go next?
- Learn how to roll out Istio authorization-policy-based access control using intents.
- If you haven't already, see the automate network policies tutorial.
- Or go to the next tutorial to automate secure access for Kafka.
Teardown
To remove Istio and the deployed examples, run:
helm uninstall istio-base -n istio-system
helm uninstall istiod -n istio-system
kubectl delete namespace otterize-tutorial-istio-mapping
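If you also want to remove Otterize itself, installed earlier in this tutorial as the otterize release in the otterize-system namespace, you can additionally run:
helm uninstall otterize -n otterize-system
kubectl delete namespace otterize-system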