
Network mapping a Kubernetes cluster

The network mapper allows you to map pod-to-pod traffic within your K8s cluster.

In this tutorial, we will:

  • Deploy a server, and two clients calling it.
  • Map their communication using the network mapper.

Prerequisites

Prepare a Kubernetes cluster

Before you start, you'll need a Kubernetes cluster. Having a cluster with a CNI that supports NetworkPolicies isn't required for this tutorial, but is recommended so that your cluster works with other tutorials.

Below are instructions for setting up a Kubernetes cluster with network policies. If you don't have a cluster already, we recommend starting out with a Minikube cluster.

If you don't have the Minikube CLI, first install it.
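
For example, on macOS you can install it with Homebrew (see the Minikube installation docs for other platforms and package managers):

brew install minikube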

Then start your Minikube cluster with Calico, in order to enforce network policies.

minikube start --cpus=4 --memory 4096 --disk-size 32g --cni=calico

The increased CPU, memory, and disk allocations are needed to successfully deploy the ecommerce app used in the visual tutorials.
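
Once Minikube is up, you can check that Calico is running. The sketch below searches all namespaces, since the namespace Calico lands in can vary by version:

# Calico's node agents carry the standard k8s-app=calico-node label
kubectl get pods --all-namespaces -l k8s-app=calico-node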

You can now install Otterize in your cluster (if it's not already installed), and optionally connect to Otterize Cloud. Connecting to Cloud lets you:

  1. See what's happening visually in your browser, through the "access graph";
  2. Avoid using SPIRE (which can be installed with Otterize) for issuing certificates, as Otterize Cloud provides a certificate service.

So either forgo browser visualization and:

Install Otterize in your cluster, without Otterize Cloud

You'll need Helm installed on your machine to install Otterize as follows:

helm repo add otterize https://helm.otterize.com
helm repo update
helm install otterize otterize/otterize-kubernetes -n otterize-system --create-namespace

This chart is a bundle of the Otterize intents operator, Otterize credentials operator, Otterize network mapper, and SPIRE. Initial deployment may take a couple of minutes. You can add the --wait flag for Helm to wait for deployment to complete and all pods to be Ready, or manually watch for all pods to be Ready using kubectl get pods -n otterize-system -w.
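
For example, to watch the rollout until all pods are Ready:

# Press Ctrl-C to stop watching once everything is Running
kubectl get pods -n otterize-system -w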

After all the pods are ready, you should see the following (or similar) when you run kubectl get pods -n otterize-system:

NAME                                                       READY   STATUS    RESTARTS   AGE
credentials-operator-controller-manager-6c56fcfcfb-vg6m9   2/2     Running   0          9s
intents-operator-controller-manager-65bb6d4b88-bp9pf       2/2     Running   0          9s
otterize-network-mapper-779fffd959-twjqd                   1/1     Running   0          9s
otterize-network-sniffer-65mjt                             1/1     Running   0          9s
otterize-spire-agent-lcbq2                                 1/1     Running   0          9s
otterize-spire-server-0                                    2/2     Running   0          9s
otterize-watcher-b9bf87bcd-276nt                           1/1     Running   0          9s

Or choose to include browser visualization and:

Install Otterize in your cluster, with Otterize Cloud

Create an Otterize Cloud account

If you don't already have an account, browse to https://app.otterize.com to set one up.

If someone on your team has already created an org in Otterize Cloud and invited you (using your email address), you may see an invitation to accept.

Otherwise, you'll create a new org, which you can later rename, and invite your teammates to join you there.

Install Otterize OSS, connected to Otterize Cloud

If no Kubernetes clusters are connected to your account, click the "connect your cluster" button to:

  1. Create a Cloud cluster object, specifying its name and the name of an environment to which all namespaces in that cluster will belong, by default.
  2. Connect it with your actual Kubernetes cluster, by clicking on the "Connection guide" link and running the Helm commands shown there.
    1. Follow the instructions to install Otterize with enforcement on (use the toggle to set Enforcement mode: active).

More details, if you're curious

Connecting your cluster simply entails installing Otterize OSS via Helm, using credentials from your account so Otterize OSS can report information needed to visualize the cluster.

The credentials will already be inlined into the Helm command shown in the Cloud UI, so you just need to copy that line and run it from your shell. If you don't give it the Cloud credentials, Otterize OSS will run fully standalone in your cluster; you just won't have the visualization in Otterize Cloud.

The Helm command shown in the Cloud UI also includes flags to turn off enforcement: Otterize OSS will be running in "shadow mode," meaning that it will show you what would happen if it were to create/update your access controls (Kubernetes network policies, Kafka ACLs, Istio authorization policies, etc.). While that's useful for gradually rolling out IBAC, for this tutorial we go straight to active enforcement.

Finally, you'll need to install the Otterize CLI (if you haven't already) to interact with the network mapper:

Install the Otterize CLI
brew install otterize/otterize/otterize-cli

More variants are available at the GitHub Releases page.
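
To confirm the CLI is installed and on your PATH, you can print its top-level help:

otterize --help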

Deploy demo to simulate traffic

Let's add services and traffic to the cluster and see how the network mapper builds the map. Deploy the following simple example, in which two clients (client and client2) call a server over HTTP:

kubectl apply -n otterize-tutorial-mapper -f https://docs.otterize.com/code-examples/network-mapper/all.yaml
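
You can watch the demo pods until they're all Running before moving on:

# client, client2 and server pods should appear in the tutorial namespace
kubectl get pods -n otterize-tutorial-mapper -w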

Map the cluster

The network mapper starts to sniff traffic and build an in-memory network map as soon as it's installed. The Otterize CLI allows you to interact with the network mapper to grab a snapshot of current mapped traffic, reset its state and more.

For a complete list of the CLI capabilities read the CLI command reference.
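
For instance, two commands you'll likely reach for while following along (both documented in the CLI reference):

# List the pod-to-pod calls discovered so far, filtered to the demo namespace
otterize network-mapper list -n otterize-tutorial-mapper

# Discard the map built so far and start sniffing from scratch
otterize network-mapper reset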

Extract and see the network map

You can get the network map by calling the CLI's visualize, list or export commands. The visualize output format can be PNG or SVG. The export output format can be YAML (Kubernetes client intents files) or JSON. The following shows the CLI output filtered for the namespace (otterize-tutorial-mapper) of the example above.

  1. Visualize the pod-to-pod network map built up ("sniffed") so far, as an image:

    otterize network-mapper visualize -n otterize-tutorial-mapper -o otterize-tutorial-map.png
  2. For the simple example above, you should get an image file that looks like:

    Otterize tutorial map
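
You can also export the same map as client intents files; by default the export command emits YAML (see the CLI reference for the JSON option):

otterize network-mapper export -n otterize-tutorial-mapper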

Show the access graph in Otterize Cloud

If you've attached Otterize OSS to Otterize Cloud, you can now also see the access graph in your browser:

Access graph

The access graph reveals several types of information and insights, such as:

  1. See the network map for different clusters, the subset of the map for a given namespace, or, if you've mapped namespaces to environments, the subset of the map for a specific environment.
  2. Filter the map to show only traffic seen since some date in the past. That way you can eliminate calls that are no longer relevant, without having to reset the network mapper and start building a new map.
  3. If the intents operator is also connected, the access graph reveals more specifics about access: understand which services are protected or would be protected, and which client calls are being blocked or would be blocked. We'll see more of that in the next couple of tutorials.

Note, for example, that the client → server arrow is yellow. Clicking on it shows:

Client to server edge info

What's next

The network mapper is a great way to bootstrap IBAC. It generates client intents files that reflect the current topology of your services; each client team can then use those files to gain easy and secure access to the services they need, and as their needs evolve, they need only evolve the intents files. We'll see more of that below.
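
As a preview, here is a sketch of the kind of client intents file the mapper can generate for the demo's client calling server; the exact apiVersion depends on the Otterize version installed in your cluster:

apiVersion: k8s.otterize.com/v1alpha3
kind: ClientIntents
metadata:
  name: client
  namespace: otterize-tutorial-mapper
spec:
  service:
    name: client   # the calling service
  calls:
    - name: server # the service it was observed calling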


Teardown

To remove the deployed example, run:

kubectl delete namespace otterize-tutorial-mapper
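
If you also want to remove Otterize itself, uninstall the Helm release and its namespace (assuming the release and namespace names used above):

helm uninstall otterize -n otterize-system
kubectl delete namespace otterize-system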