
Mapping a Kubernetes network

The network mapper allows you to map pod-to-pod traffic within your Kubernetes cluster.

In this tutorial, we will:

  • Deploy a server and two clients that call it.
  • Map their communication using the network mapper.

Prerequisites

Install Otterize on your cluster

To deploy Otterize, head over to Otterize Cloud, create a Kubernetes cluster on the Integrations page, and follow the instructions.
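
If you'd like a feel for what that installation looks like, the Integrations page typically generates a Helm command for you, along these lines (a sketch; use the exact command from the wizard, which includes your Otterize Cloud credentials):

# Illustrative only; copy the exact command from the Integrations page
helm repo add otterize https://helm.otterize.com
helm repo update
helm install otterize otterize/otterize-kubernetes -n otterize-system --create-namespace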

We will also need the Otterize CLI.
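
On macOS, for example, one installation option is Homebrew (see the CLI installation docs for other platforms):

# One installation option; check the Otterize docs for your platform
brew install otterize/otterize/otterize-cli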

Tutorial

Deploy demo to simulate traffic

Let's add services and traffic to the cluster and see how the network mapper builds the map. Deploy the following simple example, in which client and client2 call server over HTTP:

kubectl apply -n otterize-tutorial-mapper -f https://docs.otterize.com/code-examples/network-mapper/all.yaml
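
To give a feel for what gets deployed, here is a minimal sketch of the same shape of workload: a server behind a Service, plus a client that calls it over HTTP in a loop (client2 is analogous). The names, images, and exact manifests in all.yaml may differ; this is illustrative only.

# Illustrative sketch; the real manifests are in all.yaml above
apiVersion: apps/v1
kind: Deployment
metadata:
  name: server
  namespace: otterize-tutorial-mapper
spec:
  replicas: 1
  selector:
    matchLabels: { app: server }
  template:
    metadata:
      labels: { app: server }
    spec:
      containers:
        - name: server
          image: nginx   # stand-in HTTP server
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: server
  namespace: otterize-tutorial-mapper
spec:
  selector: { app: server }
  ports:
    - port: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: client
  namespace: otterize-tutorial-mapper
spec:
  replicas: 1
  selector:
    matchLabels: { app: client }
  template:
    metadata:
      labels: { app: client }
    spec:
      containers:
        - name: client
          image: curlimages/curl   # stand-in HTTP client
          command: ["sh", "-c", "while true; do curl -s http://server; sleep 2; done"]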

Map the cluster

The network mapper starts sniffing traffic and building an in-memory network map as soon as it's installed. The Otterize CLI allows you to interact with the network mapper to grab a snapshot of the currently mapped traffic, reset its state, and more.
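
For example, to clear the mapper's accumulated state and start building a fresh map (command shape per the CLI docs; run otterize network-mapper --help to confirm on your version):

# Clears the network mapper's in-memory map
otterize network-mapper reset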

For a complete list of the CLI's capabilities, read the CLI command reference.

Extract and see the network map

You can get the network map by calling the CLI's visualize, list, or export commands. The visualize output format can be PNG or SVG. The export output format can be YAML (Kubernetes client intents files) or JSON. The following shows the CLI output filtered for the namespace (otterize-tutorial-mapper) of the example above.

  1. Visualize the pod-to-pod network map built up ("sniffed") so far, as an image:

    otterize network-mapper visualize -n otterize-tutorial-mapper -o otterize-tutorial-map.png
  2. For the simple example above, you should get an image file that looks like:

    [Image: Otterize tutorial map]
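
If you prefer plain text to an image, the list command prints the same information. The output below is illustrative of its general shape, not verbatim:

otterize network-mapper list -n otterize-tutorial-mapper

# Example output (illustrative):
client in namespace otterize-tutorial-mapper calls:
  - server
client2 in namespace otterize-tutorial-mapper calls:
  - server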

Show the access graph in Otterize Cloud

If you've attached Otterize OSS to Otterize Cloud, you can now also see the access graph in your browser:

[Image: Access graph]

The access graph reveals several types of information and insights, such as:

  1. Seeing the network map for different clusters, the subset of the map for a given namespace, or, if you've mapped namespaces to environments, the subset of the map for a specific environment.
  2. Filtering the map to include only traffic seen since some date in the past. That way you can eliminate calls that are no longer relevant, without having to reset the network mapper and start building a new map.
  3. If the intents operator is also connected, the access graph reveals more specifics about access: which services are protected or would be protected, and which client calls are being blocked or would be blocked. We'll see more of that in the next couple of tutorials.

Note, for example, that the arrow from client to server is yellow. Clicking on it shows:

[Image: Client to server edge info]

What's next

The network mapper is a great way to bootstrap IBAC. It generates client intents files that reflect the current topology of your services; each client team can then use those files to gain easy and secure access to the services they need, and as their needs evolve, they need only update the intents files. We'll see more of that below.
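
For instance, exporting the map above as client intents files might look like this; the exact apiVersion and flags depend on your Otterize version, so treat this as a sketch of the ClientIntents shape rather than verbatim output:

otterize network-mapper export -n otterize-tutorial-mapper

# Example exported intents (illustrative; apiVersion may differ):
apiVersion: k8s.otterize.com/v1alpha3
kind: ClientIntents
metadata:
  name: client
  namespace: otterize-tutorial-mapper
spec:
  service:
    name: client
  calls:
    - name: server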

Teardown

To remove the deployed example, run:

kubectl delete namespace otterize-tutorial-mapper
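
If you also installed Otterize itself via the Helm chart shown earlier and want to remove it too, you can typically run:

# Only if you installed Otterize with the Helm chart above
helm uninstall otterize -n otterize-system
kubectl delete namespace otterize-system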