Kafka topic-level access mapping

With its Kafka watcher enabled, the network mapper allows you to map topic-level access to Kafka servers within your Kubernetes cluster. This provides a clear picture of which Kafka topics are being accessed and with which operations. In this tutorial, we will:

  • Deploy a Kafka broker, and two clients that call it.
  • Discover which topics are being accessed by those clients, and with which operations, using the Otterize network mapper's Kafka watcher.

We will not be doing any access control in this demo; we will purely map client-to-Kafka access at the topic and operation level.


Prepare a Kubernetes cluster

Before you start, you'll need a Kubernetes cluster. Having a cluster with a CNI that supports NetworkPolicies isn't required for this tutorial, but is recommended so that your cluster works with other tutorials.

Below are instructions for setting up a Kubernetes cluster with network policies. If you don't have a cluster already, we recommend starting out with a Minikube cluster.

If you don't have the Minikube CLI, first install it.

Then start your Minikube cluster with Calico, in order to enforce network policies.

minikube start --cpus=4 --memory 4096 --disk-size 32g --cni=calico

The increased CPU, memory, and disk allocations are required to successfully deploy the ecommerce app used in the visual tutorials.

You can now install Otterize in your cluster, and optionally connect to Otterize Cloud. Connecting to Cloud lets you see what's happening visually in your browser, through the "access graph".

So either forgo browser visualization and:

Install Otterize in your cluster with the Kafka watcher component enabled, without Otterize Cloud
helm repo add otterize https://helm.otterize.com
helm repo update
helm install otterize otterize/network-mapper -n otterize-system --create-namespace \
--set kafkawatcher.enable=true \
--set kafkawatcher.kafkaServers={"kafka-0.kafka"}

Or choose to include browser visualization and:

Install Otterize in your cluster, with Otterize Cloud

Create an Otterize Cloud account

If you don't already have an account, browse to https://app.otterize.com to set one up.

If someone in your team has already created an org in Otterize Cloud, and invited you (using your email address), you may see an invitation to accept.

Otherwise, you'll create a new org, which you can later rename, and invite your teammates to join you there.

Install Otterize OSS, connected to Otterize Cloud

Head over to the Clusters page and create a cluster. Follow the connection guide that opens to connect your cluster, and make the following changes:

  1. Under mTLS and Kafka support choose Otterize Cloud.

  2. Note that enforcement is disabled; we will enable it later. The configuration tab in the connection guide should reflect this.

  3. Copy the Helm command and add the following flags:

    --set intentsOperator.operator.enableNetworkPolicyCreation=false \
    --set networkMapper.kafkawatcher.enable=true \
    --set networkMapper.kafkawatcher.kafkaServers={"kafka-0.kafka"}

Finally, you'll need to install the Otterize CLI (if you haven't already) to interact with the network mapper:

Install the Otterize CLI
brew install otterize/otterize/otterize-cli

More variants are available at the GitHub Releases page.

Install Kafka

We will deploy a Kafka broker using Bitnami's Helm chart. In the chart we will configure Kafka to:

  • Recognize the Otterize intents operator as a super user so it can configure ACLs.
  • Turn on Kafka debug logging to allow the Kafka watcher to feed topic-level client access information to the network mapper.
The Helm values.yaml used with the Bitnami chart (reconstructed here from the standard Kafka log4j configuration; the appender definitions follow stock Kafka defaults):

listeners:
  - "CLIENT://:9092"
  - "INTERNAL://:9093"
advertisedListeners:
  - "CLIENT://:9092"
  - "INTERNAL://:9093"

authorizerClassName: kafka.security.authorizer.AclAuthorizer
# For a gradual rollout scenario we will want to keep the default permission for topics as allowed, unless an ACL was set
allowEveryoneIfNoAclFound: true
# Recognize the Otterize intents operator as a super user so it can configure ACLs
superUsers: User:CN=intents-operator.otterize-system

# Allocate resources
resources:
  requests:
    cpu: 50m
    memory: 256Mi

log4j: |
  # Unspecified loggers and loggers with additivity=true output to server.log and stdout
  # Note that INFO only applies to unspecified loggers, the log level of the child logger is used otherwise
  log4j.rootLogger=INFO, stdout, kafkaAppender

  log4j.appender.stdout=org.apache.log4j.ConsoleAppender
  log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
  log4j.appender.stdout.layout.ConversionPattern=[%d] %p %m (%c)%n

  log4j.appender.kafkaAppender=org.apache.log4j.DailyRollingFileAppender
  log4j.appender.kafkaAppender.DatePattern='.'yyyy-MM-dd-HH
  log4j.appender.kafkaAppender.File=${kafka.logs.dir}/server.log
  log4j.appender.kafkaAppender.layout=org.apache.log4j.PatternLayout
  log4j.appender.kafkaAppender.layout.ConversionPattern=[%d] %p %m (%c)%n

  log4j.appender.stateChangeAppender=org.apache.log4j.DailyRollingFileAppender
  log4j.appender.stateChangeAppender.DatePattern='.'yyyy-MM-dd-HH
  log4j.appender.stateChangeAppender.File=${kafka.logs.dir}/state-change.log
  log4j.appender.stateChangeAppender.layout=org.apache.log4j.PatternLayout
  log4j.appender.stateChangeAppender.layout.ConversionPattern=[%d] %p %m (%c)%n

  log4j.appender.requestAppender=org.apache.log4j.DailyRollingFileAppender
  log4j.appender.requestAppender.DatePattern='.'yyyy-MM-dd-HH
  log4j.appender.requestAppender.File=${kafka.logs.dir}/kafka-request.log
  log4j.appender.requestAppender.layout=org.apache.log4j.PatternLayout
  log4j.appender.requestAppender.layout.ConversionPattern=[%d] %p %m (%c)%n

  log4j.appender.cleanerAppender=org.apache.log4j.DailyRollingFileAppender
  log4j.appender.cleanerAppender.DatePattern='.'yyyy-MM-dd-HH
  log4j.appender.cleanerAppender.File=${kafka.logs.dir}/log-cleaner.log
  log4j.appender.cleanerAppender.layout=org.apache.log4j.PatternLayout
  log4j.appender.cleanerAppender.layout.ConversionPattern=[%d] %p %m (%c)%n

  log4j.appender.controllerAppender=org.apache.log4j.DailyRollingFileAppender
  log4j.appender.controllerAppender.DatePattern='.'yyyy-MM-dd-HH
  log4j.appender.controllerAppender.File=${kafka.logs.dir}/controller.log
  log4j.appender.controllerAppender.layout=org.apache.log4j.PatternLayout
  log4j.appender.controllerAppender.layout.ConversionPattern=[%d] %p %m (%c)%n

  log4j.appender.authorizerAppender=org.apache.log4j.DailyRollingFileAppender
  log4j.appender.authorizerAppender.DatePattern='.'yyyy-MM-dd-HH
  log4j.appender.authorizerAppender.File=${kafka.logs.dir}/kafka-authorizer.log
  log4j.appender.authorizerAppender.layout=org.apache.log4j.PatternLayout
  log4j.appender.authorizerAppender.layout.ConversionPattern=[%d] %p %m (%c)%n

  # Change the line below to adjust ZK client logging
  log4j.logger.org.apache.zookeeper=INFO

  # Change the two lines below to adjust the general broker logging level (output to server.log and stdout)
  log4j.logger.kafka=INFO, stdout
  log4j.logger.org.apache.kafka=INFO

  # Change to DEBUG or TRACE to enable request logging
  log4j.logger.kafka.request.logger=WARN, requestAppender
  log4j.additivity.kafka.request.logger=false

  # Uncomment the lines below and change log4j.logger.kafka.network.RequestChannel$ and
  # log4j.logger.kafka.request.logger to TRACE for additional output related to the handling of requests
  #log4j.logger.kafka.network.Processor=TRACE, requestAppender
  #log4j.logger.kafka.server.KafkaApis=TRACE, requestAppender
  #log4j.additivity.kafka.server.KafkaApis=false
  log4j.logger.kafka.network.RequestChannel$=WARN, requestAppender
  log4j.additivity.kafka.network.RequestChannel$=false

  # Change the line below to adjust KRaft mode controller logging
  log4j.logger.org.apache.kafka.controller=INFO, controllerAppender
  log4j.additivity.org.apache.kafka.controller=false

  # Change the line below to adjust ZK mode controller logging
  log4j.logger.kafka.controller=TRACE, controllerAppender
  log4j.additivity.kafka.controller=false

  log4j.logger.kafka.log.LogCleaner=INFO, cleanerAppender
  log4j.additivity.kafka.log.LogCleaner=false

  log4j.logger.state.change.logger=INFO, stateChangeAppender
  log4j.additivity.state.change.logger=false

  # Access denials are logged at INFO level, change to DEBUG to also log allowed accesses
  log4j.logger.kafka.authorizer.logger=DEBUG, authorizerAppender
  log4j.additivity.kafka.authorizer.logger=false
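The Kafka watcher works by reading the broker's authorizer DEBUG logs, which record who performed which operation on which topic. As a rough illustration of what such a log line contains, here is a short Python sketch that extracts the principal, operation, and topic from a line in the standard AclAuthorizer log format. The sample line and the parsing are illustrative assumptions, not the watcher's actual implementation.

```python
import re

# Sample line in the standard Kafka AclAuthorizer DEBUG log format.
# Illustrative only -- not produced by, or parsed with, the real Kafka watcher code.
LINE = ("[2023-03-12 10:00:00,000] DEBUG Principal = User:ANONYMOUS is Allowed "
        "Operation = Read from host = 172.17.0.5 on resource = Topic:LITERAL:mytopic "
        "(kafka.authorizer.logger)")

PATTERN = re.compile(
    r"Principal = (?P<principal>\S+) is (?P<decision>Allowed|Denied) "
    r"Operation = (?P<operation>\S+) from host = (?P<host>\S+) "
    r"on resource = Topic:LITERAL:(?P<topic>\S+)"
)

def parse_authorizer_line(line):
    """Extract who accessed which topic, with which operation, from one log line."""
    m = PATTERN.search(line)
    return m.groupdict() if m else None

print(parse_authorizer_line(LINE))
```

Note that the `allowEveryoneIfNoAclFound: true` setting above is what makes this safe to observe passively: access is logged, but nothing is denied until ACLs are actually configured.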

The following command will deploy a Kafka broker with this chart:

helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
helm install --create-namespace -n kafka \
  -f values.yaml kafka bitnami/kafka --version 21.4.4

Deploy demo to simulate traffic

Let's add a few services that will access our Kafka server, and see how the network mapper builds the access map:

  • One service named "client".
  • One service named "client-2".

To deploy these services, use:

kubectl apply -n otterize-tutorial-kafka-mapping -f

Each of these services is built to periodically call the Kafka broker we deployed. Because that broker has the Otterize OSS Kafka watcher enabled and feeding data to the network mapper, we can query the network mapper directly to see the map it has built up.

otterize network-mapper list -n otterize-tutorial-kafka-mapping

We expect to see:

  • client consuming from mytopic.
  • client-2 producing to mytopic.

And indeed:

client in namespace otterize-tutorial-kafka-mapping calls:
- kafka in namespace kafka
- Kafka topic: transactions, operations: [describe]
- Kafka topic: mytopic, operations: [describe consume]
client-2 in namespace otterize-tutorial-kafka-mapping calls:
- kafka in namespace kafka
- Kafka topic: transactions, operations: [describe]
- Kafka topic: mytopic, operations: [produce describe]
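The CLI output above can also be consumed programmatically. As a minimal sketch, assuming the exact plain-text layout shown above (for anything production-grade, prefer whatever structured output the CLI or API offers), this parses it into a client → topic → operations map:

```python
import re

# The exact output shown in the tutorial above.
OUTPUT = """\
client in namespace otterize-tutorial-kafka-mapping calls:
- kafka in namespace kafka
- Kafka topic: transactions, operations: [describe]
- Kafka topic: mytopic, operations: [describe consume]
client-2 in namespace otterize-tutorial-kafka-mapping calls:
- kafka in namespace kafka
- Kafka topic: transactions, operations: [describe]
- Kafka topic: mytopic, operations: [produce describe]
"""

def topic_access(text):
    """Build {client: {topic: [operations]}} from the CLI's plain-text output."""
    access, client = {}, None
    for line in text.splitlines():
        m = re.match(r"(\S+) in namespace \S+ calls:", line)
        if m:  # a new client section starts
            client = m.group(1)
            access[client] = {}
            continue
        m = re.match(r"- Kafka topic: (\S+), operations: \[(.+)\]", line)
        if m and client:  # a topic line within the current client's section
            access[client][m.group(1)] = m.group(2).split()
    return access

print(topic_access(OUTPUT))
```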

If you've attached Otterize OSS to Otterize Cloud, go back to your browser to see the access graph. To see only Kafka information, de-select the 'Use in access graph' settings for network policies and Istio policies, and leave Kafka ACLs selected.

In the access graph, only the arrows between the clients and the Kafka broker are green, because we've selected only Kafka ACLs as the source for the graph. The other arrows were detected through network mapping, but since there's no Kafka mapping for those calls, they are grayed out.

Clicking on a specific arrow between a client and the broker reveals which topic and operations are being accessed.

What did we accomplish?

Enabling the Kafka watcher component of the network mapper shows which clients connect to running Kafka servers, the topics they access, and the operations they undertake on those topics.

You can consume this information in various ways:

  • Visually, via the access graph, where shadow mode shows you what would happen in enforcement mode before you actually turn on enforcement, and where auto-generated client intents help bootstrap rolling out IBAC.
  • Via the CLI: from the network mapper directly, or from Otterize Cloud.
  • Via the API.

What's next


To remove the deployed examples run:

helm uninstall otterize -n otterize-system
helm uninstall kafka -n kafka
kubectl delete namespace otterize-tutorial-kafka-mapping