
Visual tutorial: IBAC for Kafka in Kubernetes

This tutorial focuses on intent-based access control (IBAC) for Kafka, specifically for a Kafka broker and its clients running within a Kubernetes cluster.

Otterize OSS configures access controls automatically, based on declared client intents, as we'll now see.

  • In the previous tutorial, we rolled out IBAC by declaring intents for pods to call each other. Otterize OSS automatically configured Kubernetes network policies to enable access based on those intents and block any undeclared access.
  • In this tutorial, we add fine-grained, topic- and operation-level intents to the existing pod-to-pod intents. Otterize OSS automatically configures Kafka ACLs, and issues and distributes certificates to Kafka clients, so they can authenticate via mTLS and perform those operations on those topics, while everything undeclared is blocked.
  • Otterize Cloud again enriches these capabilities with a visual access graph, displaying a unified view of topic- and operation-level access and protection, and client identity certificate information.
  • Otterize Cloud also provides a pre-configured certificate credentials service you may use instead of running one (e.g. SPIRE) in your cluster.

In this tutorial, we will:

  • Start where the previous tutorial left off: with a demo based on the Google microservices demo (a simple e-commerce application) deployed to a Kubernetes cluster, with Otterize OSS installed and integrated with Otterize Cloud.
  • Add fine-grained, topic-level intents and see how access to operations on Kafka topics is automatically managed.

Prerequisites

You'll need a Kubernetes cluster, an Otterize Cloud account, and Otterize OSS installed in your cluster.

If you've gone through other tutorials, you may already have an Otterize Cloud account, and have Otterize OSS installed in your cluster. But if it's installed without enforcement active ("shadow mode"), you'll want to follow the instructions below to reinstall it with active enforcement.

Prepare a cluster

Before you start, you'll need a Kubernetes cluster.

You won't actually need network policies in this tutorial, so you can follow the steps below and skip the network policy (CNI) enablement parts, or keep them as they are.

Below are instructions for setting up a Kubernetes cluster with network policies. If you don't have a cluster already, we recommend starting out with a Minikube cluster.

If you don't have the Minikube CLI, first install it.

Then start your Minikube cluster with Calico, in order to enforce network policies.

minikube start --cpus=4 --memory 8192 --disk-size 32g --cni=calico

The increased CPU, memory, and disk allocations are required to successfully deploy the e-commerce app used in the visual tutorials.
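Before proceeding, you can confirm the Calico pods came up with a quick check like the following (the k8s-app=calico-node label is the one Calico's daemonset conventionally carries; adjust if your setup differs):

kubectl get pods -n kube-system -l k8s-app=calico-node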

Create an Otterize Cloud account

If you don't already have an account, browse to https://app.otterize.com to set one up.

If someone on your team has already created an org in Otterize Cloud and invited you (using your email address), you may see an invitation to accept.

Otherwise, you'll create a new org, which you can later rename, and invite your teammates to join you there.

Install Otterize OSS with enforcement active

If no Kubernetes clusters are connected to your account, click the "connect your cluster" button to:

  1. Create a Cloud cluster object, specifying its name and the name of an environment to which all namespaces in that cluster will belong, by default.
  2. Connect it with your actual Kubernetes cluster by clicking the "Connection guide" link and running the Helm commands shown there.
    1. Follow the instructions to install Otterize with enforcement on (not in shadow mode) for this tutorial. In other words, omit the following flag in the Helm command: --set intentsOperator.operator.enableEnforcement=false
More details, if you're curious

Connecting your cluster simply entails installing Otterize OSS via Helm, using credentials from your account so Otterize OSS can report information needed to visualize the cluster.

The credentials will already be inlined into the Helm command shown in the Cloud UI, so you just need to copy that line and run it from your shell. If you don't give it the Cloud credentials, Otterize OSS will run fully standalone in your cluster; you just won't have the visualization in Otterize Cloud.

The Helm command shown in the Cloud UI also includes flags to turn off enforcement: Otterize OSS will be running in "shadow mode," meaning that it will show you what would happen if it were to create/update your access controls (Kubernetes network policies, Kafka ACLs, Istio authorization policies, etc.). While that's useful for gradually rolling out IBAC, for this tutorial we go straight to active enforcement.
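For reference, the command shown in the Cloud UI is similar to the following sketch; the credential placeholders here are illustrative, and the UI inlines your real values:

helm repo add otterize https://helm.otterize.com
helm repo update
helm install otterize otterize/otterize-kubernetes -n otterize-system --create-namespace \
  --set global.otterizeCloud.credentials.clientId=<CLIENT-ID> \
  --set global.otterizeCloud.credentials.clientSecret=<CLIENT-SECRET>

Omitting --set intentsOperator.operator.enableEnforcement=false keeps enforcement active, as this tutorial requires.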

Set up the demo

Deploy the demo set of services

We'll deploy the same demo set of services, based on the Google microservices demo e-commerce application, into a Kubernetes cluster.

But in this tutorial, we'll deploy them a bit differently, as we'll want to add certificates to each, for use in mTLS authentication to Kafka.

For the Kafka service, we'll use Bitnami's Helm chart configured to:

  • Recognize the Otterize intents operator as a super user so it can configure ACLs;
  • Use TLS (Kafka calls it SSL) for its listeners;
  • Tell the Otterize credentials operator, via pod annotations, how credentials should be created; and
  • Authenticate clients using mTLS credentials provided as a Kubernetes secret.
Here is the Helm values.yaml used with the Bitnami chart:
# Configure Otterize as a super user to grant it access to configure ACLs
superUsers: "User:CN=kafka.kafka,O=SPIRE,C=US;User:CN=intents-operator-controller-manager.otterize,O=SPIRE,C=US"
# Use TLS for the Kafka listeners (Kafka calls them SSL)
listeners:
  - "CLIENT://:9092"
  - "INTERNAL://:9093"
advertisedListeners:
  - "CLIENT://:9092"
  - "INTERNAL://:9093"
listenerSecurityProtocolMap: "INTERNAL:SSL,CLIENT:SSL"
# For a gradual rollout scenario we will want to keep the default permission for topics as allowed, unless an ACL was set
allowEveryoneIfNoAclFound: true
# Annotations for Otterize to generate credentials
podAnnotations:
  credentials-operator.otterize.com/cert-type: jks
  credentials-operator.otterize.com/tls-secret-name: kafka-tls-secret
  credentials-operator.otterize.com/truststore-file-name: kafka.truststore.jks
  credentials-operator.otterize.com/keystore-file-name: kafka.keystore.jks
  credentials-operator.otterize.com/dns-names: "kafka-0.kafka-headless.kafka.svc.cluster.local,kafka.kafka.svc.cluster.local"
# Authenticate clients using mTLS
auth:
  clientProtocol: mtls
  interBrokerProtocol: mtls
  tls:
    type: jks
    existingSecrets:
      - kafka-tls-secret
    password: password
authorizerClassName: kafka.security.authorizer.AclAuthorizer
# Allocate resources
resources:
  requests:
    cpu: 50m
    memory: 256Mi
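In this tutorial, the Kafka deployment is already rendered into the demo manifest you'll apply below. But if you were installing the chart yourself with these values, it would look roughly like this (the release name is illustrative):

helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
helm install kafka bitnami/kafka -n otterize-ecom-kafka-demo --create-namespace -f values.yaml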

Clients will authenticate to Kafka using mTLS. Otterize makes this easy, requiring just 3 simple changes to the client pod spec:

  1. Generate credentials: add the credentials-operator.otterize.com/tls-secret-name annotation, which tells Otterize to generate mTLS credentials and store them in a Kubernetes secret whose name is the value of this annotation.
  2. Expose credentials in a volume: add a volume containing this secret to the pod.
  3. Mount the volume: mount the volume in every container in the pod.
The structure looks like this:
spec:
  template:
    metadata:
      annotations:
        # 1. Generate credentials as a secret called "client-credentials-secret":
        credentials-operator.otterize.com/tls-secret-name: client-credentials-secret
      ...
    spec:
      volumes:
        # 2. Create a volume containing this secret:
        - name: otterize-credentials
          secret:
            secretName: client-credentials-secret
      ...
      containers:
        - name: client
          ...
          volumeMounts:
            # 3. Mount the volume into the container:
            - name: otterize-credentials
              mountPath: /var/otterize/credentials
              readOnly: true
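Once a client pod carrying this annotation is deployed, you can check that the credentials secret was created. The secret name below matches the example annotation above; the demo services each use their own secret names:

kubectl get secret client-credentials-secret -n otterize-ecom-kafka-demo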

To deploy the demo services into your cluster:

kubectl create namespace otterize-ecom-kafka-demo
kubectl apply -n otterize-ecom-kafka-demo -f https://docs.otterize.com/code-examples/shadow-mode/ecom-demo-mtls.yaml
Optional: check that the demo was deployed

To see all the pods in the demo:

kubectl get pods -n otterize-ecom-kafka-demo

The pods should all be ready and running:

NAME                                     READY   STATUS    RESTARTS      AGE
adservice-65494cbb9d-5lrv6               1/1     Running   0             115s
cartservice-6d84fc45bb-hdtwn             1/1     Running   0             115s
checkoutservice-5599486df-dvj9n          1/1     Running   3 (79s ago)   115s
currencyservice-6d64686d74-lxb7x         1/1     Running   0             115s
emailservice-7c6cbfbbd7-xjxlt            1/1     Running   0             115s
frontend-f9448d7d4-6dmnr                 1/1     Running   0             115s
kafka-0                                  1/1     Running   2 (83s ago)   115s
loadgenerator-7f6987f59-bchgm            1/1     Running   0             114s
orderservice-7ffdbf6df-wzzfd             1/1     Running   0             115s
otterize-ecom-kafka-demo-zookeeper-0     1/1     Running   0             115s
paymentservice-86855d78db-zjjfn          1/1     Running   0             115s
productcatalogservice-5944c7f666-2rjc6   1/1     Running   0             115s
recommendationservice-6c8d848498-zm2rm   1/1     Running   0             114s
redis-cart-6b79c5b497-xpms2              1/1     Running   0             115s
shippingservice-85694cb9bd-v54xp         1/1     Running   0             114s
Optional: browse the demo

To get the externally-accessible URL where your demo front end is available, run:

kubectl get service -n otterize-ecom-kafka-demo frontend-external | awk '{print $4}'

The result should be similar to (if running on AWS EKS):

a11843075fd254f8099a986467098647-1889474685.us-east-1.elb.amazonaws.com

Go ahead and browse to the URL above to "shop" and get a feel for the demo's behavior. (The URL might take some time to populate across DNS servers. Note that we are accessing an HTTP and not an HTTPS website.)

Apply pod-to-pod-level intents

This tutorial picks up where the network policies tutorial left off. In case you're using network policies, we'll want to make sure all the pods have access to the pods they need to call, so we'll put in place the same intents we had in that tutorial. There, we showed how you can bootstrap all those intents by observing the calls the pods make using the network mapper, and then exporting the network map as a set of intents files.

So let's get back to that state by applying that same set of initial intents:

kubectl apply -n otterize-ecom-kafka-demo -f https://docs.otterize.com/code-examples/shadow-mode/all.yaml
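For reference, each intents file in that set is a ClientIntents resource like the ones we'll use below for Kafka. A pod-to-pod sketch for one client might look like this; the exact list of calls per service comes from the exported network map:

apiVersion: k8s.otterize.com/v1alpha2
kind: ClientIntents
metadata:
  name: frontend
spec:
  service:
    name: frontend
  calls:
    - name: cartservice
    - name: productcatalogservice
    - name: currencyservice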

Seeing the deployed demo

In the Otterize Cloud UI, the access graph should now show the following map for the demo running in your cluster:

[Image: Access graph]

Notice that the Kafka service (called kafka in this demo) is shown just like any other service that's called by other services (four, in our case) acting as its clients. We'll let Otterize know this is specifically a Kafka-type service in the next step.

Manage Kafka access with Otterize

Let's configure Otterize to recognize Kafka and communicate with it by applying a KafkaServerConfig (KSC):

kubectl apply -n otterize-ecom-kafka-demo -f https://docs.otterize.com/code-examples/shadow-mode/kafkaserverconfig.yaml
Here is the KafkaServerConfig:
apiVersion: k8s.otterize.com/v1alpha2
kind: KafkaServerConfig
metadata:
  name: kafkaserverconfig
  namespace: otterize-ecom-kafka-demo
spec:
  service:
    name: kafka
  addr: kafka-headless.otterize-ecom-kafka-demo:9092
  tls:
    certFile: /etc/otterize-spire/cert.pem
    keyFile: /etc/otterize-spire/key.pem
    rootCAFile: /etc/otterize-spire/ca.pem
  topics:
    - topic: "*"
      pattern: literal
      clientIdentityRequired: false
      intentsRequired: false
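Once applied, you can confirm the resource exists in the cluster:

kubectl get kafkaserverconfig -n otterize-ecom-kafka-demo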

Applying the KSC configures an ACL in Kafka that allows anonymous access to all topics. This is the base state from which we will gradually roll out secure access to specific topics.

We can see in the access graph that the service is now marked with the "Kafka" logo and a "KSC" icon, since Otterize now recognizes it as a Kafka broker.

[Image: Kafka Server Config]

Clicking the Kafka service twice focuses on it, letting us inspect its configuration and credentials.

[Image: Kafka Server Config]

We can see:

  • The kafka service is protected by network policies ("NetPols") and Kafka ACLs, because Otterize is in enforcement mode and managing network policies as well as ACLs.
  • The KSC shows us that the default level of protection for all topics (*) is not to require client identity and not to require intents. In this demo we're rolling out protection gradually, so we don't pre-protect topics; but it would be easy to modify the KSC to pre-protect certain topics, or all of them, by requiring client authentication as well as explicit intents, as sketched after this list.
  • The certificate section shows information about the certificate this kafka service will present to its clients as part of the mutual authentication (mTLS).
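For example, here is a sketch of the KSC topics section with both requirements flipped on, which would pre-protect all topics (same fields as the KSC above, with the values changed to true):

topics:
  - topic: "*"
    pattern: literal
    clientIdentityRequired: true
    intentsRequired: true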

Declare Kafka intents

In a gradual rollout scenario, where the default protection level is to allow any operation by any client, Kafka ACLs act a bit like network policies: once any ACL is defined on a topic, only authenticated clients with an ACL for that topic can access it.

To see how this works, let's declare what the checkoutservice, orderservice, and paymentservice intend to access:

apiVersion: k8s.otterize.com/v1alpha2
kind: ClientIntents
metadata:
  name: checkoutservice
spec:
  service:
    name: checkoutservice
  calls:
    - name: kafka
      type: kafka
      topics:
        - name: orders
          operations: [ produce ]
    - name: cartservice
    - name: currencyservice
    - name: emailservice
    - name: paymentservice
    - name: productcatalogservice
    - name: shippingservice
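The applied file also contains the corresponding intents for the other two services. For example, the orderservice intents would look like this sketch, consistent with the access described below (consuming from the orders topic):

apiVersion: k8s.otterize.com/v1alpha2
kind: ClientIntents
metadata:
  name: orderservice
spec:
  service:
    name: orderservice
  calls:
    - name: kafka
      type: kafka
      topics:
        - name: orders
          operations: [ consume ]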

Let's apply these intents:

kubectl apply -n otterize-ecom-kafka-demo -f https://docs.otterize.com/code-examples/shadow-mode/kafka-intents.yaml
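You can list the applied intents to confirm:

kubectl get clientintents -n otterize-ecom-kafka-demo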

Looking back at the access graph, we can see the results:

[Image: Kafka Server Config]

  • Each of the lines from the declared Kafka clients is now marked with the Kafka icon, indicating it has specific Kafka access configured.
  • The kafka service shows the access granted:
    • The payments topic allows the paymentservice to perform all operations;
    • The orders topic allows the checkoutservice to produce events and the orderservice to consume events.

No other services can now perform any operations on these two topics.

That's the Kafka server perspective. And remember: no server admin had to create or maintain these access controls; they are always kept in sync with the client intents.

To get the Kafka client perspective, click on any of the lines from the clients to Kafka:

[Image: Kafka client perspective]

You can see the exact access the client is configured to have, and trace it back to the specific intent that generated it. (You can also see that the client is actually calling Kafka, via the discovered intents, so you know the access is indeed needed.)

Optional: browse the demo to verify it still works

You can play with the demo in your browser to see it still works as intended, while everything in it is protected against unintended and potentially malicious access.

As before, get the externally-accessible URL where your demo front end is available:

kubectl get service -n otterize-ecom-kafka-demo frontend-external | awk '{print $4}'

Then browse to that URL to confirm the demo behaves as before.

We've protected access to two topics: payments and orders. And we've granted their clients access to these topics by configuring ACLs, issuing certificates, and distributing them to the clients. All of this happened automatically, just by declaring and applying the clients' intents to access those topics.

To continue gradually rolling out IBAC, protecting more topics while granting access to their intended clients, simply continue to declare and apply more client intents. And to see which clients are accessing Kafka, and should declare their intents, just look at the access graph, which shows the data accumulated by the Otterize OSS network mapper.
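For example, assuming you have the Otterize CLI installed, exporting the discovered intents for the demo namespace looks roughly like this (flags may vary by CLI version):

otterize network-mapper export -n otterize-ecom-kafka-demo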

What's next

Teardown

To remove the deployed demo, first delete the client intents:

kubectl delete -n otterize-ecom-kafka-demo -f https://docs.otterize.com/code-examples/shadow-mode/all.yaml

Next, remove the Kafka server config:

kubectl delete -n otterize-ecom-kafka-demo -f https://docs.otterize.com/code-examples/shadow-mode/kafkaserverconfig.yaml

Finally, remove the demo services and the namespace:

kubectl delete -n otterize-ecom-kafka-demo -f https://docs.otterize.com/code-examples/shadow-mode/ecom-demo-mtls.yaml
kubectl delete namespace otterize-ecom-kafka-demo