Simple mTLS deployment
Otterize can automatically provision mTLS credentials by using the service identities implied by Kubernetes. This tutorial will walk you through deploying mTLS certificates on a sample client-server deployment using the Otterize credentials operator. You can configure this operator to use a local SPIRE server to issue and manage certificates, or to use the Otterize Cloud service to manage this for you. You can read more about these options in the cryptographic credentials documentation.
In this tutorial, we will:
- Deploy client and server pods communicating over HTTP with mTLS.
- See that mTLS credentials were autogenerated.
We'll start by installing Otterize. You can install just Otterize OSS, without connecting it to Otterize Cloud, and use SPIRE for certificates. Or you can install Otterize OSS connected to Otterize Cloud, which adds web visualization as well as the option of using Cloud-managed credentials. (You can also connect to Cloud but still use SPIRE; refer to the cryptographic credentials documentation.)
Install Otterize in your cluster, without Otterize Cloud
Otterize requires about 200 MB of memory and 200m CPU for all components (including a SPIRE deployment) to install and run properly on Minikube and EKS clusters.
You'll need Helm installed on your machine to install Otterize as follows:
helm repo add otterize https://helm.otterize.com
helm repo update
helm install otterize otterize/otterize-kubernetes -n otterize-system --create-namespace
This chart is a bundle of the Otterize intents operator, Otterize credentials operator, Otterize network mapper, and SPIRE.
Initial deployment may take a couple of minutes. You can add the --wait flag for Helm to wait for the deployment to complete and all pods to be Ready, or manually watch for all pods to be Ready using kubectl get pods -n otterize-system -w.
After all the pods are ready, you should see the following (or similar) in your terminal when you run kubectl get pods -n otterize-system:
NAME READY STATUS RESTARTS AGE
credentials-operator-controller-manager-6c56fcfcfb-vg6m9 2/2 Running 0 9s
intents-operator-controller-manager-65bb6d4b88-bp9pf 2/2 Running 0 9s
otterize-network-mapper-779fffd959-twjqd 1/1 Running 0 9s
otterize-network-sniffer-65mjt 1/1 Running 0 9s
otterize-spire-agent-lcbq2 1/1 Running 0 9s
otterize-spire-server-0 2/2 Running 0 9s
otterize-watcher-b9bf87bcd-276nt 1/1 Running 0 9s
Or choose to include browser visualization and Cloud-managed credentials:
Install Otterize in your cluster, with Otterize Cloud
Create an Otterize Cloud account
If you don't already have an account, browse to https://app.otterize.com to set one up.
If someone in your team has already created an org in Otterize Cloud, and invited you (using your email address), you may see an invitation to accept.
Otherwise, you'll create a new org, which you can later rename, and invite your teammates to join you there.
Install Otterize OSS, connected to Otterize Cloud
If no Kubernetes clusters are connected to your account, click the "connect your cluster" button to:
- Create a Cloud cluster object, specifying its name and the name of an environment to which all namespaces in that cluster will belong, by default.
- Connect it with your actual Kubernetes cluster, by clicking on the "Connection guide →" link and running the Helm commands shown there.
More details, if you're curious
Connecting your cluster simply entails installing Otterize OSS via Helm, using credentials from your account so Otterize OSS can report information needed to visualize the cluster.
The credentials will already be inlined into the Helm command shown in the Cloud UI, so you just need to copy that line and run it from your shell. If you don't give it the Cloud credentials, Otterize OSS will run fully standalone in your cluster — you just won't have the visualization in Otterize Cloud.
The Helm command shown in the Cloud UI also includes flags to turn off enforcement: Otterize OSS will be running in "shadow mode," meaning that it will not create network policies to restrict pod-to-pod traffic, or create Kafka ACLs to control access to Kafka topics. Instead, it will report to Otterize Cloud what would happen if enforcement were to be enabled, guiding you to implement IBAC without blocking intended access.
Deploy the example
Our example consists of two pods, "client" and "server", communicating over HTTP with mTLS. Otterize makes mTLS easy, requiring just three simple changes to a client pod spec:
- Generate credentials: add the credentials-operator.otterize.com/tls-secret-name annotation, which tells the Otterize credentials operator to generate mTLS credentials and store them in a Kubernetes secret whose name is the value of this annotation.
- Expose credentials in a volume: add a volume containing this secret to the pod.
- Mount the volume: mount the volume in every container in the pod.
Expand to see this structure
spec:
  template:
    metadata:
      annotations:
        # 1. Generate credentials as a secret called "client-credentials-secret":
        credentials-operator.otterize.com/tls-secret-name: client-credentials-secret
      ...
    spec:
      volumes:
        # 2. Create a volume containing this secret:
        - name: otterize-credentials
          secret:
            secretName: client-credentials-secret
      ...
      containers:
        - name: client
          ...
          volumeMounts:
            # 3. Mount volume into container
            - name: otterize-credentials
              mountPath: /var/otterize/credentials
              readOnly: true
Expand to see the complete YAML files of the example
- namespace.yaml
- client-deployment.yaml
- client-configmap.yaml
- client.js
- server.js
apiVersion: v1
kind: Namespace
metadata:
  name: otterize-tutorial-kafka-mtls

apiVersion: apps/v1
kind: Deployment
metadata:
  name: client
  namespace: otterize-tutorial-kafka-mtls
spec:
  selector:
    matchLabels:
      app: client
  template:
    metadata:
      labels:
        app: client
      annotations:
        credentials-operator.otterize.com/tls-secret-name: client-credentials-secret
    spec:
      containers:
        - name: client
          image: golang
          command: [ "/bin/sh", "-c", "--" ]
          args: [ "while true; do cd /app; cp src/* .; go get main; go run .; sleep infinity; done" ]
          volumeMounts:
            - name: ephemeral
              mountPath: /app
            - mountPath: /app/src
              name: client-go
            - name: otterize-credentials
              mountPath: /var/otterize/credentials
              readOnly: true
      volumes:
        - name: client-go
          configMap:
            name: client-go
        - name: otterize-credentials
          secret:
            secretName: client-credentials-secret
        - name: ephemeral
          emptyDir: { }
apiVersion: v1
kind: ConfigMap
metadata:
  name: client-go
  namespace: otterize-tutorial-kafka-mtls
data:
  client.go: |
    package main

    import (
        "crypto/tls"
        "crypto/x509"
        "fmt"
        "io/ioutil"
        "time"

        "github.com/Shopify/sarama"
        "github.com/sirupsen/logrus"
    )

    const (
        kafkaAddr     = "kafka.kafka:9092"
        testTopicName = "mytopic"
        certFile      = "/var/otterize/credentials/cert.pem"
        keyFile       = "/var/otterize/credentials/key.pem"
        rootCAFile    = "/var/otterize/credentials/ca.pem"
    )

    func getTLSConfig() (*tls.Config, error) {
        cert, err := tls.LoadX509KeyPair(certFile, keyFile)
        if err != nil {
            return nil, fmt.Errorf("failed loading x509 key pair: %w", err)
        }
        pool := x509.NewCertPool()
        rootCAPEM, err := ioutil.ReadFile(rootCAFile)
        if err != nil {
            return nil, fmt.Errorf("failed loading root CA PEM file: %w", err)
        }
        pool.AppendCertsFromPEM(rootCAPEM)
        return &tls.Config{
            Certificates: []tls.Certificate{cert},
            RootCAs:      pool,
        }, nil
    }

    func send_messages(producer sarama.SyncProducer) {
        i := 1
        for {
            msg := fmt.Sprintf("Message %d [sent by client]", i)
            _, _, err := producer.SendMessage(&sarama.ProducerMessage{
                Topic:     testTopicName,
                Partition: -1,
                Value:     sarama.StringEncoder(msg),
            })
            if err != nil {
                return
            }
            fmt.Printf("Sent message - %s\n", msg)
            time.Sleep(2 * time.Second)
            i++
        }
    }

    func loop_kafka() error {
        addrs := []string{kafkaAddr}
        config := sarama.NewConfig()

        fmt.Println("Loading mTLS certificates")
        config.Net.TLS.Enable = true
        tlsConfig, err := getTLSConfig()
        if err != nil {
            return err
        }
        config.Net.TLS.Config = tlsConfig

        fmt.Println("Connecting to Kafka")
        config.Net.DialTimeout = 5 * time.Second
        config.Net.ReadTimeout = 5 * time.Second
        config.Net.WriteTimeout = 5 * time.Second
        client, err := sarama.NewClient(addrs, config)
        if err != nil {
            return err
        }

        fmt.Println("Creating a producer and a consumer for -", testTopicName)
        config.Producer.Return.Successes = true
        config.Producer.Timeout = 5 * time.Second
        config.Consumer.MaxWaitTime = 5 * time.Second
        config.Producer.Return.Errors = true
        config.Consumer.Return.Errors = true
        producer, err := sarama.NewSyncProducerFromClient(client)
        if err != nil {
            return err
        }
        consumer, err := sarama.NewConsumerFromClient(client)
        if err != nil {
            return err
        }

        fmt.Println("Sending messages")
        go send_messages(producer)
        partConsumer, err := consumer.ConsumePartition(testTopicName, 0, 0)
        if err != nil {
            return err
        }
        for msg := range partConsumer.Messages() {
            fmt.Printf("Read message - %s\n", msg.Value)
        }
        return nil
    }

    func main() {
        for {
            err := loop_kafka()
            logrus.WithError(err).Println()
            fmt.Println("Loop exited")
            time.Sleep(2 * time.Second)
        }
    }
const fs = require('fs');
const https = require('https');

const options = {
  hostname: 'server.otterize-tutorial-mtls',
  port: 443,
  path: '/hello',
  method: 'GET',
  cert: fs.readFileSync('/var/otterize/credentials/cert.pem'),
  key: fs.readFileSync('/var/otterize/credentials/key.pem'),
  ca: fs.readFileSync('/var/otterize/credentials/ca.pem')
};

const req = https.request(
  options,
  res => {
    res.on('data', function (data) {
      console.log(data.toString())
    });
  }
);
req.end();
const https = require('https');
const fs = require('fs');

const options = {
  key: fs.readFileSync('/var/otterize/credentials/key.pem'),
  cert: fs.readFileSync('/var/otterize/credentials/cert.pem'),
  ca: fs.readFileSync('/var/otterize/credentials/ca.pem'),
  requestCert: true
};

https.createServer(
  options,
  (req, res) => {
    const peerCert = req.connection.getPeerCertificate();
    const ownCert = req.connection.getCertificate();
    console.log("Received request:");
    console.log(peerCert.subject.CN + ":\t" + req.method + " " + req.url);
    if (req.url === '/hello') {
      res.writeHead(200);
      res.end('mTLS hello world\nfrom: ' + ownCert.subject.CN + '\nto client: ' + peerCert.subject.CN);
    } else {
      res.end();
    }
  }).listen(443);
Deploy the client and server using kubectl:
kubectl apply -f https://docs.otterize.com/code-examples/mtls/all.yaml
Optional: check deployment status
kubectl get pods -n otterize-tutorial-mtls
You should see:
NAME READY STATUS RESTARTS AGE
client-5689997b5c-grlnt 1/1 Running 0 35s
server-6698c58cbc-v9n9b 1/1 Running 0 34s
Watch it in action
Confirm that the client can successfully call the server using HTTP with mTLS:
kubectl logs --tail 3 -n otterize-tutorial-mtls deploy/client
The client makes requests and prints out the server's response; our example server will respond with the common name of the server's certificate as well as the common name of the client's certificate:
mTLS hello world
from: server.otterize-tutorial-mtls # server's common name in the certificate
to client: client.otterize-tutorial-mtls # client's common name in the certificate
You can also confirm on the server side that it sees requests from this authenticated client:
kubectl logs --tail 1 -n otterize-tutorial-mtls deploy/server
The example server logs the common name of every client that makes a request:
client.otterize-tutorial-mtls: GET /hello
Otterize leverages SPIRE or the Otterize Cloud credentials service to manage certificate lifecycle tasks such as rotation, revocation, etc.
We recommend reloading credentials before each use, as Otterize makes sure the mounted credentials are constantly refreshed and up to date.
Inspect credentials
Using Otterize Cloud
Otterize Cloud can be used to visualize your network and overlay certificate information. When you browse to https://app.otterize.com you can see the access graph, showing information about the services in your cluster, how they access each other, and so on. The access graph offers many other insights, such as whether a service is or would be protected if enforcement were activated, or whether it would then block clients; but those are the subjects of other tutorials.
If you click on the server service to see its details, you can expand and see the certificate details:

You can see the common name (CN) used as the service's identity. When available, this will also show all DNS names (SANs), which together with the CN encompass all the possible names attested to by this certificate.
Using the command line
We can use openssl to inspect the generated credentials. The credentials are stored as a Kubernetes secret and are then mounted as files into the container.
Retrieve the credentials from the Kubernetes secret:
kubectl get secret -n otterize-tutorial-mtls client-credentials-secret -o jsonpath='{.data.cert\.pem}' | base64 -d > cert.pem
Inspect the credentials with openssl:
openssl x509 -in cert.pem -text | head -n 15
You should see output similar to:
Certificate:
Data:
Version: 3 (0x2)
Serial Number:
0b:eb:eb:4d:0e:02:7e:28:93:30:1c:55:26:22:8b:c7
Signature Algorithm: sha256WithRSAEncryption
Issuer: C = ..., O = ...
Validity
Not Before: Aug 24 12:19:57 2022 GMT
Not After : Sep 23 12:20:07 2022 GMT
Subject: C = ..., O = ..., CN = client.otterize-tutorial-mtls # the client's name
Subject Public Key Info:
Public Key Algorithm: id-ecPublicKey
Public-Key: (256 bit)
pub:
You can see that Otterize generated an X.509 keypair using the pod's name ("client") and namespace ("otterize-tutorial-mtls"): client.otterize-tutorial-mtls.
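If you just want the identity fields rather than the full dump, openssl can print the subject and SANs directly (the -ext flag requires OpenSSL 1.1.1+). To try the command without a cluster, you can run it against a throwaway self-signed certificate with the same CN shape:

```shell
# Generate a throwaway self-signed certificate (illustrative only; in the
# tutorial, cert.pem comes from the Otterize-managed Kubernetes secret):
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout key.pem -out cert.pem \
  -subj "/CN=client.otterize-tutorial-mtls" \
  -addext "subjectAltName=DNS:client.otterize-tutorial-mtls"

# Print only the subject (CN) and the DNS SANs:
openssl x509 -in cert.pem -noout -subject -ext subjectAltName
```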
Expand to see what happened behind the scenes
- We annotated the pods to let Otterize know it should generate mTLS credentials.
- The Otterize credentials operator:
- Generated matching mTLS credentials.
- Stored the mTLS credentials into Kubernetes secrets.
- The secrets were mounted (separately) into each pod's container.
- The pods communicated with each other using mutual TLS-authenticated HTTPS.
Otterize defaults to generating credentials with an expiry time of 1 day. The certificates are automatically refreshed before expiring, and therefore you must always read the credentials from file rather than caching them.
To set a longer expiration time, set the credentials-operator.otterize.com/cert-ttl annotation on your pods. For more information, see the documentation for the credentials operator.
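For example, on the client pod template from earlier (the TTL value and its unit shown here are an assumption for illustration; check the credentials operator documentation for the exact format):

```yaml
template:
  metadata:
    annotations:
      credentials-operator.otterize.com/tls-secret-name: client-credentials-secret
      # Hypothetical longer TTL; consult the credentials operator docs for units.
      credentials-operator.otterize.com/cert-ttl: "604800"
```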
What's next
- Learn how to manage and automatically provision mTLS credentials within a Kubernetes cluster.
- Enforce secure Kafka access with mTLS.
- Learn more about how the Otterize credentials operator works.
Teardown
To remove the deployed examples run:
kubectl delete -f https://docs.otterize.com/code-examples/mtls/all.yaml