
Simple mTLS deployment

Otterize can automatically provision mTLS credentials by using the service identities implied by Kubernetes. This tutorial will walk you through deploying mTLS certificates on a sample client-server deployment using the Otterize credentials operator. You can configure this operator to use a local SPIRE server to issue and manage certificates, or to use the Otterize Cloud service to manage this for you. You can read more about these options in the cryptographic credentials documentation.

In this tutorial, we will:

  • Deploy client and server pods communicating over HTTP with mTLS.
  • See that mTLS credentials were autogenerated.

We'll start by installing Otterize. You can do so using just Otterize OSS, without connecting it to Otterize Cloud, and use SPIRE for certificates. Or you can install Otterize OSS connected to Otterize Cloud, which adds web visualization as well as the option of using Cloud-managed credentials. (You can also connect to Cloud but still use SPIRE; refer to the cryptographic credentials documentation.)

Install Otterize in your cluster, without Otterize Cloud
Basic system memory requirements

Otterize requires about 200 MB of memory and 200 mCPU for all components (including a SPIRE deployment) to install and run properly on Minikube and EKS clusters.

You'll need Helm installed on your machine to install Otterize as follows:

helm repo add otterize https://helm.otterize.com
helm repo update
helm install otterize otterize/otterize-kubernetes -n otterize-system --create-namespace

This chart is a bundle of the Otterize intents operator, Otterize credentials operator, Otterize network mapper, and SPIRE. Initial deployment may take a couple of minutes. You can add the --wait flag for Helm to wait for deployment to complete and all pods to be Ready, or manually watch for all pods to be Ready using kubectl get pods -n otterize-system -w.
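For example, to install and have Helm block until all pods are Ready in one step, combine the install command above with the --wait flag:

helm install otterize otterize/otterize-kubernetes -n otterize-system --create-namespace --wait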

After all the pods are ready you should see the following (or similar) in your terminal when you run kubectl get pods -n otterize-system:

NAME                                                      READY   STATUS    RESTARTS   AGE
credentials-operator-controller-manager-6c56fcfcfb-vg6m9  2/2     Running   0          9s
intents-operator-controller-manager-65bb6d4b88-bp9pf      2/2     Running   0          9s
otterize-network-mapper-779fffd959-twjqd                  1/1     Running   0          9s
otterize-network-sniffer-65mjt                            1/1     Running   0          9s
otterize-spire-agent-lcbq2                                1/1     Running   0          9s
otterize-spire-server-0                                   2/2     Running   0          9s
otterize-watcher-b9bf87bcd-276nt                          1/1     Running   0          9s

Or choose to include browser visualization and Cloud-managed credentials:

Install Otterize in your cluster, with Otterize Cloud

Create an Otterize Cloud account

If you don't already have an account, browse to https://app.otterize.com to set one up.

If someone in your team has already created an org in Otterize Cloud, and invited you (using your email address), you may see an invitation to accept.

Otherwise, you'll create a new org, which you can later rename, and invite your teammates to join you there.

Install Otterize OSS, connected to Otterize Cloud

If no Kubernetes clusters are connected to your account, click the "connect your cluster" button to:

  1. Create a Cloud cluster object, specifying its name and the name of an environment to which all namespaces in that cluster will belong, by default.
  2. Connect it with your actual Kubernetes cluster by clicking the "Connection guide" link and running the Helm commands shown there.
More details, if you're curious

Connecting your cluster simply entails installing Otterize OSS via Helm, using credentials from your account so Otterize OSS can report information needed to visualize the cluster.

The credentials will already be inlined into the Helm command shown in the Cloud UI, so you just need to copy that line and run it from your shell. If you don't give it the Cloud credentials, Otterize OSS will run fully standalone in your cluster; you just won't have the visualization in Otterize Cloud.

The Helm command shown in the Cloud UI also includes flags to turn off enforcement: Otterize OSS will be running in "shadow mode," meaning that it will not create network policies to restrict pod-to-pod traffic, or create Kafka ACLs to control access to Kafka topics. Instead, it will report to Otterize Cloud what would happen if enforcement were to be enabled, guiding you to implement IBAC without blocking intended access.

Deploy the example

Our example consists of two pods, "client" and "server", communicating over HTTP with mTLS. Otterize makes mTLS easy, requiring just 3 simple changes to a client pod spec:

  1. Generate credentials: add the credentials-operator.otterize.com/tls-secret-name annotation, which tells the Otterize credentials operator to generate mTLS credentials, and to store them in a Kubernetes secret whose name is the value of this annotation.
  2. Expose credentials in a volume: add a volume containing this secret to the pod.
  3. Mount the volume: mount the volume in every container in the pod.
Expand to see this structure
spec:
  template:
    metadata:
      annotations:
        # 1. Generate credentials as a secret called "client-credentials-secret":
        credentials-operator.otterize.com/tls-secret-name: client-credentials-secret
      ...
    spec:
      volumes:
        # 2. Create a volume containing this secret:
        - name: otterize-credentials
          secret:
            secretName: client-credentials-secret
      ...
      containers:
        - name: client
          ...
          volumeMounts:
            # 3. Mount volume into container
            - name: otterize-credentials
              mountPath: /var/otterize/credentials
              readOnly: true
Expand to see the complete YAML files of the example
apiVersion: v1
kind: Namespace
metadata:
  name: otterize-tutorial-mtls

Deploy the client and server using kubectl:

kubectl apply -f https://docs.otterize.com/code-examples/mtls/all.yaml
Optional: check deployment status
kubectl get pods -n otterize-tutorial-mtls

You should see

NAME                      READY   STATUS    RESTARTS   AGE
client-5689997b5c-grlnt   1/1     Running   0          35s
server-6698c58cbc-v9n9b   1/1     Running   0          34s
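
You can also check that the credentials operator created the client's secret; the name client-credentials-secret comes from the tls-secret-name annotation in the pod spec above:

kubectl get secret -n otterize-tutorial-mtls client-credentials-secret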

Watch it in action

  1. Confirm that the client can successfully call the server using HTTP with mTLS:

    kubectl logs --tail 3 -n otterize-tutorial-mtls deploy/client

    The client makes requests and prints out the server's response; our example server will respond with the common name of the server's certificate as well as the common name of the client's certificate:

    mTLS hello world
    from: server.otterize-tutorial-mtls # server's common name in the certificate
    to client: client.otterize-tutorial-mtls # client's common name in the certificate
  2. You can also confirm on the server side that it sees requests from this authenticated client:

    kubectl logs --tail 1 -n otterize-tutorial-mtls deploy/server

    The example server logs the common name of every client that makes a request:

    client.otterize-tutorial-mtls:  GET /hello
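
To see the credentials as the client sees them, you can also list the files mounted from the secret; the mount path /var/otterize/credentials comes from the pod spec above, while the exact file names depend on the credentials operator's configuration:

kubectl exec -n otterize-tutorial-mtls deploy/client -- ls /var/otterize/credentials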
Certificate lifecycle management

Otterize leverages SPIRE or the Otterize Cloud credentials service to handle certificate lifecycle tasks such as rotation and revocation.

We recommend reloading credentials before each use, as Otterize makes sure the mounted credentials are constantly refreshed and up to date.
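
As a minimal sketch of what reloading before each use looks like, the command below shells out to curl, which re-reads the mounted files on every invocation, so rotated certificates are picked up automatically. The server URL, port, and the key/CA file names are illustrative assumptions; only cert.pem and the /var/otterize/credentials mount path appear in this tutorial.

# Run inside the client container; file names other than cert.pem are assumptions.
curl --cacert /var/otterize/credentials/ca.pem \
     --cert /var/otterize/credentials/cert.pem \
     --key /var/otterize/credentials/key.pem \
     https://server.otterize-tutorial-mtls:8443/hello   # hypothetical URL and port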

Inspect credentials

Using Otterize Cloud

Otterize Cloud can be used to visualize your network and overlay certificate information on it. When you browse to https://app.otterize.com you can see the access graph, showing you information about the services in your cluster, how they access each other, and so on. There are many other insights in the access graph, such as whether a service is or would be protected if enforcement were activated, or whether it would then block clients... but those are the subjects of other tutorials.

If you click on the server service to see its details, you can expand and see the certificate details:

Service details

You can see the common name (CN) used as the service's identity. When available, this view also shows the DNS names (SANs), which together with the CN encompass all the names attested to by this certificate.

Using the command line

We can use openssl to inspect the generated credentials. The credentials are stored as a Kubernetes secret and are then mounted as a file into the container.

  1. Retrieve the credentials from the Kubernetes secret:

    kubectl get secret -n otterize-tutorial-mtls client-credentials-secret -o jsonpath='{.data.cert\.pem}' | base64 -d > cert.pem
  2. Inspect the credentials with openssl:

    openssl x509 -in cert.pem -text | head -n 15

    You should see output similar to:

    Certificate:
        Data:
            Version: 3 (0x2)
            Serial Number:
                0b:eb:eb:4d:0e:02:7e:28:93:30:1c:55:26:22:8b:c7
            Signature Algorithm: sha256WithRSAEncryption
            Issuer: C = ..., O = ...
            Validity
                Not Before: Aug 24 12:19:57 2022 GMT
                Not After : Sep 23 12:20:07 2022 GMT
            Subject: C = ..., O = ..., CN = client.otterize-tutorial-mtls # the client's name
            Subject Public Key Info:
                Public Key Algorithm: id-ecPublicKey
                    Public-Key: (256 bit)
                    pub:
  3. You can see that Otterize generated an X.509 keypair using the pod's name ("client") and namespace ("otterize-tutorial-mtls"): client.otterize-tutorial-mtls.
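
To also see the DNS names (SANs) mentioned in the Cloud section above, you can print the certificate's Subject Alternative Name extension from the same file:

openssl x509 -in cert.pem -noout -text | grep -A 1 'Subject Alternative Name'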

Expand to see what happened behind the scenes
  1. We annotated the pods to let Otterize know it should generate mTLS credentials.
  2. The Otterize credentials operator:
    1. Generated matching mTLS credentials.
    2. Stored the mTLS credentials into Kubernetes secrets.
  3. The secrets were mounted (separately) into each pod's container.
  4. The pods communicated with each other using mutual TLS-authenticated HTTPS.
tip

Otterize defaults to generating credentials with an expiry time of 1 day. The certificates are automatically refreshed before expiring, so you must always read the credentials from the mounted files rather than caching them.

To set a longer expiration time, set the credentials-operator.otterize.com/cert-ttl annotation on your pods. For more information, see the documentation for the credentials operator.
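
For example, a pod template annotated with a custom TTL might look like the sketch below; the value shown (one year, expressed in seconds) is an assumption about the expected format, so check the credentials operator documentation before relying on it:

spec:
  template:
    metadata:
      annotations:
        credentials-operator.otterize.com/tls-secret-name: client-credentials-secret
        # Assumed to be a duration in seconds; verify the format in the credentials operator docs.
        credentials-operator.otterize.com/cert-ttl: "31536000"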

What's next

Teardown

To remove the deployed example, run:

kubectl delete -f https://docs.otterize.com/code-examples/mtls/all.yaml