Installation
Install Otterize without Otterize Cloud (OSS only)
You'll need Helm installed on your machine to install Otterize as follows:
helm repo add otterize https://helm.otterize.com
helm repo update
helm install otterize otterize/otterize-kubernetes -n otterize-system --create-namespace
This chart is a bundle of the Otterize intents operator, Otterize credentials operator, Otterize network mapper, and SPIRE.
Initial deployment may take a couple of minutes.
You can add the `--wait` flag for Helm to wait for the deployment to complete and all pods to be Ready, or manually watch for all pods to become Ready using `kubectl get pods -n otterize-system -w`.
Once all the pods are ready, you should see the following (or similar) output when you run `kubectl get pods -n otterize-system`:
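For example, the install command can be run with `--wait` and an explicit timeout (the 5-minute value here is an assumption; adjust it for your cluster):

```shell
# Install and block until all pods are Ready, or fail after 5 minutes
helm install otterize otterize/otterize-kubernetes \
  -n otterize-system --create-namespace \
  --wait --timeout 5m
```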
NAME READY STATUS RESTARTS AGE
credentials-operator-controller-manager-6c56fcfcfb-vg6m9 2/2 Running 0 9s
intents-operator-controller-manager-65bb6d4b88-bp9pf 2/2 Running 0 9s
otterize-network-mapper-779fffd959-twjqd 1/1 Running 0 9s
otterize-network-sniffer-65mjt 1/1 Running 0 9s
otterize-spire-agent-lcbq2 1/1 Running 0 9s
otterize-spire-server-0 2/2 Running 0 9s
otterize-watcher-b9bf87bcd-276nt 1/1 Running 0 9s
If you are installing Otterize for network policies, make sure your cluster supports network policies.
Before you start, you need to have a Kubernetes cluster with a CNI that supports NetworkPolicies.
Below are instructions for setting up a Kubernetes cluster with network policies. If you don't have a cluster already, we recommend starting out with a Minikube cluster.
- Minikube
- Google GKE
- AWS EKS
- Azure AKS
If you don't have the Minikube CLI, first install it.
Then start your Minikube cluster:
minikube start --network-plugin=cni
Install Calico to enforce network policies:
kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.24.1/manifests/calico.yaml
Calico is needed because Minikube does not enforce network policies by default.
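Before installing Otterize, you can confirm Calico is up by watching for its pods to become Ready (the `k8s-app=calico-node` label below assumes the manifest's defaults):

```shell
# Watch the Calico node pods until they report Running and Ready
kubectl get pods -n kube-system -l k8s-app=calico-node -w
```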
- gcloud CLI
- Console
To use the gcloud CLI for this tutorial, first install and then initialize it.
To enable network policy enforcement when creating a new cluster:
Run the following command:
gcloud container clusters create CLUSTER_NAME --enable-network-policy --zone=ZONE
(Replace `CLUSTER_NAME` with the name of the new cluster and `ZONE` with your zone.)
To enable network policy enforcement for an existing cluster, perform the following tasks:
Run the following command to enable the add-on:
gcloud container clusters update CLUSTER_NAME --update-addons=NetworkPolicy=ENABLED
(Replace `CLUSTER_NAME` with the name of the cluster.)
Then enable enforcement on the cluster itself, which re-creates the cluster's node pools with network policy enforcement enabled:
gcloud container clusters update CLUSTER_NAME --enable-network-policy
(Replace `CLUSTER_NAME` with the name of the cluster.)
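As a quick check, you can ask `gcloud` whether network policy enforcement is enabled on the cluster; the `--format` path below assumes the current cluster resource schema:

```shell
# Print the cluster's network policy enforcement state
gcloud container clusters describe CLUSTER_NAME --zone=ZONE \
  --format="value(networkPolicy.enabled)"
```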
To enable network policy enforcement when creating a new cluster:
Go to the Google Kubernetes Engine page in the Google Cloud console.
On the Google Kubernetes Engine page, click Create.
Configure your cluster as desired.
From the navigation pane, under Cluster, click Networking.
Select the checkbox to Enable network policy.
Click Create.
To enable network policy enforcement for an existing cluster:
Go to the Google Kubernetes Engine page in the Google Cloud console.
In the cluster list, click the name of the cluster you want to modify.
Under Networking, in the Network policy field, click Edit network policy.
Select the checkbox to Enable network policy for master and click Save Changes.
Wait for your changes to apply, and then click Edit network policy again.
Select the checkbox to Enable network policy for nodes.
Click Save Changes.
- Spin up an EKS cluster using the console, the AWS CLI, or `eksctl`.
- Install Calico for network policy enforcement, without replacing the CNI:
kubectl apply -f https://raw.githubusercontent.com/aws/amazon-vpc-cni-k8s/master/config/master/calico-operator.yaml
kubectl apply -f https://raw.githubusercontent.com/aws/amazon-vpc-cni-k8s/master/config/master/calico-crs.yaml
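These manifests install Calico via the Tigera operator; you can verify the pods come up before proceeding (the namespace names assume the operator's defaults):

```shell
# The operator itself
kubectl get pods -n tigera-operator
# The Calico components it deploys
kubectl get pods -n calico-system
```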
You can set up an AKS cluster using this guide.
For network policy support, no setup is required: Azure AKS comes with a built-in network policy implementation called Azure Network Policy Manager. You can choose whether you'd like to use this option or Calico when you create a cluster.
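As a sketch, creating an AKS cluster with Azure Network Policy Manager enabled might look like the following; the resource group and cluster names are placeholders:

```shell
az aks create \
  --resource-group RESOURCE_GROUP \
  --name CLUSTER_NAME \
  --network-plugin azure \
  --network-policy azure
```

Passing `--network-policy calico` instead selects Calico as the enforcement engine.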
Read more at the official documentation site.
Upgrade Otterize
Use Helm to upgrade to the latest version of Otterize:
helm repo update
helm upgrade --install otterize otterize/otterize-kubernetes -n otterize-system
Connect Otterize OSS to Otterize Cloud, or install Otterize with Otterize Cloud
To connect Otterize OSS to Otterize Cloud, you will need to log in, create a cluster, and follow the instructions.
In a nutshell, you run `helm upgrade` on the same Helm chart, but provide Otterize Cloud credentials. Upon creating a cluster, a guide will appear that walks you through doing this with the newly created credentials.
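For illustration only, the upgrade typically looks like the following. The exact value keys and the credentials themselves come from the guide shown in Otterize Cloud; the `global.otterizeCloud.credentials.*` keys below are assumptions:

```shell
helm repo update
# Re-deploy the same chart, now with Otterize Cloud credentials
helm upgrade --install otterize otterize/otterize-kubernetes -n otterize-system \
  --set global.otterizeCloud.credentials.clientId=<client-id> \
  --set global.otterizeCloud.credentials.clientSecret=<client-secret>
```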
Install just the Otterize network mapper
helm repo add otterize https://helm.otterize.com
helm repo update
helm install network-mapper otterize/network-mapper -n otterize-system --create-namespace
You can add the `--wait` flag for Helm to wait for the deployment to complete and all pods to be Ready, or manually watch for all pods to become Ready using `kubectl get pods -n otterize-system -w`.
Install the Otterize CLI
The Otterize CLI is a command-line utility used to control and interact with the Otterize network mapper, manipulate local intents files, and interact with Otterize Cloud.
To install the CLI:
- Mac
- Windows
- Linux
- Brew
- Apple Silicon
- Intel 64-bit
brew install otterize/otterize/otterize-cli
curl -LJO https://get.otterize.com/otterize-cli/v0.1.20/otterize_macOS_arm64_notarized.zip
tar xf otterize_macOS_arm64_notarized.zip
sudo cp otterize /usr/local/bin # optionally move to PATH
curl -LJO https://get.otterize.com/otterize-cli/v0.1.20/otterize_macOS_x86_64_notarized.zip
tar xf otterize_macOS_x86_64_notarized.zip
sudo cp otterize /usr/local/bin # optionally move to PATH
- Scoop
- 64-bit
scoop bucket add otterize-cli https://github.com/otterize/scoop-otterize-cli
scoop update
scoop install otterize-cli
Invoke-WebRequest -Uri https://get.otterize.com/otterize-cli/v0.1.20/otterize_Windows_x86_64.zip -OutFile otterize_Windows_x86_64.zip
Expand-Archive otterize_Windows_x86_64.zip -DestinationPath .
# optionally move to PATH
- 64-bit
wget https://get.otterize.com/otterize-cli/v0.1.20/otterize_Linux_x86_64.tar.gz
tar xf otterize_Linux_x86_64.tar.gz
sudo cp otterize /usr/local/bin # optionally move to PATH
More variants are available at the GitHub Releases page.
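After installation, you can verify the CLI is on your PATH and working; `network-mapper list` queries the network mapper running in your cluster:

```shell
# Print the installed CLI version
otterize version
# List the connections the network mapper has discovered
otterize network-mapper list
```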
Uninstall Otterize
Before uninstalling
Before uninstalling Otterize, make sure to delete any resources created by users: `ClientIntents` and `KafkaServerConfig`s.
When you remove these resources, the intents operator cleans up the network policies and Kafka ACLs it created. If you remove the operator first, it will not be able to clean up.
If, however, you want the network policies and ACLs to stay in place (for example, because you're redeploying with a different configuration), don't remove them.
- First check if any `ClientIntents` exist: `kubectl get clientintents --all-namespaces`
- If so, remove them.
- Check if any `KafkaServerConfig`s exist: `kubectl get kafkaserverconfig --all-namespaces`
- If so, remove them.
It's important to remove `ClientIntents` before removing `KafkaServerConfig`s: once you remove the `KafkaServerConfig` for a Kafka cluster, the intents operator no longer knows how to connect to it and perform cleanup.
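The cleanup steps above can be sketched as follows; note the order matters, and `--all --all-namespaces` deletes every instance in the cluster:

```shell
# Remove all ClientIntents first, so the operator cleans up policies and ACLs
kubectl delete clientintents --all --all-namespaces
# Only then remove the KafkaServerConfigs
kubectl delete kafkaserverconfig --all --all-namespaces
```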
Uninstallation
helm uninstall otterize -n otterize-system