Terminology
An overview of the terminology used in Otterize OSS documentation. If you think a term is missing here, please let us know.
Basics
Workload
An Otterize workload is the logical identity used to refer to a particular service. For example, Kafka is a workload, the frontend is a workload, and the login-service is a workload; an S3 bucket, however, is not a workload.
A workload can be a client, a server, or both. Workloads are connected through client intents: one workload intends to call another workload. Learn how workload identity resolution happens.
Service
A Service refers to a Kubernetes Service. This is different from a Workload, which is a logical entity that exists in Otterize and can be a client or a server. A Service is a Kubernetes object that exposes a set of pods as a network service. Read more about Kubernetes Services in the official documentation.
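For illustration, a minimal Service manifest might look like the following sketch (the name, labels, and ports are placeholders, not taken from Otterize examples):

```yaml
# Minimal Kubernetes Service (illustrative names and ports).
# It exposes the pods labeled app: login-service inside the cluster
# on port 80, forwarding traffic to the pods' port 8080.
apiVersion: v1
kind: Service
metadata:
  name: login-service
spec:
  selector:
    app: login-service
  ports:
    - port: 80
      targetPort: 8080
```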
Resource
A resource is a generic term for any entity that can be accessed or interacted with but is not itself a workload. In the context of Otterize, resources include databases, Kafka topics, and S3 buckets.
CLI
The Otterize CLI is a command-line utility used to control and interact with the Otterize network mapper, manipulate local Intents files, and interact with Otterize Cloud.
Intent (or client intent)
Otterize intents are a way to declare that one workload intends to call another workload or use a resource. Otterize uses them to compute which credentials and policies are required to enable the declared calls, and to block any unintended calls. An intent is a client's declaration of a particular call to a server; all of a given client's intents to the servers it intends to call are collected in a single client intents file. Learn more about intents.
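As a rough sketch, a client intents file for a hypothetical frontend workload that calls a backend workload could look like this (the workload names are illustrative, and the exact apiVersion depends on the Otterize version you run):

```yaml
# Sketch of a client intents file: the frontend workload declares that
# it intends to call the backend workload. Names are illustrative.
apiVersion: k8s.otterize.com/v1alpha3
kind: ClientIntents
metadata:
  name: frontend
spec:
  service:
    name: frontend    # the client workload declaring its intents
  calls:
    - name: backend   # a server this client intends to call
```

Applying a file like this lets Otterize compute the credentials and policies needed for the frontend-to-backend calls to go through.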
Integrations
An Integration is how you configure Otterize Cloud to work with your infrastructure. For example, you can configure Otterize Cloud to work with a Kubernetes cluster, your GitHub organization, your AWS account, your Slack organization, etc.
Each integration has its own steps that you need to complete to set it up. For example, a Kubernetes integration requires deploying the Otterize operators, and a GitHub integration requires connecting the Otterize GitHub app to the relevant repositories.
Identity
PKI
PKI stands for public key infrastructure, and refers to the infrastructure used to provision X.509 credentials. A common use case for PKI is to support mTLS.
mTLS
mTLS stands for mutual TLS: a form of TLS in which the client and the server authenticate each other.
In regular TLS, only the server is authenticated. For example, when you connect to google.com and your browser authenticates google.com using its certificate, you're using TLS; but google.com does not authenticate you, the client, with a certificate, so the connection uses plain TLS rather than mTLS.
SPIRE
An open-source implementation of the SPIFFE specification. It's used for workload attestation and credential management. Read more about SPIRE in the official documentation.
Credentials operator
The Otterize credentials operator automatically resolves pods to dev-friendly workload identities, registers them with a SPIRE server or with the Otterize Cloud-managed credentials service, and provisions credentials as Kubernetes Secrets.
Enforcement
Network policies
Kubernetes network policies can be used to control network access between pods in a Kubernetes cluster. To do so, they require a Kubernetes CNI network plugin that supports network policy enforcement; one commonly used CNI that does is Calico. Read more about network policies in the official documentation.
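For example, a network policy along the lines of the following sketch (labels and names are placeholders) allows ingress to the backend pods only from the frontend pods:

```yaml
# Illustrative network policy: allow ingress to pods labeled app: backend
# only from pods labeled app: frontend; other ingress to backend is denied.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: frontend-to-backend
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
```

Otterize computes policies like this from client intents, so you typically don't need to write them by hand.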
Kafka ACLs
ACL stands for Access Control List, a built-in mechanism in Kafka (and other systems) for authorizing access to resources such as Kafka topics. Read more about Kafka ACLs in the official documentation.
Kubernetes
Custom resource
A Kubernetes custom resource is a resource that is not part of the base Kubernetes distribution (unlike, say, Deployment or Pod), but is instead added along with an installed operator. Otterize ClientIntents is one such resource. Read more about Kubernetes custom resources here.
CNI (Container Network Interface)
CNI is a CNCF project that provides libraries for implementing plugins that configure network interfaces in Linux containers; Kubernetes uses it to provide pods running in a cluster with network connectivity. Examples of CNI plugins are Calico, Cilium, and the AWS VPC CNI plugin. Read more about Kubernetes CNI plugins here.
Namespaces and Environments
Namespaces are used to group related services within a Kubernetes cluster and can be mapped to different environments (e.g., dev, staging, production). Intents can be cross-namespace and cross-environment, and Otterize Cloud associates namespaces with their respective environments. Environment names must be unique within an organization.
Clusters
A Kubernetes cluster connected to Otterize Cloud is represented by a cluster object in the cloud. This object contains information about the cluster's intents, services, credentials, and configuration. A single environment can contain multiple namespaces and span multiple clusters, depending on the organization's needs. Cluster names must be unique within an organization.