Why choose Calisti
Top reasons why organizations choose Calisti.
Calisti is a platform for setting up and operating production-ready Apache Kafka clusters on Kubernetes, leveraging a Cloud Native technology stack. Calisti simplifies and automates configuration and management, enabling you to focus on your core business. With Calisti, you can easily encrypt traffic, configure, scale, and manage Kafka brokers and topics across hybrid and multi-cloud environments, reduce operational costs, improve reliability, and increase scalability. You can monitor and troubleshoot Kafka in real time, and manage Kafka's access control. Calisti includes Zookeeper, Koperator, Envoy, and many other components hosted in a managed service mesh. All components are automatically installed, configured, and managed to operate a production-ready Kafka cluster on Kubernetes.
Koperator (formerly called Banzai Cloud Kafka operator) is a Kafka operator that automates the provisioning, management, autoscaling, and operations of Apache Kafka clusters deployed on Kubernetes. It is open-source software developed by Cisco and is freely available on GitHub.
Calisti supports Apache Kafka versions 2.0 and above.
Calisti allows you to run large Apache Kafka clusters not only in the cloud, but also in on-premises, multi-cloud, and hybrid-cloud environments.
Running Kafka over Istio brings additional security benefits, scalability and durability, locality-based load balancing, and many other useful features.
Currently Service Mesh Manager and Calisti use separate service meshes with separate control planes. The Calisti service mesh is used only for the Apache Kafka brokers and the control-plane services of Calisti. They are tied together in the sense that they are managed by the same Istio operator and use the same version of Istio.
Note that currently you cannot manage the Calisti service mesh from the Service Mesh Manager UI, only from the command line.
Koperator (formerly called Banzai Cloud Kafka operator) is a Kubernetes operator that automates the provisioning, management, autoscaling, and operations of Apache Kafka clusters deployed on Kubernetes.
Koperator is an open-source component of the commercial Calisti product. In addition to Koperator, Calisti installs, configures, and manages several other components that are needed for the reliable operation and management of a Kafka cluster, and also provides several other features, including commercial support.
Calisti exposes Cruise Control and Kafka JMX metrics to Prometheus and acts as a Prometheus alertmanager: it receives alerts defined in Prometheus and takes actions based on Prometheus alert annotations, so it can handle and react to alerts automatically, without having to involve human operators.
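As a rough sketch of how such an alert could be defined, the following Python snippet builds a PrometheusRule custom resource and applies it with the official Kubernetes client. The namespace, the alert expression, and the annotation keys (`command`, `brokerConfigGroup`) are illustrative assumptions based on the Koperator documentation; verify the exact names and values against your Koperator version.

```python
from kubernetes import client, config

config.load_kube_config()
api = client.CustomObjectsApi()

# Illustrative PrometheusRule: when the alert fires, Koperator can react to
# the annotations (keys assumed here) and scale the Kafka cluster up.
prometheus_rule = {
    "apiVersion": "monitoring.coreos.com/v1",
    "kind": "PrometheusRule",
    "metadata": {"name": "kafka-scaling-alerts", "namespace": "kafka"},
    "spec": {
        "groups": [{
            "name": "kafka-autoscaling",
            "rules": [{
                "alert": "KafkaBrokersOverloaded",
                # Placeholder expression -- use a metric that reflects broker load.
                "expr": "sum(rate(kafka_network_requestmetrics_requests_total[5m])) > 1000",
                "for": "10m",
                "annotations": {
                    "command": "upScale",            # assumed Koperator action annotation
                    "brokerConfigGroup": "default",  # config group for the new broker
                },
            }],
        }],
    },
}

api.create_namespaced_custom_object(
    group="monitoring.coreos.com", version="v1",
    namespace="kafka", plural="prometheusrules", body=prometheus_rule)
```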
Calisti can gracefully scale your Kafka clusters both up and down, and also supports vertical capacity scaling individually for each broker, including adding new disks.
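For illustration only, a scale-up could look like the sketch below: it appends one broker entry to the `KafkaCluster` custom resource that Koperator reconciles. The resource name, namespace, and the `brokers`/`brokerConfigGroup` fields follow the schema published in the Koperator samples and may differ in your version.

```python
from kubernetes import client, config

config.load_kube_config()
api = client.CustomObjectsApi()

GROUP, VERSION, PLURAL = "kafka.banzaicloud.io", "v1beta1", "kafkaclusters"

# Fetch the KafkaCluster resource managed by Koperator (name and namespace assumed).
kafka = api.get_namespaced_custom_object(GROUP, VERSION, "kafka", PLURAL, "kafka")

# Append one more broker; Koperator provisions it and Cruise Control rebalances
# partitions onto the new broker. Removing an entry scales the cluster down.
next_id = max(b["id"] for b in kafka["spec"]["brokers"]) + 1
kafka["spec"]["brokers"].append({"id": next_id, "brokerConfigGroup": "default"})

api.replace_namespaced_custom_object(GROUP, VERSION, "kafka", PLURAL, "kafka", body=kafka)
```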
Calisti fully automates managed mutual TLS (mTLS) encryption and authentication. You don't need to configure your brokers to use SSL, as Calisti provides mTLS out of the box at the network layer (implemented through a lightweight, managed Istio mesh). All services deployed by Calisti (Zookeeper, Koperator, the Kafka cluster, Cruise Control, MirrorMaker2, and so on) interact with each other using mTLS.
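To illustrate what this means for client code, the sketch below uses a plaintext Kafka producer; the bootstrap address is a placeholder for the broker service created in your cluster. The client itself needs no TLS configuration, because the managed Istio sidecars wrap the connection in mTLS.

```python
from kafka import KafkaProducer  # kafka-python; any Kafka client works the same way

# Placeholder bootstrap address -- substitute the broker service in your cluster.
producer = KafkaProducer(bootstrap_servers="kafka-all-broker.kafka.svc.cluster.local:29092")

# Plaintext from the application's point of view; the Istio sidecars managed by
# Calisti transparently authenticate and encrypt the connection with mTLS.
producer.send("example-topic", b"hello from inside the mesh")
producer.flush()
```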
Nowadays every enterprise must be prepared for an eventual service disruption. Calisti provides multiple ways for you to get your Kafka clusters ready for disaster recovery, including MirrorMaker2 and Container Storage Interface (CSI).
Some of the components are shared between Service Mesh Manager and Calisti, while others are separate or different.
Both Service Mesh Manager and Calisti use the following components, but have separate instances of them:
Shared instances:
For managing certificates of the Istio control plane, Calisti uses the csr-operator (certificate-signing-request operator), while Service Mesh Manager uses cert-manager.
The Istio used by Calisti is slightly different from the one used by Service Mesh Manager, because Calisti uses custom istio-proxy builds to handle Kafka traffic, so the sidecars are actually different.
Yes, Calisti can be deployed in on-premises, hybrid, and multi-cloud environments. Specifically, it can run on AWS, Azure, Google Cloud, on-premises (OpenShift, Rancher), or in virtual machine environments, allowing you to manage cloud native apps and Apache Kafka clusters across multiple cloud providers or on-premises data centers.
Service Mesh Manager uses the mutual TLS feature of Istio for service-to-service authentication and traffic encryption. In Service Mesh Manager, you can manage mTLS settings between services with the CLI or on the UI, mesh-wide, namespace-wide, and on the service-specific level. Calisti is fully compliant with the rules for cryptographic modules of FIPS 140-2 Security Level 1; the ciphers used are even more secure than the minimum allowed by the standard.
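Because these settings are ultimately expressed as plain Istio resources, a namespace-wide strict mTLS policy corresponds to a standard `PeerAuthentication` object. The sketch below shows one applied with the Kubernetes Python client; the namespace name is a placeholder.

```python
from kubernetes import client, config

config.load_kube_config()
api = client.CustomObjectsApi()

# Namespace-wide strict mTLS, expressed as a standard Istio resource.
peer_auth = {
    "apiVersion": "security.istio.io/v1beta1",
    "kind": "PeerAuthentication",
    "metadata": {"name": "default", "namespace": "demo"},  # "demo" is a placeholder
    "spec": {"mtls": {"mode": "STRICT"}},
}

api.create_namespaced_custom_object(
    group="security.istio.io", version="v1beta1",
    namespace="demo", plural="peerauthentications", body=peer_auth)
```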
No, Service Mesh Manager leverages Kubeconfig, the official client libraries, and the Kubernetes API to perform authentication and authorization for its users. If you’re allowed to add, edit, or delete specific Istio custom resources, you’ll have the same permissions from Service Mesh Manager as well. The Service Mesh Manager installer provides a way - mainly for demo/tryout purposes - to disable user authentication and use its own service account token for all communication with the Kubernetes API server.
By default, authentication is needed to access the Service Mesh Manager UI. The observability features are available to every authenticated user, while access to the control features is based on the authenticated user's RBAC permissions.
A service mesh is a software layer that handles all communication between services. It is independent of each service's code, so it can work with multiple service management systems and across network boundaries without a problem. It transparently connects services and manages the connections between them.
By enabling independence between applications and infrastructure, containers facilitated a shift from monolithic to microservice architectures. This shift came with a multitude of challenges: container orchestration tools solved the build and deployment issues of microservices, but left many runtime challenges unaddressed. A service mesh addresses these runtime issues by providing a bundle of capabilities such as security, policy configuration, ingress and egress control, load balancing, distributed tracing, traffic shaping, and metrics collection.
Service Mesh Manager helps you to confidently scale your microservices over single- and multi-cluster environments and to make daily operational routines standardized and more efficient. The componentization and scaling of modern applications inevitably leads to a number of optimization and management issues:
Service Mesh Manager helps you accomplish these tasks and many others in a simple and scalable way, by leveraging the Istio service mesh and building many automations around it. Our tag-line for the product captures this succinctly:
Service Mesh Manager operationalizes the service mesh to bring deep observability, convenient management, and policy-based security to modern container-based applications.
Istio is still the most feature-complete and mature service mesh solution by far. It may have its shortcomings, especially around complexity, but it has a great community around it that continuously works towards making it better. We also aim to solve some of these problems with Service Mesh Manager. One of the main use cases of Service Mesh Manager is the ability to connect multiple clusters even across different networks, and Istio has several flexible topologies for different use cases to achieve this.
We developed the open source Cisco Istio operator to solve the first tier of problems related to the installation, management and upgrade of the Istio infrastructure components. The operator continuously reconciles the state of the Istio components to keep them healthy, and facilitates multi-cluster federation. We offer community and paid support for the Istio operator.
The Cisco Istio operator is an open-source component of the commercial Service Mesh Manager product. In addition to the Cisco Istio operator, Service Mesh Manager:
All Service Mesh Manager features work in multi-cluster configurations as well, and a unified cross-cluster application view is provided.
After you've installed Service Mesh Manager and want to put your application in the mesh, you need to inject a sidecar into the pods of your application. You can do that manually, or by enabling automatic injection for your namespaces and restarting your pods. While in theory it's usually that simple, we know that in practice an application can have problems running with a sidecar and may not behave the same as before. We have deep domain knowledge of Istio and have seen a lot of these problems, so when integrating your application, we can help you overcome these issues.
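As a minimal sketch of the automatic-injection route, the snippet below labels a namespace for injection and restarts a Deployment so its pods are recreated with sidecars. The namespace, Deployment name, and the `istio-injection=enabled` label are assumptions; revision-based meshes use an `istio.io/rev` label instead.

```python
from datetime import datetime, timezone
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()
apps = client.AppsV1Api()

# Enable automatic sidecar injection for the namespace (label is an assumption;
# revisioned control planes use istio.io/rev=<revision> instead).
core.patch_namespace("demo", {"metadata": {"labels": {"istio-injection": "enabled"}}})

# Restart the workload so new pods come up with the injected sidecar,
# mirroring what `kubectl rollout restart` does.
restarted_at = datetime.now(timezone.utc).isoformat()
apps.patch_namespaced_deployment(
    name="myapp", namespace="demo",
    body={"spec": {"template": {"metadata": {
        "annotations": {"kubectl.kubernetes.io/restartedAt": restarted_at}}}}})
```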
Most of the overhead of Service Mesh Manager comes from Istio itself, in two different layers.
In general, no. There is some latency overhead added for every request because of the sidecar proxies, but if the mesh is configured properly it shouldn't be more than a few milliseconds. Per Istio's own measurements, with 16 concurrent connections and 1000 RPS, Istio adds 3ms over the baseline (P50) when a request travels through both a client and server proxy. At 64 concurrent connections, Istio adds 7ms over the baseline, with Mixer disabled. There may be some latency-critical applications where this matters, but for most apps it won't make a difference.
Service Mesh Manager provides a few handy features to keep a mesh healthy. The most important of these is the mesh validation feature. Beyond doing basic validation of the Istio configuration, Service Mesh Manager analyzes the whole mesh state and tries to find ambiguous or invalid configurations, for example a label selector that points to an invalid service, or shadowed or ambiguous routing configuration.
Service Mesh Manager also provides debugging features like tapping an Envoy proxy and analyzing requests. You can also keep track of real-time metrics on the dashboard and check if your latency or error rate values are increasing.
No, we've designed Service Mesh Manager so that it doesn't add a new abstraction layer. Istio is complicated enough in itself, and introducing a few new CRDs on top of it wouldn't do any good. Service Mesh Manager can help you configure your mesh through a CLI or the dashboard, but those commands are always translated to plain old Istio CRs. Doing it this way makes Service Mesh Manager completely compatible with all Istio configuration changes. If you write Istio config directly, Service Mesh Manager will still be able to detect it, display it, and validate it properly.
Yes. Since there is no additional abstraction layer involved, Service Mesh Manager is able to interpret your Istio configurations. If your virtual services, service entries, and other Istio resources are deployed through a CI/CD flow, Service Mesh Manager will instantly parse them and display your configuration on the dashboard.
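For example, a traffic-splitting `VirtualService` written directly against the Istio API, like the sketch below, will show up on the dashboard just like configuration created through Service Mesh Manager itself. The service names, subsets, and namespace are hypothetical.

```python
from kubernetes import client, config

config.load_kube_config()
api = client.CustomObjectsApi()

# A plain Istio VirtualService splitting traffic 90/10 between two subsets.
virtual_service = {
    "apiVersion": "networking.istio.io/v1beta1",
    "kind": "VirtualService",
    "metadata": {"name": "reviews-canary", "namespace": "demo"},
    "spec": {
        "hosts": ["reviews"],
        "http": [{
            "route": [
                {"destination": {"host": "reviews", "subset": "v1"}, "weight": 90},
                {"destination": {"host": "reviews", "subset": "v2"}, "weight": 10},
            ],
        }],
    },
}

api.create_namespaced_custom_object(
    group="networking.istio.io", version="v1beta1",
    namespace="demo", plural="virtualservices", body=virtual_service)
```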
Our Istio distribution is very close to upstream Istio, but contains a few stability fixes and enhancements, especially around multi-cluster topologies and telemetry. For a detailed list of changes compared to upstream Istio, see Istio distribution.
In most cases you don't need to change anything. But we have experience with putting a lot of different applications in Istio, and know that sometimes there are special cases where an application doesn't handle having a sidecar well. It could be some special HTTP headers, or an mTLS configuration that conflicts with the Envoy sidecar. In these cases some slight changes may be needed, and we can help you solve these kinds of issues.
One of the main goals of Service Mesh Manager is to give you an overview of your service mesh. You'll see the topology of the services running in the mesh with real-time monitoring information, such as request rates, error rates, and latencies.
You also get one-click access to distributed tracing with Jaeger and to Grafana dashboards if you want to explore the metrics provided by the service mesh further. Service Mesh Manager complements the service mesh metrics with a drill-down view of your services and workloads, from their mesh configuration down to pod- and node-level information and resource utilization metrics.
Calisti detects and captures metrics specific to protocols such as PostgreSQL and Apache Kafka to enhance observability; it can also capture DNS requests and report these API endpoints for additional security.
The Calisti topology view shows the status of the Kafka infrastructure, monitoring the overall health, load, and performance details at the node, broker, and service level.
Service Mesh Manager offers a lot of different features that help you debug your services. Usually you start by checking real-time error rates and latency values on the topology view, then continue with mesh validations and the drill-down view of a service or workload. You also have one-click access to Jaeger and Grafana dashboards if you want to further explore your traces and metrics. If you need to check requests flowing through an Envoy proxy, Service Mesh Manager provides a tapping feature to see access logs or a detailed view of the requests.
Service Mesh Manager helps you manage multi-cluster service meshes in three different layers.
Yes, attaching and detaching clusters from a service mesh can easily be done through the Service Mesh Manager CLI. These CLI commands are backed by the Istio operator that manages remote clusters through Kubernetes custom resources and secrets that hold the Kubeconfigs of those clusters.
Perhaps the most common use case for a multi-cluster service mesh is to connect on-premises and cloud environments easily. For example, using a multi-cluster mesh you can securely connect your cloud services to the legacy services running in on-prem clusters.
Public clouds are also often used to scale out from an on-premises datacenter during particular events when your services need to handle an increased load.
Some common load balancing and high availability patterns can easily be implemented using a multi-cluster mesh as well. You can have multiple clusters in different regions using locality-based load balancing, and drive traffic to another region when a failure occurs in a specific region.
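A hedged sketch of what such a failover policy can look like in plain Istio terms: a `DestinationRule` with locality-aware load balancing and outlier detection (which Istio requires for locality failover to take effect). The host, regions, and thresholds are placeholders.

```python
from kubernetes import client, config

config.load_kube_config()
api = client.CustomObjectsApi()

# Locality-aware load balancing with cross-region failover (values are placeholders).
destination_rule = {
    "apiVersion": "networking.istio.io/v1beta1",
    "kind": "DestinationRule",
    "metadata": {"name": "frontend-locality", "namespace": "demo"},
    "spec": {
        "host": "frontend.demo.svc.cluster.local",
        "trafficPolicy": {
            "loadBalancer": {
                "localityLbSetting": {
                    "enabled": True,
                    "failover": [{"from": "us-east1", "to": "us-west1"}],
                },
            },
            # Outlier detection must be configured for locality failover to activate.
            "outlierDetection": {
                "consecutive5xxErrors": 5,
                "interval": "30s",
                "baseEjectionTime": "60s",
            },
        },
    },
}

api.create_namespaced_custom_object(
    group="networking.istio.io", version="v1beta1",
    namespace="demo", plural="destinationrules", body=destination_rule)
```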
Calisti is offered in Free, Pro, and Enterprise tiers. The Free and Pro tiers are available directly from the website:
To get started with Calisti for Kafka, simply follow the quick start steps. Depending on the mode you choose to run Calisti in, you will need a Kubernetes cluster that meets certain minimum size requirements. The quick start covers the prerequisites and setup steps.