(And why you need one…like yesterday!)

Once upon a time, long, long ago, enterprise applications were run on physical servers in on-premises datacenters. System administrators spent many hours of their lives in freezing cold server rooms. When something went wrong or when they had to upgrade hardware or applications, these heroic admins pulled a lot of overnight shifts in the sub-arctic server rooms, surviving mostly on caffeine and chocolate.
Thanks to the development of virtualization and cloud architecture, those days are mostly over (except for the folks who still work in traditional on-premises enterprise IT environments; they’re still freezing and highly caffeinated). Most companies have moved at least part of their infrastructure to the cloud, shifting some of the IT burden onto cloud providers and focusing instead on the workloads themselves. Life is a lot more civilized these days.
As cloud architecture has evolved and matured, we’ve seen the development of more sophisticated container technology and microservices (a microservice usually provides a single feature or function, such as a retail site’s shopping cart or its display of recently viewed items). Scalable, highly resilient orchestration systems such as Kubernetes, together with CI/CD pipelines and GitOps practices, emerged to help deploy, scale, auto-recover, and release those containers. Operations folks started to get more sleep and better nutrition.
Running microservices in their own containers made it even easier and more practical to add, upgrade, and scale specific features and functionality without having to release new versions of the entire cloud application with every change. Soon, it became clear that another, higher-level abstraction of services was necessary to manage inter-service traffic and to provide better security and observability. Service mesh to the rescue!
What is a service mesh?
While microservices can communicate with each other directly, abstracting that logic and oversight into a new infrastructure layer, a service mesh, simplifies the management of decentralized applications. When you consider that a modern app can be composed of hundreds of containerized microservices, it’s easy to see how valuable a service mesh is for managing load balancing, encryption, failure recovery, and more.
Service-mesh architecture attaches a proxy instance to each microservice, microservice container, or container-orchestration unit, such as a Kubernetes pod. These proxies are called “sidecars” because they run alongside the services rather than inside them. In the aggregate, the sidecars form a unified data plane, the “mesh,” which routes requests between services.
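To make this concrete, here’s a minimal sketch of how sidecar injection works in Istio (the namespace, service name, and image below are hypothetical). Labeling a namespace `istio-injection: enabled` tells the mesh to add an Envoy proxy container to every pod scheduled there, with no changes to the application code:

```yaml
# Enable automatic sidecar injection for everything in this namespace.
apiVersion: v1
kind: Namespace
metadata:
  name: shop                  # hypothetical namespace
  labels:
    istio-injection: enabled
---
# An ordinary Deployment; Istio injects the Envoy sidecar at pod creation.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cart                  # hypothetical shopping-cart microservice
  namespace: shop
spec:
  replicas: 2
  selector:
    matchLabels:
      app: cart
  template:
    metadata:
      labels:
        app: cart
    spec:
      containers:
      - name: cart
        image: example.com/cart:1.0   # placeholder image
        ports:
        - containerPort: 8080
```

Once the pods come up, `kubectl get pods` shows two containers per pod: the application and its sidecar.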
This type of architecture has grown in popularity as enterprises have worked to scale their cloud applications. Facebook, Amazon, Netflix, and Uber are just a few of the companies whose giant cloud applications are composed of hundreds of microservices, all running in service meshes.
The advantages
A service mesh:
- Handles security issues, including certificates and mTLS.
- Provides traffic management (this varies somewhat with specific meshes, but with Istio it includes timeouts, retries, circuit breakers, and canary deployments; see the sketch after this list).
- Frees developers to work on their application code and business logic.
- Makes cloud applications more resilient by routing requests away from services that have problems.
- Assists with policy control.
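As an illustration of the traffic-management bullet above, here’s a minimal Istio sketch (the service names are hypothetical). The VirtualService sets a timeout and retry policy, and the DestinationRule’s outlier detection acts as a circuit breaker by temporarily ejecting endpoints that keep returning errors:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: cart
spec:
  hosts:
  - cart
  http:
  - route:
    - destination:
        host: cart
    timeout: 2s              # fail fast rather than hang
    retries:
      attempts: 3
      perTryTimeout: 500ms
---
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: cart
spec:
  host: cart
  trafficPolicy:
    outlierDetection:        # circuit breaking
      consecutive5xxErrors: 5
      interval: 30s
      baseEjectionTime: 60s
```

Canary deployments work along the same lines: a DestinationRule defines version subsets, and a VirtualService splits traffic between them by weight.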
The caveats
While a service mesh is a great solution for managing an increasingly complicated microservices architecture, there are definitely issues to keep in mind, such as the following:
- It’s important to ensure that the service mesh you choose can be easily integrated with the other tools you use.
- Although the service mesh will simplify management once it’s installed and deployed, setup of the sidecars and other components introduces some initial complexity. So, it’s important to construct a thorough, detailed implementation plan. A service mesh that enables phased migration from virtual machines (VMs) to containers is a definite advantage.
- Your team will need to have experience using a service mesh on top of Kubernetes, so you may need to provide training or hire in this skillset.
- A service mesh can be resource-intensive and can add request latency. You can minimize the performance hit through proper sidecar configuration, and a service mesh with automatic proxy-configuration features (like Calisti) can help.
Cisco’s Calisti: your enterprise-ready Istio platform
Calisti is a managed Istio instance for running microservices and tracking service-level objectives (SLOs). It provides observability and traffic management for microservices and cloud-native applications, and it allows admins to switch between live and historical views.
Among its features are:
- Automated application-lifecycle management
- A user interface for troubleshooting
- Configuration and monitoring of SLOs, burn rate, error budget, and compliance (see the note after this list)
- GraphQL alerts to automatically scale based on SLO burn rate
- Topology views
- Policy-based security
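If burn rate and error budget are new terms: the error budget is the fraction of failures an SLO allows, and the burn rate is how fast you’re consuming it. A quick worked definition (standard SRE usage, not specific to Calisti):

```latex
\[
  \text{burn rate} = \frac{\text{observed error rate}}{1 - \text{SLO}}
\]
% Example: a 99.9% SLO leaves an error budget rate of 0.001.
% An observed error rate of 1% gives 0.01 / 0.001 = 10, meaning a
% 30-day error budget would be exhausted in 3 days.
```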
Calisti manages microservices running in both containers and on virtual machines, enabling phased migration of applications from VMs to containers. It reduces management overhead by applying policy consistently across Kubernetes and VMs.
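Calisti builds on Istio’s standard VM-integration machinery here. As one illustrative sketch (not Calisti-specific; the address and names are hypothetical), Istio’s WorkloadEntry resource registers a VM in the mesh so it receives the same routing and policy as pods:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: WorkloadEntry
metadata:
  name: cart-vm
  namespace: shop
spec:
  address: 10.0.0.12        # hypothetical VM IP address
  labels:
    app: cart               # matches the same Service as the pod version
  serviceAccount: cart
```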
Because Istio has new releases every three months, Calisti includes an Istio operator that automates lifecycle management, and even enables canary deployment of the platform itself.
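Calisti’s operator automates this, but the underlying Istio mechanism is revision-based upgrades. As a sketch (assuming a second control plane has been installed with the hypothetical revision name `canary`), workloads are moved over one namespace at a time by swapping the injection label and restarting the pods:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: shop
  labels:
    # Remove any existing istio-injection label first; it takes
    # precedence over the revision label if both are present.
    istio.io/rev: canary    # hypothetical revision name
```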
Calisti works with the tools you already rely on, unifying your observability and ensuring teams are alerted when incidents occur. It supports multi-cloud and hybrid deployments across Amazon Web Services, Azure, Google Cloud Platform, and on-premises environments.
Try out Calisti’s free tier—no credit card required. It supports up to 10 nodes and two clusters and includes a dashboard for observability and control, plus mTLS for service-to-service communications.
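Under the hood, mesh-wide mTLS of this kind is typically enforced with Istio’s standard PeerAuthentication policy. A minimal sketch that requires mutual TLS for all service-to-service traffic:

```yaml
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system   # the root namespace makes this mesh-wide
spec:
  mtls:
    mode: STRICT            # reject plaintext service-to-service traffic
```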
Need more information?
- Browse these resources.
- Check out the Quick-Start guide.
- Have a look at the docs.