How to Achieve Operational Compliance in Production—Without Code Changes


What would your reaction be if the chief information security officer (CISO) at your company approached you and said, “We’ve got a new finance customer and we’ll need to increase our compliance efforts right away. That means encrypting all service-to-service communication, issuing valid TLS certs for each service, rotating those certificates regularly, and enabling services to verify each other’s identities when initiating requests.”?
I’m sure a lot of developers would swallow hard and try not to look rattled. After all, implementing everything on that list sounds daunting. A quick glance at kubectl get pods could easily reveal more than 100 services currently running on the production Kubernetes cluster—making the task a week-long ordeal, one that impacts feature work and delays time-sensitive bug fixes. How can you deliver this level of compliance on short notice?
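For a rough sense of scale, a quick count is enough (the exact commands below are just an illustration; pods and services aren’t the same thing, but either number tells you how much manual work you’d be signing up for):

$ kubectl get pods --all-namespaces --no-headers | wc -l      # running pods
$ kubectl get services --all-namespaces --no-headers | wc -l  # services in the cluster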
The good news is that there’s a solution...and it doesn’t even require you to change any code. Read on to learn how you can use Mutual Transport Layer Security (mTLS) as part of a service mesh to automatically secure every microservice in your Kubernetes environment, make the CISO and customer happy, and still get those bug fixes done on time.
Using Transport Layer Security (TLS) encryption
From an engineering point of view, we can meet most of the requirements on that compliance list by leveraging TLS. TLS provides three guarantees for each connection:
- Authenticity
- Confidentiality
- Integrity
Confidentiality and integrity mean that a data exchange is private and hasn’t been tampered with in transit. By default, browsers rely on TLS when communicating with web servers. For instance, if you access your online-banking account, it’s crucial for the connection to be secure and private, so others can’t gain access to credentials or account numbers. Besides encrypting traffic, TLS also allows the browser to verify the web server’s identity to ensure we’re communicating with the bank’s server and not an impostor.
There’s a caveat, though: standard TLS only allows you to verify the server’s identity, not the client’s. If the server only communicates with web browsers, the client’s identity doesn’t matter; the web server simply needs to provide the client with access to the requested resource. However, when we’re talking about service-to-service communication, we need to validate the client’s identity as well.
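To make the authenticity guarantee concrete, here’s a small illustrative check (the badssl.com test hosts are not part of our scenario; they exist specifically to demonstrate broken TLS setups). curl verifies the server’s certificate by default and refuses to talk to a server whose identity it can’t confirm:

# Succeeds: example.com presents a valid certificate for its hostname
$ curl -sS -o /dev/null https://example.com/ && echo "server identity verified"

# Fails: the test host presents a self-signed certificate, so curl rejects the connection
$ curl -sS -o /dev/null https://self-signed.badssl.com/ || echo "could not verify server identity"

# -k (--insecure) skips verification entirely: you keep encryption but lose authenticity
$ curl -sSk -o /dev/null https://self-signed.badssl.com/ && echo "encrypted, but unauthenticated"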
Client authentication
When developing web applications, we have mechanisms to ensure the client’s identity. Any software-as-a-service (SaaS) product that offers a Web API requires authentication. The client needs to prove it is allowed to access protected resources. Usually, it supplies an authentication token the server can validate. While the mechanism allows the client to authenticate, it comes with two drawbacks:
- The authentication flow is part of the application’s code
- It’s labor intensive and so does not scale
Because the authentication flow is part of the application’s code, developers need to dedicate time to building it: validating tokens, checking permissions, and so on. The client may also need to acquire the token beforehand.
In a SaaS context, the developer signs into the web application, generates a token, and configures the client with it. But because our hypothetical scenario requires adding authentication to every microservice, this approach would take too much time and work. That’s why mTLS is a better approach, offering all the features we need...but without all of the code changes.
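For comparison, here’s roughly what token-based client authentication looks like from the calling side (the hostname and token below are placeholders). Every service that calls another service has to obtain, store, and attach a token, and every service that receives calls needs application code to validate it:

# Each client service must be provisioned with a token out of band...
$ export API_TOKEN="<token issued by the provider>"

# ...and attach it to every request it makes
$ curl -H "Authorization: Bearer ${API_TOKEN}" https://payments.example.internal/v1/invoices

# The receiving service, in turn, needs code that parses the header, validates the
# token, and checks permissions. That effort multiplies across 100+ services.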
How to encrypt service-to-service traffic
mTLS is the same as TLS, but with one addition—it provides a mechanism that allows the client to authenticate itself, as well (a quick sketch of what that looks like from the client’s side follows the list below). It is a fantastic fit for situations where:
- You need to encrypt traffic
- You care about client authentication, even if the client has not been seen before
- You want client authentication without code changes
- You don’t want to take on the complexity of managing certificates
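At the protocol level, the only difference a client notices is that it has to present its own certificate during the handshake. As a rough sketch (the file names and hostname are placeholders), here’s what calling an mTLS-protected endpoint looks like with curl:

# The client presents its own certificate and key, and verifies the server against the CA
$ curl --cert client.crt --key client.key --cacert ca.crt \
      https://orders.example.internal/v1/health

# Without a client certificate, the server terminates the handshake
# (the exact error message depends on the TLS library in use)
$ curl --cacert ca.crt https://orders.example.internal/v1/health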
Both TLS and mTLS rely on public key cryptography and public key infrastructure. These are topics that could fill many blog posts. But in short, if you’ve ever worked with TLS, you have come across X.509 certificates (browsers, for example, use X.509 certificates to verify the server’s identity). The certificate contains two important bits of information:
- A public key
- An identity
The certificate’s public key has a corresponding private key, which stays with the server.
Before exchanging any data, the server presents the certificate to the other side and uses the private key to prove that it owns the identity inside the certificate. Anyone who copied the certificate still couldn’t impersonate the server, because they would lack the corresponding private key.
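You can see both pieces of information in any certificate file. Assuming you have a PEM-encoded certificate handy (cert.pem and key.pem here are placeholders), openssl will print them:

# The identity the certificate binds the key to (for a web server, its hostname)
$ openssl x509 -in cert.pem -noout -subject

# The public key embedded in the certificate
$ openssl x509 -in cert.pem -noout -pubkey

# The corresponding private key lives in a separate file and never leaves the server
$ ls -l key.pem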
As an added layer of protection, the X.509 certificate is signed by a certificate authority (CA)—an organization that validates identities and binds them to cryptographic key pairs with digital certificates. Signing the certificate indicates that the CA “trusts” its identity, and this step is what makes the authentication part of TLS work: if the certificate is signed by a CA you trust, you automatically trust the certificate’s identity.
Besides signing certificates, a CA also issues them. To get a new certificate, you generate a public and private key pair on your local machine and create a certificate signing request (CSR). The CSR contains the public key and your identity; the private key stays on your machine. If the CSR is approved, the CA creates a new certificate, signs it, and sends it back to you.
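With openssl, the whole request looks roughly like this (the file names and the CN are placeholders, and a real CA will have its own submission process):

# Generate a key pair and a CSR in one step; the private key never leaves this machine
$ openssl req -new -newkey rsa:2048 -nodes \
      -keyout service.key \
      -out service.csr \
      -subj "/CN=orders.example.internal"

# Inspect what the CSR contains: your identity (the subject) and your public key
$ openssl req -in service.csr -noout -subject -pubkey

# service.csr goes to the CA; if approved, the CA returns a signed certificate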
With these steps in mind, here’s the challenge: how do you generate and distribute certificates for all of your microservices? To complicate matters further, services running in Kubernetes are created and destroyed frequently as part of your replica sets, and the CISO has specified that certificates must be rotated to mitigate certificate loss (“certificate loss” means that somebody has gotten access to your private key). While some certificates have expiration dates far in the future, the best practice is to issue very short-lived certificates and replace them before they expire.
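If you ever need to check how close a certificate is to expiring, openssl can do that, too (cert.pem is a placeholder):

# Print the expiration date
$ openssl x509 -in cert.pem -noout -enddate

# Exit successfully only if the certificate is still valid 24 hours from now
$ openssl x509 -in cert.pem -noout -checkend 86400 && echo "still valid tomorrow" || echo "expires within 24 hours"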
So, mTLS is a great choice. But there’s still a problem—the time it would take to implement it manually for all of your microservices. That’s where a service mesh comes in.
Introducing Calisti: Cisco’s service mesh solution
Cisco’s Calisti is an enterprise-ready distribution of Istio. It provides mTLS automatically for all your services, takes care of generating certificates, and rotates them frequently—all as a zero-config operation.
Ready to take it for a spin (no credit card required)? Create a free account on https://calisti.app. Then, head over to the QuickStart for instructions on how to install Calisti on your cluster. Once it’s installed, run:
$ smm dashboard
Open the Calisti dashboard in your browser on your local machine. Follow the steps below to verify mTLS for your application or namespace:
- Select the service on the Menu>Topology or Menu>Services page.
- Select “mTLS Policies” and configure the mTLS policy you want to use for each port of the service independently. You can choose between the following policies (an equivalent Istio resource is sketched after this list):
- Strict: The service can accept only mutual TLS traffic
- Permissive: The service can accept both plaintext/unencrypted traffic and mutual TLS traffic at the same time
- Disabled: The service can accept plaintext/unencrypted traffic only
- Default: The service uses the global mTLS policy
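Since Calisti is a distribution of Istio, the policy you pick in the dashboard maps onto Istio’s standard PeerAuthentication resource. You don’t have to write this yourself when using the dashboard, but as an illustrative sketch (the namespace, name, label, and port are placeholders), a strict policy with one permissive port looks like this:

$ kubectl apply -f - <<EOF
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: payments-mtls
  namespace: demo
spec:
  selector:
    matchLabels:
      app: payments      # which workloads the policy applies to
  mtls:
    mode: STRICT         # "Strict": accept only mutual TLS traffic
  portLevelMtls:
    8080:
      mode: PERMISSIVE   # "Permissive": accept plaintext and mTLS on this port
EOF

A mode of DISABLE corresponds to the “Disabled” option above, and leaving the mode unset falls back to the mesh-wide default (the “Default” option).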
Now we’re good to go, and we can just smile at the CISO and say, “No problem!”