Why Symmetric Key Intercept for Secure SSL/TLS Decryption? 

A seismic shift in decrypted cloud visibility has created a massive new burden, driven by the clash of three tectonic forces:

  1. The new TLS 1.3 encryption standard, which breaks traditional out-of-band decryption through its enforcement of perfect forward secrecy, its use of ephemeral keys and its removal of certificates from key derivation.
  2. Cloud application architectures that are highly distributed, dynamic and decentralized, which render man-in-the-middle, firewall and proxy decryption untenable and unaffordable.
  3. The need to decouple keys from decryption, without which decryption is effectively rendered unscalable and single-threaded.

We then explore the solution to these challenges and the restoration of out-of-band decryption in the cloud with the new Symmetric Key Intercept architecture.

1. Why Does TLS 1.3 Break Legacy Out-of-Band Decryption?

Against the backdrop of new application design patterns and the networking ecosystems that connect all the cloud workloads together, a new transport layer security standard has emerged. TLS 1.3 became the official encryption-in-motion standard in March of 2018. TLS 1.3 and its precursor, TLS 1.2 with Perfect Forward Secrecy (PFS), Elliptic Curve Diffie-Hellman Ephemeral (ECDHE) ciphers and pinned certificates were designed to enforce the idea that encryption should be more robust, keys should be prolific and temporary, and decryption should only be possible by the owner of the traffic.

The new encryption standards and the older implementations of TLS that use PFS enforce three key ideas that increase security but, when combined with the cloud, create new challenges for cloud security, compliance and troubleshooting.

One-way symmetric key computation. Symmetric keys are not derived from the combination of the certificate, private key and packets. The final, symmetric encryption keys are created in such a way that there is never enough information transmitted over the wire for a snoop to derive the key. In the classic Bob-Alice-Eve scenario, Diffie-Hellman based encryption ensures that Eve will not be able to figure out Bob and Alice’s shared symmetric key. This is true whether Eve is there as an attacker or as a legitimate visibility device.
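By way of illustration, here is a minimal ECDHE sketch using the Python cryptography package (an assumed dependency for this example only; real TLS stacks perform this exchange internally). Bob and Alice each arrive at the same shared secret from their own private key and the peer's public key, while Eve, who only sees the public values on the wire, cannot compute it:

```python
# Minimal, illustrative ECDHE sketch: the shared secret is never transmitted.
from cryptography.hazmat.primitives.asymmetric import ec

alice_private = ec.generate_private_key(ec.SECP256R1())   # Alice's ephemeral key pair
bob_private = ec.generate_private_key(ec.SECP256R1())     # Bob's ephemeral key pair

# What Eve can see crossing the wire: only the public keys.
alice_public = alice_private.public_key()
bob_public = bob_private.public_key()

# Each side combines its own private key with the peer's public key.
alice_shared = alice_private.exchange(ec.ECDH(), bob_public)
bob_shared = bob_private.exchange(ec.ECDH(), alice_public)

assert alice_shared == bob_shared   # identical secret, derived independently on each end
```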

Ephemeral keys. In TLS 1.3 and TLS 1.2 with PFS, final, symmetric encryption keys are for one and only one session. Each session has its own, unique key that only works on the contents of that session. After a session ends and a new one begins, the same endpoints complete a new TLS handshake and a new symmetric key is created. This means there is a massive increase in the number of symmetric encryption keys created. Symmetric keys can only encrypt and decrypt the packet contents of that one session. There is no longer any “master skeleton key” or a single “key to the kingdom.” If a key is obtained by a bad actor, it can only be used to decrypt the one set of packets from the session for which it was created.
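Continuing that sketch (same assumed cryptography package), ephemerality simply means the exchange is repeated with fresh key pairs for every session, so each session has its own key and compromising one key never unlocks another session:

```python
# Two "sessions" between the same endpoints: fresh ephemeral key pairs each time,
# so the derived per-session keys differ. Illustrative only; HKDF stands in for
# the real TLS key schedule.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def new_session_key() -> bytes:
    client = ec.generate_private_key(ec.SECP256R1())   # new ephemeral key pair
    server = ec.generate_private_key(ec.SECP256R1())   # new ephemeral key pair
    shared = client.exchange(ec.ECDH(), server.public_key())
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"session key").derive(shared)

session_key_1 = new_session_key()
session_key_2 = new_session_key()
assert session_key_1 != session_key_2   # each session gets its own, unique key
```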

Decryption Requires Participation in the TLS Handshake. Because of the way that PFS works and its enforcement in TLS 1.3, only those endpoints that participate in the TLS handshake have the information required to decrypt the packets sent between those endpoints. In TLS 1.3, certificates are used only for authentication. Everything after the ServerHello is encrypted. Therefore, the contents of the certificates are encrypted within the TLS handshake and, consequently, not available for use as components of key construction or derivation. This also means that a decryption solution must be present at one end or the other of a TLS handshake in order to have access to the data flowing between those two systems (at least for that session).


The impact is that TLS 1.3 breaks legacy out-of-band decryption. Because certificates are no longer available for decryption key derivation, old solutions do not work. In the past, all the session traffic between two points could be decrypted once the encryption key was provided or derived. Now keys are ephemeral; they work only for a single session. Legacy, out-of-band solutions that relied on RSA key exchange or certificate access for decryption do not work in the new TLS 1.3 world.

We have seen how the new TLS 1.3 standard breaks legacy out-of-band decryption that relies on RSA key exchange and certificate inspection. But what about in-line proxies and man-in-the-middle decryption?


2. How do Cloud Application Architectures Break Man-in-the-Middle Decryption?

Applications are no longer single, monolithic code structures. The cloud has opened up the ability to distribute and decentralize application layers and processes. This distributed, decentralized, microservice-based architecture means that applications are an amalgam of networked services, API calls and elastic workloads. Modern application workloads communicate over TLS-encrypted networks to perform their tasks, deliver data and run mission-critical applications for the business. Each workload makes thousands of TLS-secured connections each day. Each workload is a TLS client and sometimes a TLS server.

Historically, the only option to gain decrypted visibility was to decrypt at the server or create chokepoints with man-in-the-middle architectures and decryption zones. This was acceptable in the data center, where East-West connections were controlled, TLS clients and TLS servers were known and network edges were hardened perimeters. Incoming and outgoing communication — North-South — was an obvious location for inspection, monitoring and control. In the cloud, the assumptions of known perimeters, full control of East-West connections and complete control of North-South ingress/egress points do not hold.

The impact is that, as containers and microservice-based architectures accelerate the decentralization of application workloads in the cloud, there is no longer a “middle” into which a decryption solution may be inserted. The cloud does not tolerate in-line solutions precisely for this reason.

Man-in-the-middle decryption offered by some legacy firewalls and inline security devices either doesn't work in the cloud or requires restrictive architectural designs. Imagine trying to jam a decrypt-capable firewall in between each connection in a scale set. Imagine trying to pay for a firewall doing MITM inspection and proxying in between every back-end, third-party API connection for an application. Imagine the architectural nightmare of trying to run a 10Gbps duplex connection through a 5Gbps firewall chokepoint: you would need three firewalls at the chokepoint to cope with peak load, and that is before any scaling events. It is clear that the cloud simply will not tolerate in-line, man-in-the-middle solutions for decryption and visibility.


3. How Keeping Keys & Encrypted Traffic Together Breaks Scalability

In legacy tools, decryption happens when a device receives encrypted traffic, calculates or receives the static (not ephemeral) key, and then decrypts the traffic, which it can then inspect or forward as clear text to other tools. In each case, the keys and the encrypted traffic are bound together in the same process. This effectively makes the decryption process single-threaded. The reason keys and encrypted traffic are kept together as long as possible is to preserve the security of the encrypted traffic and to lower the risk that comes with splitting off the keys, which, in the old world of TLS 1.2 and before, could be used to decrypt anything from any point in time sent between the two endpoints that created that key.

While the actual process of decrypting traffic is quick, the process of key derivation is time-consuming and resource-intensive. A Gartner study reports that enabling decryption on leading next-generation firewalls can degrade firewall performance by as much as 80% and reduce transactions per second by more than 90%.1 It is not uncommon for these statistics to be hidden because they are not flattering.

John Maddison, writing for NetworkComputing.com, puts it this way: “According to recent test results from NSS Labs, very few security devices can inspect encrypted data without severely impacting network performance. On average, the performance hit for deep packet inspection is 60 percent, connection rates dropped by an average of 92 percent and response time increased by a whopping 672 percent. Even more concerning, not all products were able to support the top 30 cipher suites either, meaning that some traffic that appeared to be analyzed wasn’t being processed by some of the security devices at all.”

The truth is that TLS handshakes are computationally complex and can eat up system resources. TLS 1.3 reduces some of the computational load by decreasing the number of round trips in the handshake. But the challenge of computing ephemeral, session-by-session symmetric keys is still enormous for man-in-the-middle decryption architectures.

Compounding the challenge for the legacy decryption approach is the fact that most IT and security teams struggle with the following challenges:

  1. They have multiple tools that need to see decrypted traffic, which causes a significant decrypt – re-encrypt – forward burden on the decryption tools and the network overall.

  2. The MITM handshake is really a double handshake, which exacts a large CPU tax (as described above), making it inefficient to run this function on multiple security tools. MITM terminates the inbound handshake, decrypts, inspects and then recreates the handshake on the other side (see the sketch after this list).

  3. The serial chaining of multiple security tools together while scaling the decryption process is difficult.

  4. Properly handling, protecting and maintaining the isolation of clear-text traffic for regulatory compliance is difficult.
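To make the double-handshake cost concrete, here is a minimal, hypothetical sketch of a TLS-terminating proxy built on Python's standard ssl module. The certificate file names and upstream host are placeholders, not a description of any particular product; the point is simply that every flow pays for two full handshakes plus a decrypt/re-encrypt cycle, repeated for every tool that needs clear text:

```python
# Hypothetical MITM proxy sketch: two TLS handshakes per client flow.
# "proxy-cert.pem" / "proxy-key.pem" are placeholder file names.
import socket
import ssl

# Context for handshake #1: the proxy terminates the inbound TLS session itself.
inbound_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
inbound_ctx.load_cert_chain("proxy-cert.pem", "proxy-key.pem")

# Context for handshake #2: the proxy re-originates a second TLS session upstream.
outbound_ctx = ssl.create_default_context()

def handle_flow(client_sock: socket.socket, upstream_host: str, upstream_port: int) -> None:
    # Handshake #1: decrypt what the client sends.
    with inbound_ctx.wrap_socket(client_sock, server_side=True) as tls_in:
        request = tls_in.recv(4096)                     # clear text is visible here
        # ... inspect the decrypted bytes ...

        # Handshake #2: re-encrypt toward the real server.
        raw = socket.create_connection((upstream_host, upstream_port))
        with outbound_ctx.wrap_socket(raw, server_hostname=upstream_host) as tls_out:
            tls_out.sendall(request)
            tls_in.sendall(tls_out.recv(4096))          # relay the response back
```

Chaining several such tools in series multiplies this handshake and re-encryption cost for every single connection.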

The impact is that, like the problem with man-in-the-middle decryption, keeping the keys and encrypted traffic together breaks the scalability of visibility for security, DevOps, compliance and cloud systems. The old approach creates many problems for performance, data handling and cost, while solving only one – keeping the original traffic encrypted for as long as possible. We have talked about the forces that are shifting the foundations of decrypted visibility in the cloud. We have seen how the new TLS 1.3 standard breaks legacy out-of-band decryption that relies on RSA key exchange and certificate inspection.

We have seen how modern cloud architecture does not tolerate in-line, MITM decryption architectures. And we have seen how keeping keys and encrypted streams bound together makes decryption single-threaded and breaks the scalability the cloud demands.


4. What are the key considerations for a Modern SSL/TLS Decryption Solution? 

These three tectonic forces have collided in the cloud to create upheaval. That upheaval also creates the groundwork, and the environment, for a new approach to decrypted visibility in the cloud. The new approach must address each of the three forces described above.

Any new approach to decrypted visibility in the cloud must support TLS 1.3 with its certificate encryption, enforced strong ciphers, ephemeral keys and enforced PFS. It must also be backwards-compatible with earlier TLS versions that enable perfect forward secrecy.

A new approach must be out-of-band so that modern cloud application architectures can scale with all the potential the cloud offers and make use of microservice design patterns, containerization and third party API data that come with their own controlled encryption and pinned certificates.

A new approach to decryption should decouple key discovery from decryption. By decoupling key discovery from decryption, each function can scale independently and the decryption process is no longer single-threaded or bound to a single chokepoint.
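As a purely conceptual sketch of that decoupling (the names below are hypothetical, not any product's API), a key-discovery stage can publish per-session secrets into a shared store while any number of decryption workers consume mirrored traffic in parallel:

```python
# Hypothetical sketch: key discovery and decryption as separate, independently
# scalable stages. The XOR "decrypt" is a placeholder for the session's real
# AEAD cipher; the data structures are illustrative only.
from concurrent.futures import ThreadPoolExecutor

key_store: dict[str, bytes] = {}            # TLS client random -> session secret

def publish_session_secret(client_random: str, secret: bytes) -> None:
    """Stage 1: runs near the workload and ships only small key records."""
    key_store[client_random] = secret

def decrypt_worker(client_random: str, ciphertext: bytes) -> bytes:
    """Stage 2: any worker, anywhere, decrypts once it can look up the secret."""
    secret = key_store[client_random]
    return bytes(c ^ secret[i % len(secret)] for i, c in enumerate(ciphertext))

publish_session_secret("a1b2c3", b"\x10" * 32)

# Because the stages share only a small key record, decryption fans out across workers.
with ThreadPoolExecutor(max_workers=8) as pool:
    futures = [pool.submit(decrypt_worker, "a1b2c3", b"\x42" * 64) for _ in range(8)]
    clear_text_copies = [f.result() for f in futures]
```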


Additionally, there are several factors of the new cloud environment that any new decryption approach should include.

  1. Cloud Native. Any new approach should be built from the ground up as a cloud-native solution. Legacy solutions masquerading as cloud-ready have merely been lifted and shifted from the data center. These solutions may run in the cloud but lack the fundamental design paradigms that deliver cloud benefits. Running monolithic applications and execution contexts in a VM or bloated container is not the same as a small-footprint, extremely nimble, cloud-native design. A true born-in-the-cloud solution will take advantage of microservice architectures, containerization, scalability and easy spin-up / spin-down elasticity. A solution that scales at cloud levels will work as well for a single user troubleshooting an issue as for an enterprise-wide monitoring deployment. A solution that is cloud-native will not require encryption library locations to be known and set ahead of time, as this only limits scalability and elasticity in the cloud.

  2.  Open. Any new approach should be able to work with any packet mirror or tap source, any packet brokering source, and any tool destination. A truly open solution will not require users to know in advance where the encryption and TLS libraries are stored in each application and will not require that only certain ciphers and certificates be used. The days of tool and vendor lock-in are over. Best-of-breed means the openness and flexibility to select the tools, processes and platforms that are best for your business rather than bending your business around inflexible, vendor-established requirements.

  3.  Universal. Any new approach must be able to handle any cipher, any TLS standard, and any protocol. Artificial limitations on visibility are half measures that can leave you fully vulnerable. True universality can be tested and proven without requiring turn-down of TLS, without omitting certain ciphers and without requiring application modification.


5. What is Symmetric Key Intercept? 

Symmetric Key Intercept is a patent-pending process that works by separating and solving three pieces of the cloud decryption challenge:

  1. Discovering and obtaining the final, ephemeral, symmetric encryption key.

  2. Out-of-band decryption of particular encrypted packets from a replicated stream or stored file (pcap).

  3. Decoupled key discovery and decryption.

This cloud-native architecture delivers universal TLS visibility and decryption for any workload whether it is acting as the TLS server or TLS client. Symmetric Key Intercept works after the TLS Handshake by retrieving the final, ephemeral, symmetric encryption keys from workload memory. This means that it works for any cipher, on any protocol. It works with perfect forward secrecy and it works with any TLS / SSL standard – including the new TLS 1.3. This architecture enables real-time, multi-destination, decentralized decryption of mirrored traffic as well as instant decryption and replay of mirrored and encrypted pcaps that can be stored for future investigation, compliance or inspection.
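As an analogy for how endpoint-retrieved session secrets enable out-of-band decryption, the sketch below uses the standard NSS key log mechanism exposed by Python's built-in ssl module. It illustrates the general idea of capturing per-session secrets at the endpoint and decrypting a mirrored copy elsewhere; it is not a description of the Symmetric Key Intercept implementation itself:

```python
# Illustrative only: log per-session TLS secrets (NSS key log format) from a
# client endpoint so an out-of-band tool can decrypt a mirrored copy later.
import socket
import ssl

ctx = ssl.create_default_context()
ctx.keylog_filename = "session-keys.log"   # ephemeral secrets, one record set per session

with socket.create_connection(("example.com", 443)) as raw:
    with ctx.wrap_socket(raw, server_hostname="example.com") as tls:
        tls.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
        tls.recv(4096)

# A mirrored pcap of this session can now be decrypted out of band, for example
# in Wireshark/tshark via the tls.keylog_file preference, without the server's
# private key, without a proxy, and with the traffic on the wire still encrypted.
```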

 

Symmetric Key Intercept

 

Symmetric Key Intercept architecture answers the security vs. visibility conundrum that most enterprise IT organizations need to solve. The process ensures that original end-to-end encryption is preserved while cloud-scale decrypted visibility is created.

 




 

Seismic Shifts Reveal the New Foundation for Out-of-Band, Decrypted Visibility in the Cloud

The new Symmetric Key Intercept architecture ensures that traffic intercepted in transit is never exposed as clear text. Instead of decrypting traffic in storage and then sending it to monitoring tools for inspection, Symmetric Key Intercept allows users to send encrypted traffic to tools, databases or storage and then decrypt right at the tool. The architecture is easy to deploy and scales to meet any traffic load without configuration overhead or architectural constraints.

With Symmetric Key Intercept in place, cloud DevOps and security teams can, with confidence, decrypt TLS traffic inside their cloud environments – enabling security, performance, and diagnostic systems and processes.


Get The Brief

Download the full brief for easy reading and future reference!

Download PDF