Themes from Real World Crypto 2022

By William Woodruff

Last week, over 500 cryptographers from around the world gathered in Amsterdam for Real World Crypto 2022, meeting in person for the first time in over two years.

As in previous years, we dispatched a handful of our researchers and engineers to attend the conference, listen to talks, and schmooze (er, observe) the themes currently dominating the nexus between cryptographic research and practical (real world!) engineering.

Here are the major themes we gleaned from Real World Crypto 2022:

  1. Trusted hardware isn’t so trustworthy: Implementers of trusted hardware (whether trusted execution environments (TEEs), hardware security modules (HSMs), or secure enclaves) continue to make engineering mistakes that fundamentally violate the integrity promises made by the hardware.
  2. Security tooling is still too difficult to use: Or “you can lead a horse to water, but you can’t make it run ./configure && make && make install.”
  3. Side channels everywhere: When God closes a door, he opens a side channel.
  4. LANGSEC in cryptographic contexts: Figuring out which protocol you’re speaking is the third hard problem in computer science.

Let’s get to it!

Trusted hardware isn’t so trustworthy

Fundamental non-cryptographic vulnerabilities in trusted hardware are nothing new. Years of vulnerabilities have led to Intel’s decision to remove SGX from its next generation of consumer CPUs, and ROCA affected one in four TPMs back in 2017.

What is new is the prevalence of trusted hardware in consumer-facing roles. Ordinary users are increasingly (and unwittingly!) interacting with secure enclaves and TEEs via password managers and 2FA schemes like WebAuthn on their mobile phones and computers. This has fundamentally broadened the risk associated with vulnerabilities in trusted hardware: breaks in trusted hardware now pose a direct risk to individual users.

That’s where our first highlight from RWC 2022 comes in: “Trust Dies in Darkness: Shedding Light on Samsung’s TrustZone Cryptographic Design” (slides, video, paper). In this session, the presenters describe two critical weaknesses in TEEGRIS, Samsung’s implementation of a TrustZone OS: an IV reuse attack that allows an attacker to extract hardware-protected keys, and a downgrade attack that renders even the latest, fully patched flagship Samsung devices vulnerable to the first attack. We’ll take a look at both.

IV reuse in TEEGRIS

TEEGRIS is an entirely separate OS, running in isolation and in parallel with the “normal” host OS (Android). To communicate with the host, TEEGRIS provides a trusted application (TA) that runs within the TEE but exposes resources to the normal host via Keymaster, a command-and-response protocol standardized by Google.

Keymaster includes the concept of “blobs”: encryption keys that have themselves been encrypted (“wrapped”) with the TEE’s key material and stored on the host OS. Because the wrapped keys are stored on the host, their security ultimately depends on the TEE correctly applying encryption during key wrapping.

So how does the TEEGRIS Keymaster wrap keys? With AES-GCM!

As you’ll recall, there are (normally) three parameters for a block cipher (AES) combined with a mode of operation (GCM); a minimal usage sketch in code follows the list:

  • The secret key, used to initialize the block cipher
  • The initialization vector (IV), used to perturb the ciphertext and prevent our friend the ECB penguin
  • The plaintext itself, which we intend to encrypt (in this case, another encryption key)
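
In code, correct usage of those three parameters looks something like the following minimal sketch (using Python’s cryptography package; the plaintext here is just a stand-in for the application key being wrapped):

```python
# Minimal sketch of correct AES-GCM usage with the three parameters above,
# via the "cryptography" package. The plaintext stands in for the
# application key being wrapped.
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
import os

key = AESGCM.generate_key(bit_length=256)  # the secret key
iv = os.urandom(12)                        # unique IV for every encryption!
blob = AESGCM(key).encrypt(iv, b"application key to wrap", None)
```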

The security of AES-GCM depends on the assumption that an IV is never reused with the same secret key. An attacker who can force the same (key, IV) pair to be used across multiple “sessions” (in this case, key wrappings) can therefore break the confidentiality of AES-GCM. The presenters discovered mistakes in Samsung’s Keymaster implementation that violate two security assumptions: that the key derivation function (KDF) can’t be manipulated into producing the same key multiple times, and that the attacker can’t control the IV. Here’s how:

  • On the Galaxy S8 and S9, the KDF used to generate the secret key used only attacker-controlled inputs. In other words, an attacker can force all encrypted blobs for a given Android application to use the exact same AES key. This alone is acceptable, as long as an attacker cannot also force IV reuse, except
  • …the Android application can set an IV when generating or importing a key! Samsung’s Keymaster implementation on the Galaxy S9 trusts the IV passed in by the host, allowing an attacker to use the same IV multiple times.

At this point, the properties of the underlying stream cipher give the attacker everything they need to recover an encryption key from another blob: XORing the malicious blob, the known malicious key, and the target (victim) blob yields the target’s plaintext, which is the unwrapped encryption key!
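
Here’s a self-contained sketch of that recovery. This is our illustration: the tee_wrap helper is a hypothetical stand-in for Samsung’s Keymaster key wrapping, not its real interface.

```python
# Demonstration of the IV-reuse recovery against AES-GCM key wrapping.
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
import os

tee_key = AESGCM.generate_key(bit_length=256)  # secret, lives in the TEE
iv = os.urandom(12)                            # attacker-forced, reused!

def tee_wrap(app_key: bytes) -> bytes:
    # The TEE encrypts ("wraps") an application key for storage on the host.
    return AESGCM(tee_key).encrypt(iv, app_key, None)

victim_key = os.urandom(32)           # the key the attacker wants
known_key = bytes(32)                 # attacker-chosen known plaintext

victim_blob = tee_wrap(victim_key)
malicious_blob = tee_wrap(known_key)  # same key *and* same IV

# GCM is CTR mode underneath: ciphertext = plaintext XOR keystream (plus a
# 16-byte tag we can ignore). Known plaintext reveals the keystream...
keystream = bytes(c ^ p for c, p in zip(malicious_blob, known_key))
# ...and the keystream unwraps the victim's blob, no TEE required.
recovered = bytes(c ^ k for c, k in zip(victim_blob, keystream))
assert recovered == victim_key
```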

Ultimately, the presenters determined that this particular attack worked only on the Galaxy S9: the S8’s Keymaster TA generates secret keys from attacker-controlled inputs but doesn’t use an attacker-provided IV, preventing IV reuse.

The talk’s presenters reported this bug in March of 2021, and it was assigned CVE-2021-25444.

Downgrade attacks

As part of their analysis of Samsung’s TrustZone implementation, the presenters discovered that the Keymaster TA on Galaxy S10, S20, and S21 devices used a newer blob format (“v20-s10”) by default. This new format changes the data used to seed the KDF: instead of the seed being entirely attacker controlled, random bytes derived from the TEE itself are mixed in, preventing key reuse.

But not so fast: the TEE on the S10, S20, and S21 uses the “v20-s10” format by default but allows the application to specify a different blob version to use instead. The version without any randomized salt (“v15”) is one of the valid options, so we’re right back where we started with predictable key generation.
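
A sketch of why the downgrade matters is below. The blob-format names (“v15”, “v20-s10”) come from the talk; the KDF bodies are illustrative, not Samsung’s actual code.

```python
# Illustrative version-dispatched KDF showing the downgrade hazard.
import hashlib
import os

TEE_ENTROPY = os.urandom(32)  # per-device randomness, mixed in by v20-s10

def derive_wrapping_key(version: str, app_data: bytes) -> bytes:
    if version == "v20-s10":
        # Default on the S10/S20/S21: TEE entropy prevents an attacker
        # from forcing two blobs to share a wrapping key.
        return hashlib.sha256(TEE_ENTROPY + app_data).digest()
    if version == "v15":
        # Legacy format: derived only from attacker-controlled inputs.
        return hashlib.sha256(app_data).digest()
    raise ValueError("unknown blob version")

# The flaw: the (untrusted) host application picks the version, so it can
# simply request "v15" and reintroduce the predictable-key attack above.
assert derive_wrapping_key("v15", b"x") == derive_wrapping_key("v15", b"x")
```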

The talk’s presenters reported this bug in July of 2021, and it was assigned CVE-2021-25490.

Takeaways

TEEs are not special: they’re subject to the same cryptographic engineering requirements as everything else. Hardware guarantees are only as good as the software running on top of them, which should (1) use modern ciphers with misuse-resistant modes of operation, (2) minimize potential attacker influence over key and key derivation material, and (3) eliminate the attacker’s ability to downgrade formats and protocols that should be completely opaque to the host OS.
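
On point (1), here’s a sketch of what reaching for a misuse-resistant AEAD looks like, using AES-GCM-SIV: under nonce reuse it leaks only message equality rather than a reusable keystream. (This assumes a recent cryptography release and an OpenSSL build with GCM-SIV support.)

```python
# Nonce misuse degrades gracefully with AES-GCM-SIV.
from cryptography.hazmat.primitives.ciphers.aead import AESGCMSIV
import os

key = AESGCMSIV.generate_key(bit_length=256)
nonce = os.urandom(12)

pt1, pt2 = b"wrapped key A", b"wrapped key B"
ct1 = AESGCMSIV(key).encrypt(nonce, pt1, None)
ct2 = AESGCMSIV(key).encrypt(nonce, pt2, None)

# Under plain AES-GCM with a reused nonce, the first len(pt) ciphertext
# bytes would satisfy ct1 XOR ct2 == pt1 XOR pt2. GCM-SIV does not.
ct_xor = bytes(a ^ b for a, b in zip(ct1[:len(pt1)], ct2[:len(pt2)]))
pt_xor = bytes(a ^ b for a, b in zip(pt1, pt2))
assert ct_xor != pt_xor
```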

Security tooling is still too difficult to use

We at Trail of Bits are big fans of automated security tooling: it’s why we write and open-source tools like dylint, pip-audit, siderophile, and Echidna.

That’s why we were saddened by the survey results in “‘They’re not that hard to mitigate’: What Cryptographic Library Developers Think About Timing Attacks” (slides, video, paper): of 44 cryptographers surveyed across 27 major open-source cryptography projects, only 17 had actually used automated tools to find timing vulnerabilities, even though 100% of the participants surveyed were aware of timing vulnerabilities and their potential severity. The following are some of the reasons participants cited for choosing not to use automated tooling:

  • Skepticism about risk: Many participants expressed doubt that they needed additional tooling to help mitigate timing attacks or that there were practical real-world attacks that justified the effort required for mitigation.
  • Difficulty of installation or use: Many of the tools surveyed had convoluted installation, compilation, and usage instructions. Open-source maintainers expressed frustration when trying to make projects with outdated dependencies work on modern systems, particularly in contexts in which they’d be most useful (automated testing in CI/CD).
  • Maintenance status: Many of the tools surveyed are source artifacts from academic works and are either unmaintained or very loosely maintained. Others had no easily discoverable source artifacts, had binary releases only, or were commercially or otherwise restrictively licensed.
  • Invasiveness: Many of the tools introduce additional requirements on the programs they analyze, such as particular build structures (or program representations, such as C/C++ or certain binary formats only) and special DSLs for indicating secret and public values. This makes many tools inapplicable to newer projects written in languages like Python, Rust, and Go.
  • Overhead: Many of the tools involve significant learning curves that would take up too much of a developer’s already limited time. Many also require a significant amount of time to use, even after mastering them, in terms of manually reviewing and eliminating false positives and negatives, tuning the tools to increase the true positive rate, and so forth.

Perhaps counterintuitively, awareness of tools did not correlate with their use: the majority of developers surveyed (33/44) were aware of one or more tools, but only half of that number actually chose to use them.

Takeaways

In the presenters’ words, there is a clear “leaky pipeline” from awareness of timing vulnerabilities (nearly universal), to tool awareness (the majority of developers), to actual tool use (a small minority of developers). Stopping those leaks will require tools to become:

  • Easier to install and use: This will reduce the cognitive overhead necessary between selecting a tool and actually being able to apply it.
  • Readily available: Tools must be discoverable without requiring intense familiarity with active cryptographic research; tools should be downloadable from well-known sources (such as public Git hosts).

Additionally, the presenters identified compilers themselves as an important new frontier: the compiler is always present, is already familiar to developers, and is the ideal place to introduce more advanced techniques like secret typing. We at Trail of Bits happen to agree!
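
For readers who haven’t seen one up close, here’s the kind of bug these tools hunt for, in miniature (our example, not code from any surveyed library): MAC verification whose runtime depends on the secret.

```python
# A textbook timing side channel and its standard fix.
import hmac

def leaky_verify(expected: bytes, provided: bytes) -> bool:
    if len(expected) != len(provided):
        return False
    for e, p in zip(expected, provided):
        if e != p:
            # Early exit: runtime reveals how many leading bytes matched,
            # letting an attacker guess the MAC one byte at a time.
            return False
    return True

def constant_time_verify(expected: bytes, provided: bytes) -> bool:
    # The standard fix: examine every byte regardless of mismatches.
    return hmac.compare_digest(expected, provided)
```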

Side channels everywhere

The great thing about side-channel vulnerabilities is their incredible pervasiveness: there’s a seemingly never-ending reservoir of increasingly creative techniques for extracting information from a target machine.

Side channels are typically described along two dimensions: passive-active (i.e., does the attacker need to interact with the target, and to what extent?) and local-remote (i.e., does the attacker need to be in the physical vicinity of the target?). Remote, passive side channels are thus the “best of both worlds” from an attacker’s perspective: they’re entirely covert and require no physical presence, making them (in principle) undetectable by the victim.

We loved the side channel described in “Lend Me Your Ear: Passive Remote Physical Side Channels on PCs” (video, paper). To summarize:

  • The presenters observed that, on laptops, the onboard microphone is physically wired to the audio interface (and, thus, to the CPU). Digital logic controls the intentional flow of data, but it’s all just wires and, therefore, unintentional noise underneath.
  • In effect, this means that the onboard microphone might act as an EM probe for the CPU itself!
  • We share our audio over the internet with potentially untrusted parties: company meetings, conferences, VoIP with friends and family, voice chat for video games, and so on…
  • …so can we extract anything of interest from that data? (A toy sketch of the analysis involved follows this list.)
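
As a concrete (and entirely hypothetical) starting point, the first analysis step might look like the sketch below: scanning a recording’s spectrogram for activity-dependent frequency bands. The filename is a placeholder, and the real attack copes with VoIP compression and far noisier data.

```python
# Hypothetical first pass: compute a spectrogram and rank frequency bins
# by how much their intensity varies over time, since coil-whine-style
# leakage shows up as narrow, activity-dependent bands.
import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram

rate, samples = wavfile.read("call_recording.wav")  # placeholder input
if samples.ndim > 1:
    samples = samples.mean(axis=1)  # mix stereo down to mono

freqs, times, power = spectrogram(samples.astype(np.float64), fs=rate,
                                  nperseg=4096)

variability = power.std(axis=1)            # variation of each bin over time
top = freqs[np.argsort(variability)[-5:]]  # five most variable bins
print("most activity-dependent frequencies (Hz):", sorted(top))
```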

The presenters offered three case studies:

  • Website identification: A victim is browsing a website while talking over VoIP, and the attacker (who is on the call with the victim) would like to know which site the victim is currently on.
    • Result: Using a convolutional neural network with a 14-way classifier (for 14 popular news websites), the presenters were able to achieve 96% accuracy.
  • Cryptographic key recovery: A victim is performing ECDSA signatures on her local machine while talking over VoIP, and the attacker would like to exfiltrate the secret key being used for signing.
    • Result: The presenters were able to use the same side-channel weakness as Minerva but without local instrumentation. Even with post-processing noise, they demonstrated key extraction after roughly 20,000 signing operations. (See the refresher after this list for why leaky nonces are fatal.)
  • CS:GO wallhacks: A victim is playing an online first-person shooter while talking with other players over VoIP, and the attacker would like to know where the victim is located on the in-game map.
    • Result: Distinct “zebra patterns” were visually identifiable in the spectrogram when the victim was hidden behind an opaque in-game object, such as a car. The presenters observed that this circumvented standard “anticheat” mitigations, as no client code was manipulated to reveal the victim’s in-game location.
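
Why is a leaky nonce fatal to ECDSA? As a quick refresher (our summary of the textbook relation, not the presenters’ notation): every signature hands the attacker one linear relation between the long-term secret key and the one-time nonce.

```latex
% Each ECDSA signature (r, s) over message hash h satisfies, modulo the
% group order n, with secret key d and per-signature nonce k:
\[
  s \equiv k^{-1}\,(h + r\,d) \pmod{n}
  \quad\Longrightarrow\quad
  k \equiv s^{-1}h + s^{-1}r\,d \pmod{n}
\]
% If a side channel leaks even a few bits of k across many signatures,
% these relations form a hidden number problem that lattice reduction
% solves for d; this is exactly the weakness Minerva exploited.
```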

Takeaways

Side channels are the gift that keeps on giving: they’re difficult to anticipate and to mitigate, and they compromise cryptographic schemes that are completely sound in the abstract.

The presenters correctly note that this particular attack upends a traditional assumption about physical side channels: that they cannot be exploited remotely and, thus, can be excluded from threat models in which the attacker is purely remote.

LANGSEC in cryptographic contexts

LANGSEC is the “language-theoretic approach to security”: it attributes many (most?) exploitable software bugs to the ad hoc interpretation of potentially untrusted inputs and proposes that we parse untrusted inputs by comparing them against a formal language derived solely from valid or expected inputs.
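
Here is the LANGSEC prescription in miniature (our toy example): decide whether the entire input belongs to the expected formal language before acting on any part of it, rather than parsing ad hoc as you go.

```python
# A toy recognizer for a strict "KEY=VALUE" line protocol.
import re

# The full expected language, expressed as one regular expression.
MESSAGE = re.compile(rb"[A-Z]{1,16}=[\x20-\x7e]{0,128}\r\n")

def parse(line: bytes) -> tuple[bytes, bytes]:
    if MESSAGE.fullmatch(line) is None:
        # Reject anything outside the language: no partial parses, no
        # "best effort" interpretation of malformed input.
        raise ValueError("input is not in the expected language")
    key, _, value = line.rstrip(b"\r\n").partition(b"=")
    return key, value

assert parse(b"HELLO=world\r\n") == (b"HELLO", b"world")
```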

This approach is extremely relevant to the kinds of bugs that regularly rear their heads in applied cryptography.

We saw not one, but two LANGSEC-adjacent talks at RWC this year!

Application layer protocol confusion

The presenters of “ALPACA: Application Layer Protocol Confusion—Analyzing and Mitigating Cracks in TLS Authentication” (slides, video, paper) started from an observation about a design decision in TLS: because TLS is fundamentally application and protocol independent, it has no direct notion of how the two endpoints should be communicating. In other words, TLS cares only about establishing an encrypted channel between two machines, not (necessarily) which machines or which services on those machines are actually communicating.

In their use on the web, TLS certificates are normally bound to domains, preventing an attacker from redirecting traffic intended for safe.com to malicious.biz. But this isn’t always sufficient:

  • Wildcard certificates are common: an attacker who controls malicious.example.com might be able to redirect traffic from safe.example.com if that traffic were encrypted with a certificate permitting *.example.com.
  • Certificates can claim many hosts, including hosts that have been obtained or compromised by a malicious attacker: the certificate for safe.example.com might also permit safe.example.net, which an attacker might control.
  • Finally, and perhaps most interestingly, certificates do not specify which service and/or port they expect to authenticate with, creating an opportunity for an attacker to redirect traffic to a different service on the same host.

To make this easier for the attacker, hosts frequently run multiple services with protocols that roughly resemble HTTP. The presenters evaluated four of them (FTP, SMTP, IMAP, and POP3) against three different attack techniques:

  • Reflection: A MiTM attacker redirects a cross-origin HTTPS request to a different service on the same host, causing that service to “reflect” a trusted response back to the victim.
  • Download: An attacker stores malicious data on a service running on the same host and tricks a subsequent HTTPS request into downloading and presenting that data, similarly to a stored XSS attack.
  • Upload: An attacker compromises a service running on the same host and redirects a subsequent HTTPS request to the service, causing sensitive contents (such as cookie headers) to be uploaded to the service for later retrieval.

Next, the presenters evaluated popular web browsers and application servers for FTP, SMTP, IMAP, and POP3 and determined that:

  • All browsers were vulnerable to at least two attack techniques (FTP upload + FTP download) against one or more FTP server packages.
  • Internet Explorer and Microsoft Edge were particularly vulnerable: all exploit methods worked with one or more server packages.

This is all terrible (er, great), but how many actual servers are vulnerable? As it turns out, quite a few: of 2 million unique hosts running a TLS-enabled application server (like FTP or SMTP), over 1.4 million (or 69%) were also running HTTPS, making them potentially vulnerable to a general cross-protocol attack. The presenters further narrowed this down to hosts with application servers that were known to be exploitable (such as old versions of ProFTPD) and identified over 114,000 HTTPS hosts that could be attacked.

So what can we do about it? The presenters have some ideas:

  • At the application server level, there are some reasonable countermeasures we could apply: protocols like FTP should be more strict about what they accept (e.g., refusing to accept requests that look like HTTP) and should be more aggressive about terminating requests that don’t resemble valid FTP sessions.
  • At the certificate level, organizations should be wary of wildcard and multi-domain certificates and should avoid shared hosts for TLS-enabled applications.
  • Finally, at the protocol level, TLS extensions like ALPN allow clients to specify the application-level protocol they expect to communicate with, potentially allowing the target application server (like SMTP) to reject the redirected connection. This requires application servers not to ignore ALPN, which they frequently do. (A minimal client-side sketch follows this list.)
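
Here’s what the ALPN countermeasure looks like from the client side, using Python’s standard ssl module (hostname and port are placeholders): the client pins the application protocol inside the TLS handshake, and a strict server can refuse to speak anything else.

```python
# Pinning the expected application protocol via ALPN.
import socket
import ssl

ctx = ssl.create_default_context()
ctx.set_alpn_protocols(["http/1.1"])  # "I intend to speak HTTP/1.1"

with socket.create_connection(("example.com", 443)) as raw:
    with ctx.wrap_socket(raw, server_hostname="example.com") as tls:
        # Had a MiTM redirected this connection to, say, an SMTP server
        # on the same host, a strict ALPN-aware server would abort the
        # handshake instead of negotiating a mismatched protocol.
        print("negotiated:", tls.selected_alpn_protocol())
```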

Takeaways

  • Despite being known and well understood for years, cross-protocol attacks are still possible today! Even worse, trivial scans reveal hundreds of thousands of exploitable application servers running on shared HTTPS, making the bar for exploitation very low.
  • This space is not fully explored: application protocols like SMTP and FTP are obvious targets because of their similarity to HTTP, but newer protocols showing up in internet services, such as VPN protocols and DTLS, are potential targets as well.

“Secure in isolation, vulnerable when composed”: ElGamal in OpenPGP

The presenters of “On the (in)security of ElGamal in OpenPGP” (slides, video, paper) covered another LANGSEC-adjacent problem: standards or protocols that are secure in isolation but insecure when interoperating.

The presenters considered ElGamal in implementations of OpenPGP (RFC 4880) due to its (ahem) unique status among asymmetric schemes required by OpenPGP:

  • Unlike RSA (PKCS#1) and ECDH (RFC 6637), ElGamal has no formal or official specification!
  • The two “official” references for ElGamal are the original paper itself and the 1997 edition of the Handbook of Applied Cryptography, which disagree on parameter selection techniques! The OpenPGP RFC cites both; the presenters concluded that the RFC intends for the original paper to be authoritative.

(By the way, did you know that this is still a common problem for cryptographic protocols, including zero-knowledge, MPC, and threshold schemes? If that sounds scary (it is) and like something you’d like to avoid (it is), you should check out ZKDocs! We’ve done the hard work of understanding best practices for protocol and scheme design in the zero-knowledge ecosystem so that you don’t have to.)

The presenters evaluated three implementations of PGP that support ElGamal key generation (GnuPG, Botan, and libcrypto++) and found that none obey RFC 4880 with regard to parameter selection: all three use different approaches to prime generation.

But that’s merely the beginning: many OpenPGP implementations are proprietary or subject to long-term changes, making it difficult to evaluate real-world deviation from the standard just from open-source codebases. To get a sense for the real world, the presenters surveyed over 800,000 real-world ElGamal keys and found that:

  • The majority of keys appear to be generated using a “safe primes” technique.
  • A large minority appear to be using Lim-Lee primes.
  • A much smaller minority appear to be using Schnorr or similar primes.
  • Just 5% appear to be using “quasi-safe” primes, likely indicating an intent to be compliant with RFC 4880’s prime generation requirements.

Each of these prime generation techniques is (probably) secure in isolation…but not when composed: the ElGamal encryption implementations in Go, GnuPG, and libcrypto++ were each vulnerable to side-channel attacks enabling plaintext recovery because of the unexpected prime generation techniques used for ElGamal keys in the wild.
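
To make the “safe prime” distinction above concrete, here’s a small sketch (using sympy’s primality test; illustrative only, as real parameter generation is far more involved):

```python
# What separates a safe prime from an arbitrary one.
from sympy import isprime

def is_safe_prime(p: int) -> bool:
    # p is "safe" when (p - 1) / 2 is also prime, so the group mod p has
    # essentially one large prime-order subgroup.
    return isprime(p) and isprime((p - 1) // 2)

# With other prime families, p - 1 may have many smaller factors, i.e.,
# many small subgroups. Combined with the "short exponent" optimization
# and a side channel, that structure is what enabled plaintext recovery.
assert is_safe_prime(23)       # 23 = 2 * 11 + 1, and 11 is prime
assert not is_safe_prime(29)   # 28 = 2^2 * 7
```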

The bottom line: of the roughly 800,000 keys surveyed, approximately 2,000 were vulnerable to practical plaintext recovery because of the “short exponent” optimization used to generate them. To verify the feasibility of their attack, the presenters successfully recovered an encrypted message’s plaintext after about 2.5 hours of side-channel analysis of GPG performing encryptions.

Takeaways

  • ElGamal is an old, well-understood cryptosystem whose parameters and security properties are straightforward on paper (much like RSA’s) but subject to significant ambiguity and diversity in real-world implementations. Standards matter for security, and ElGamal needs a real one!
  • Cryptosystem insecurity is pernicious: it’s not enough to be aware of potential side channels via your own inputs; signing and encryption schemes must also be resistant to poorly (or even just unusually) generated keys, certificates, etc.

Honorable mentions

There were a lot of really great talks at this year’s RWC—too many to highlight in a single blog post. Some others that we really liked include:

  • “Zero-Knowledge Middleboxes” (slides, video, paper): Companies currently rely on TLS middleboxes (and other network management techniques, like DNS filtering) to enforce corporate data and security policies. Middleboxes are powerful tools, ones that are subject to privacy abuses (and subsequent circumvention by savvy users, undermining their efficacy). This talk offers an interesting (albeit still experimental) solution: use a middlebox to verify a zero-knowledge proof of policy compliance, without actually decrypting (and, therefore, compromising) any TLS sessions! The key result of this solution is compliance with a DNS policy without compromising a user’s DNS-over-TLS session, with an overhead of approximately five milliseconds per verification (corresponding to one DNS lookup).
  • “Commit Acts of Steganography Before It’s Too Late” (slides, video): Steganography is the ugly duckling of the cryptographic/cryptanalytic world, with research on steganography and steganalysis having largely dried up. Kaptchuk argues that this decline in interest is unwarranted and that steganography will play an important role in deniable communication with and within repressive states. To this end, the talk proposes Meteor, a cryptographically secure steganographic scheme that uses a generative language model to hide messages within plausible-looking human-language sentences.

See you in 2023!
