Friends don’t let friends reuse nonces

By Joe Doyle

If you’ve encountered cryptography software, you’ve probably heard the advice to never use a nonce twice—in fact, that’s where the word nonce (number used once) comes from. Depending on the cryptography involved, a reused nonce can reveal encrypted messages, or even leak your secret key! But common knowledge may not cover every possible way to accidentally reuse nonces. Sometimes, the techniques that are supposed to prevent nonce reuse have subtle flaws.

This blog post tells a cautionary tale of what can go wrong when implementing a relatively basic type of cryptography: a bidirectional encrypted channel, such as an encrypted voice call or encrypted chat. We’ll explore how more subtle issues of this type can arise in a network with several encrypted channels, and we’ll describe a bug we discovered in a client’s threshold signature scheme. In that implementation, none of the parties involved ever used the same nonce twice. However, because they used the same sequence of nonce values, two different senders could accidentally use the same nonce as each other. An attacker could have used this issue to tamper with messages, or make honest parties appear malicious.

Figure 1: Don’t let your drunk friend drive, or use your nonce!

How we make encrypted channels

Encrypting messages—making the meaning of a message hidden, even to a third party that has full access to the content of a message—is probably the oldest activity we’d recognize as “cryptography.” The core structure of today’s message encryption stretches back at least to the polyalphabetic ciphers of the 1500s, and goes as follows:

To encrypt:

  • Take the secret message and separate it into regular-sized sections (or “blocks”). The overall data in each section is treated as a single “symbol.”
  • Substitute each symbol with a different symbol, depending on the secret, the position in the message, and possibly also on previous symbols in the message.
  • Send the now-encrypted message.

To decrypt:

  • Take the encrypted message, and separate it into blocks.
  • Substitute each symbol using the reverse of the encryption procedure, again using the secret, the position, and possibly the previous symbols.
  • Read the now-decrypted message.

The security of this scheme relies on third parties being unable to infer data about the symbol-substitution procedure just by looking at the encrypted data.
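To make this structure concrete, here's a toy Python sketch in the spirit of those early polyalphabetic ciphers. It is emphatically not secure; it only illustrates the encrypt/substitute/decrypt shape described above.

    # Toy illustration only: each one-byte "symbol" is substituted based on the
    # secret and its position in the message (a Vigenere-style scheme).
    # Do not use this for anything real.
    def toy_encrypt(secret: bytes, message: bytes) -> bytes:
        return bytes((m + secret[i % len(secret)]) % 256 for i, m in enumerate(message))

    def toy_decrypt(secret: bytes, ciphertext: bytes) -> bytes:
        # Reverse the substitution using the same secret and the same positions.
        return bytes((c - secret[i % len(secret)]) % 256 for i, c in enumerate(ciphertext))

    secret = b"not-a-real-key"
    assert toy_decrypt(secret, toy_encrypt(secret, b"attack at dawn")) == b"attack at dawn"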

Historically, many ciphers have been broken by observing patterns within individual encrypted messages (Alan Turing’s Banburismus technique, which broke the Nazi Navy’s Enigma encryption, is a famous example).

Modern ciphers, when properly used, are designed to eliminate these patterns within messages entirely. First, our substitution alphabets are much larger: two commonly used stream ciphers, AES-CTR and ChaCha20, use block sizes of 128 and 512 bits, respectively, so their alphabets have 2^128 and 2^512 symbols. Next, there are rules to ensure that every symbol in a message gets a different substitution table. If you treat every symbol in the same way, you risk revealing patterns in the underlying message, as in the classic ECB penguin!

Figure 2: The original image (source)

Figure 3: The image after encryption with ECB mode (source)
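You don't need an image to see the problem: because ECB mode applies the same substitution to every block, identical plaintext blocks produce identical ciphertext blocks. Here's a minimal demonstration using the pyca/cryptography package (ECB is shown only to illustrate the leak; don't use it in practice):

    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    key = os.urandom(16)
    plaintext = b"A" * 16 + b"A" * 16  # two identical 16-byte blocks

    encryptor = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
    ciphertext = encryptor.update(plaintext) + encryptor.finalize()

    # ECB encrypts each block independently, so the two ciphertext blocks are
    # equal, revealing that the two plaintext blocks were equal.
    assert ciphertext[:16] == ciphertext[16:32]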

Finally, and most importantly for this story, you need to ensure that every message is treated differently—which is where nonces come in.

Numbers, but only once

The AES-CTR and ChaCha20 stream ciphers are both “counter-mode” stream ciphers. Counter-mode ciphers use a very simple type of substitution table: map the i-th block, with value x_i, to x_i XOR F(i), where F is a so-called “pseudorandom function” derived from the secret key¹. To see how this works, let’s start again with our trusty image of Tux, and an image generated from AES-CTR’s pseudorandom function:

Figure 4: The original image again

Figure 5: Image generated from AES-CTR’s pseudorandom function

When we XOR the pseudorandom image with Tux, Tux vanishes in the noise:

Figure 6: XOR of the pseudorandom image with Tux

It might not be obvious that this actually still has Tux in it—but if you closely watch the animation below, you can see the outline of Tux as it switches from the original noise to the encrypted version of Tux:

Figure 7: Animation of mixing Tux with the AES-CTR output; notice the visible outline of Tux

And if we XOR this with the noise again, Tux returns!

Figure 8: Tux visible again after XOR

This lets us both encrypt and decrypt data, so long as we know the function F used to generate the pseudorandom data.
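Here's what that looks like in code, using AES-CTR from the pyca/cryptography package. Because encryption and decryption are both "XOR with the keystream," the same function does both jobs:

    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    key = os.urandom(32)
    initial_counter = os.urandom(16)  # AES-CTR's 16-byte initial counter block

    def aes_ctr_xor(data: bytes) -> bytes:
        # Counter mode XORs the data with the keystream F(0), F(1), ... derived
        # from the secret key, so the same call encrypts and decrypts.
        encryptor = Cipher(algorithms.AES(key), modes.CTR(initial_counter)).encryptor()
        return encryptor.update(data) + encryptor.finalize()

    message = b"the same XOR both encrypts and decrypts"
    ciphertext = aes_ctr_xor(message)
    assert aes_ctr_xor(ciphertext) == message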

But if we aren’t careful, we might reveal too much. Let’s start with a different image, but the same noise:

Figure 9: A different example: Beastie (source)

Figure 10: Same noise

If we XOR the image and the noise together, Beastie, like Tux, vanishes:

Figure 11: Beastie disappears in noise

But if we now XOR these two encrypted messages, suddenly we can tell what they originally were!

Figure 12: Beastie and Tux reappear when the two encrypted messages are XORed

What went wrong? Well, we used the exact same noise in each encrypted message. In real encrypted channels, the pseudorandom function F we use to generate our noise gets an extra parameter, called the “nonce,” or “number used once.” As the name suggests, that number should be unique for each message. If you ever reuse a nonce, a third party who sees the two encrypted messages can learn the XOR of their plaintexts. However, so long as you never reuse a nonce, a good pseudorandom function will generate completely different noise for two different nonces². If we tweak the above experiment to use nonce 1 for Tux and nonce 2 for Beastie, the XOR of the two encrypted messages is still incomprehensible noise:

Figure 13: Encrypted Tux

Figure 14: Encrypted Beastie

Figure 15: XOR of the previous two images
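The same experiment is easy to reproduce with bytes instead of penguins. In the sketch below (again AES-CTR via pyca/cryptography, with the nonce packed into the high bits of the counter block), reusing a nonce makes the XOR of the two ciphertexts equal the XOR of the two plaintexts, while distinct nonces leave only noise:

    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    key = os.urandom(32)

    def encrypt(nonce: int, plaintext: bytes) -> bytes:
        # Put the nonce in the high 96 bits of the counter block and start the
        # low 32-bit block counter at zero, a common CTR-mode layout.
        counter_block = nonce.to_bytes(12, "big") + (0).to_bytes(4, "big")
        encryptor = Cipher(algorithms.AES(key), modes.CTR(counter_block)).encryptor()
        return encryptor.update(plaintext) + encryptor.finalize()

    def xor(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    tux, beastie = b"TUX  TUX  TUX  .", b"BEASTIE BEASTIE."

    # Reused nonce: the keystreams cancel, and the XOR of the plaintexts leaks.
    assert xor(encrypt(1, tux), encrypt(1, beastie)) == xor(tux, beastie)

    # Distinct nonces: the XOR of the ciphertexts is just more noise.
    assert xor(encrypt(1, tux), encrypt(2, beastie)) != xor(tux, beastie)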

Which brings us to the bug.

The bug

Our client was implementing a threshold signature scheme. The signing process in a threshold signature scheme requires a lot of communication between all parties. Some communication is broadcast, and some is peer-to-peer. For security, the peer-to-peer communication needs to be both private and tamper-resistant, so the implementation uses an authenticated encryption scheme called ChaCha20-Poly1305, which combines the ChaCha20 stream cipher with Poly1305, a polynomial message authentication code.
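For a sense of what that primitive looks like in practice, here's a single ChaCha20-Poly1305 message using the pyca/cryptography AEAD interface. This is an illustration of the primitive only, not the client's code; the message and associated data below are made up.

    import os
    from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305

    key = ChaCha20Poly1305.generate_key()  # 32-byte secret, e.g., from a key exchange
    channel = ChaCha20Poly1305(key)

    nonce = os.urandom(12)                 # 96-bit nonce: must never repeat under this key
    ciphertext = channel.encrypt(nonce, b"round-1 share for Bob", b"from: Alice")

    # Decryption recovers the plaintext and verifies the Poly1305 tag; any
    # tampering with the ciphertext or associated data raises InvalidTag.
    assert channel.decrypt(nonce, ciphertext, b"from: Alice") == b"round-1 share for Bob"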

Let’s consider a three-party example with Alice, Bob, and Carol. To create her peer-to-peer channels, Alice establishes two different shared secrets, s_B and s_C, with Bob and with Carol respectively, via Diffie-Hellman key exchange. Then, Alice sets up a global “nonce counter”: every time Alice sends a message, she sends it with the current value of the counter, then increments the counter. That way she will absolutely never send two messages with the same nonce, even on different channels!

Unfortunately, all parties initialize the counter at the same value (0), increment it at the same rate, and send messages in the same order. So in the first step, when Alice sends a message to Bob and Bob sends a message to Alice, they both use the secret s_B and the nonce 0! An eavesdropper who intercepts both of these messages can learn the XOR of their contents. Likewise, Bob and Carol will send each other messages with nonce 1, and in the next round Alice and Bob will both use nonce 2. Alice and Carol, however, will always use different nonces from each other: Alice is Carol's first recipient, and Carol is Alice's second, so the Alice-to-Carol nonces will always be odd and the Carol-to-Alice nonces will always be even.
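Here's a sketch of that failure mode on the Alice-Bob channel (not the client's code; the messages are invented). Because the ChaCha20 keystream depends only on the key and nonce, the two ciphertexts sent under s_B with nonce 0 cancel out:

    from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305

    def xor(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    s_B = ChaCha20Poly1305.generate_key()  # the Alice-Bob shared secret
    nonce_0 = (0).to_bytes(12, "big")      # both global counters start at 0

    alice_to_bob = b"Alice's round-1 share, 32 bytes."
    bob_to_alice = b"Bob's round-1 reply, ~ 32 bytes."

    ct_a = ChaCha20Poly1305(s_B).encrypt(nonce_0, alice_to_bob, b"")
    ct_b = ChaCha20Poly1305(s_B).encrypt(nonce_0, bob_to_alice, b"")

    # Each AEAD ciphertext is the stream-encrypted plaintext followed by a
    # 16-byte Poly1305 tag. Same key, same nonce: the keystreams cancel, so an
    # eavesdropper learns the XOR of the two plaintexts (and both messages were
    # authenticated under the same one-time Poly1305 key).
    assert xor(ct_a[:-16], ct_b[:-16]) == xor(alice_to_bob, bob_to_alice)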

In the actual system where this bug occurred, the messages that use the same nonce happen to be very structured and the important fields that get XORed are, themselves, pseudorandom. This meant that an eavesdropper couldn’t learn enough to perform a direct exploit using these messages. However, this particular nonce reuse did leak the message-authentication key, and would have allowed a person in the middle to tamper with certain messages and cause other participants to treat honest parties as potentially malicious.

How to fix it

Whenever you have a communication channel, it’s extremely important to manage the nonces involved so that no nonce is ever repeated. A quick-and-dirty method would be to divide the space of nonces between the parties. In the example above, Alice and Carol coincidentally always had different nonce parity, and you could make that deliberate: in each channel, designate one party as “odd” and one party as “even,” and then, to send the message that would have had nonce n, actually use 2n if you’re the even party and 2n+1 if you’re the odd one³.
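A hypothetical sketch of that parity trick (the helper below is illustrative, not taken from the client's code):

    # Each channel designates one party "even" and the other "odd"; the n-th
    # message a party sends on the channel uses nonce 2n or 2n+1 respectively,
    # so the two directions can never collide.
    def parity_nonce(message_index: int, is_even_party: bool) -> bytes:
        value = 2 * message_index + (0 if is_even_party else 1)
        return value.to_bytes(12, "big")   # a 96-bit ChaCha20-Poly1305 nonce

    assert parity_nonce(3, True)  == (6).to_bytes(12, "big")
    assert parity_nonce(3, False) == (7).to_bytes(12, "big")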

However, a much better scheme is to have entirely separate keys for each direction: in other words, Alice encrypts messages to Bob with a secret s_AB and decrypts messages from Bob with s_BA. Likewise, Bob encrypts with s_BA and decrypts with s_AB. This is what the Noise Protocol Framework does: it requires that you use different CipherState objects for sending and receiving. There are a few different ways to derive these “directional keys” from a single shared secret, but in general, we recommend using a well-vetted existing implementation of a well-vetted scheme, like the Noise Protocol Framework, since such implementations have already handled many of these issues proactively.
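If you do end up rolling this yourself, one common approach is to derive the per-direction keys from the shared secret with a key derivation function, binding each key to who is sending. The sketch below uses HKDF from pyca/cryptography; the "p2p-channel" label and name encoding are invented for illustration, and in practice you should let something like Noise do this for you.

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF

    def directional_key(shared_secret: bytes, sender: str, receiver: str) -> bytes:
        # Bind the derived key to the direction of the channel, so the
        # Alice-to-Bob and Bob-to-Alice keys are unrelated.
        info = b"p2p-channel:" + sender.encode() + b"->" + receiver.encode()
        return HKDF(algorithm=hashes.SHA256(), length=32, salt=None, info=info).derive(shared_secret)

    shared_secret = b"\x01" * 32           # stand-in for the Diffie-Hellman output
    s_AB = directional_key(shared_secret, "Alice", "Bob")  # Alice encrypts with s_AB
    s_BA = directional_key(shared_secret, "Bob", "Alice")  # Bob encrypts with s_BA
    assert s_AB != s_BA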

Don’t reuse nonces!

At the end of the day, it’s important to evaluate every assumption and restriction of a cryptographic system carefully, and to make sure that all your mitigations actually address the threat as it is. An easy mental simplification of nonce reuse is “don’t send two messages with the same nonce,” and in that simplified model, the global nonce counter works! However, the actual threat of nonce reuse doesn’t care who sends the message: if anyone sends a message with the same key and nonce, you’re at risk.

Most prominent encrypted-channel libraries handle this safely, but if you find you need to implement a solution like this, consider reaching out to us for a cryptographic review.

¹ Although this is a faithful description of counter-mode encryption, many functions that are called “pseudorandom” are completely unsuitable for use in encryption. Whenever possible, use well-vetted stream ciphers and follow industry best practices.
² Some encryption schemes have various restrictions beyond just avoiding nonce reuse: in some schemes, overly long messages can lead to nonce-reuse-like issues. Some schemes have different recommendations depending on whether you generate nonces randomly or with a counter. In general, please use a well-vetted encryption implementation and ensure that you follow all recommendations in the relevant specification or standard.
³ This requires decreasing the effective nonce size by 1 bit, so in general, we don’t recommend it!
