However, before going over the features, let me describe the current trends in the applied cryptography field, to give an idea of the issues considered during development and to provide context for the included changes. If you are really impatient to see the features, jump to the last section.
Non-NIST algorithms
After the Dual EC-DRBG NIST fiasco, requests to avoid relying blindly on NIST-approved or standardized algorithms for the Internet infrastructure became louder and louder (for those new to the crypto field, NIST is the USA's National Institute of Standards and Technology). Even though NIST's standardization work has certainly aided Internet security technologies, the Snowden revelations that the NSA, via NIST, had pushed for the backdoor-able-by-design EC-DRBG random generator (RNG), and required standard-compliant applications to include the backdoor, made a general distrust apparent in organizations like the IETF and IRTF. Furthermore, more public scrutiny of NSA's contributions followed. A nice example is the reaction to NSA's attempt to standardize extensions to the TLS protocol which would expose more of the generator's state; an "improvement" which would have enabled the EC-DRBG backdoor to operate more efficiently under TLS.

Given the above, several proposals were made to no longer rely on NIST's recommendations for elliptic curve cryptography or otherwise; that is, neither on their elliptic curve parameters nor on their standardized random generators, and so on. The first to propose alternative curves to the IETF was the German BSI, with the Brainpool curves. Despite their more open design, they didn't receive much attention from implementers. The most current proposal to replace the NIST curves comes from the Crypto Forum Research Group (CFRG) and consists of two curves, curve25519 and curve448. These, in addition to not being proposed by NIST, allow for very fast implementations on modern systems and can be implemented to operate in constant time, something of significant value for digital signatures generated by servers. These curves are being considered for addition in the next iteration of the TLS elliptic curve document for key exchange, and there is also a proposal to use them for PKIX/X.509 certificate digital signatures under the EdDSA signature scheme.
For the non-NIST symmetric cipher replacements the story is a bit different. The NIST-standardized AES algorithm is still believed to be a very strong cipher, and will stay with us for quite a long time, especially since it is already part of the x86-64 CPU instruction set. However, on CPUs without that instruction set, its encryption performance is not particularly impressive. That, combined with the GCM authenticated-encryption construction common in TLS, which cannot be easily optimized without a specific instruction set (e.g., PCLMUL) being present, puts certain systems at a disadvantage. Before RC4 was known to be completely broken, it was the cipher to switch your TLS connections to on such systems. However, after RC4 became a cipher to display in a museum, a new choice was needed. I had written about the need for it back in 2013, and it seems today we are very close to having ChaCha20 with Poly1305 as the authenticator standardized for use in the TLS protocol. That is an authenticated-encryption construction defined in RFC7539, which can outperform both RC4 and AES on hardware without any cipher-specific acceleration.
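To give an idea of how such an authenticated-encryption construction is consumed by applications, here is a minimal sketch using GnuTLS's AEAD API. It is only illustrative: it assumes a GnuTLS build whose nettle is recent enough to provide the GNUTLS_CIPHER_CHACHA20_POLY1305 algorithm, and it reduces error handling to the bare minimum.

```c
#include <gnutls/crypto.h>

/* Minimal ChaCha20-Poly1305 AEAD encryption sketch using the
 * gnutls_aead_cipher_* API. Key/nonce sizes (32 and 12 bytes)
 * follow RFC7539; the 16-byte Poly1305 tag is appended to the
 * ciphertext. *out_len must initially hold the output capacity. */
int encrypt_message(const unsigned char key_data[32],
                    const unsigned char nonce[12],
                    const unsigned char *msg, size_t msg_len,
                    unsigned char *out, size_t *out_len)
{
        gnutls_aead_cipher_hd_t handle;
        gnutls_datum_t key = { (unsigned char *)key_data, 32 };
        int ret;

        ret = gnutls_aead_cipher_init(&handle,
                                      GNUTLS_CIPHER_CHACHA20_POLY1305,
                                      &key);
        if (ret < 0)
                return ret;

        ret = gnutls_aead_cipher_encrypt(handle, nonce, 12,
                                         NULL, 0, /* no associated data */
                                         16,      /* tag size */
                                         msg, msg_len, out, out_len);
        gnutls_aead_cipher_deinit(handle);
        return ret;
}
```

The same gnutls_aead_cipher_* entry points cover the other authenticated-encryption modes, such as GCM and the CCM mode mentioned further below.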
Note that on this trend away from NIST, in GnuTLS we attempt to stay neutral and decide our next steps on a case-by-case basis. Not everything coming from or standardized by NIST is bad, and algorithms not standardized by NIST do not automatically become more secure. Things like EC-DRBG, for example, were never part of GnuTLS, not because we disliked NIST, but because that design didn't make sense for a random generator at all.
No more CBC
TLS from its first incarnation used a flawed CBC construction, which led to several attacks over the years, the most prominent being the Lucky13 attack. The protocol in its subsequent updates (TLS 1.1 and 1.2) never fixed these flaws, and instead required implementers to perform constant-time TLS CBC pad decoding, a task which proved notoriously hard, as indicated by the latest OpenSSL issue. While there have been claims that some implementations are better than others, this is not the case here. The simple task of reading the CBC padding bytes, which would normally be 2-3 lines of code, requires tens of lines of code (if not more), and even more extensive testing for correctness and time invariance. The more code a protocol requires, the more mistakes (remember that better engineers make fewer mistakes, but they still make mistakes). The protocol is at fault, and the path taken in the next revision of TLS, 1.3, is to drop the CBC mode completely. RFC7366 proposes a way to continue using the CBC ciphersuites correctly, which would pose no future issues, but the TLS 1.3 revision team thought it better to move away completely from something that has caused so many issues historically.

In addition to that, it seems that all the CBC ciphersuites were banned from use under HTTP/2.0, even when used under TLS 1.2. That means applications talking HTTP/2.0 would have to disable such ciphersuites or they may fail to interoperate. This move effectively obsoletes the RFC7366 fix for CBC (already implemented in GnuTLS).
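To illustrate why this is notoriously hard, below is a rough constant-time padding check in C. It is emphatically not GnuTLS's actual code, just a sketch of the technique: assuming a record of at least one byte, every candidate padding byte is inspected regardless of the secret padding length, and validity is accumulated in a bitmask instead of taking branches.

```c
#include <stdint.h>
#include <stddef.h>

/* All-ones when a == b, zero otherwise, without branches. */
static uint32_t ct_eq(uint32_t a, uint32_t b)
{
        return (uint32_t)(((uint64_t)(a ^ b) - 1) >> 32);
}

/* All-ones when a >= b, zero otherwise, without branches
 * (assumes arithmetic right shift, as on mainstream compilers). */
static uint32_t ct_ge(uint32_t a, uint32_t b)
{
        return (uint32_t)~(((int64_t)a - (int64_t)b) >> 63);
}

/* Constant-time check of TLS CBC padding: the record's final byte
 * holds the pad length L, and the L bytes preceding it must all
 * equal L. Returns all-ones if the padding is valid, zero otherwise.
 * The scan length depends only on the public record length (len >= 1),
 * never on the secret value of L. */
static uint32_t ct_check_padding(const unsigned char *rec, size_t len)
{
        uint32_t pad = rec[len - 1];
        uint32_t good = ct_ge((uint32_t)len, pad + 1); /* enough room? */
        uint32_t to_check = len - 1 > 255 ? 255 : (uint32_t)(len - 1);
        uint32_t i;

        for (i = 1; i <= to_check; i++) {
                uint32_t in_pad = ct_ge(pad, i); /* inside the padding? */
                /* Only bytes inside the padding region must equal pad. */
                good &= ~in_pad | ct_eq(rec[len - 1 - i], pad);
        }
        return good;
}
```

Compare that with the naive two-line loop over `pad` bytes, whose early exits leak exactly the timing signal Lucky13 exploits.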
Cryptographically speaking, the CBC mode is a perfectly valid mode when used as prescribed. Unfortunately, when TLS 1.0 was being designed the prescription was not widely available or known, and as such it got into the protocol following the "wrong" prescription. As things stand now, and with all the bad PR around it, it seems we are going to say goodbye to it quite soon.
For those emotionally tied to the CBC mode, like me (after all, it's a nice mode to implement), I'd like to note that it will still live with us under the CCM ciphersuites, but in a different role: it now serves as a MAC algorithm. For the non-crypto geeks, CCM is an authenticated encryption mode using counter mode for encryption and CBC-MAC for authentication; it is widely used in the IoT, an acronym for the 'Internet of Things' buzzword, which typically refers to small embedded devices.
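For the curious, here is roughly what the CBC-MAC core looks like, using nettle (the crypto library behind GnuTLS). This is a conceptual sketch only: it assumes a message that is already a whole number of 16-byte blocks, whereas real CCM additionally prepends a formatted block encoding the nonce and message length, and encrypts the resulting tag.

```c
#include <nettle/aes.h>
#include <string.h>

/* Conceptual CBC-MAC over full 16-byte blocks with AES-128, showing
 * how CBC survives inside CCM as an authenticator rather than as a
 * cipher: each block is XORed into the chaining value, which is then
 * re-encrypted; the final chaining value is the tag. */
static void cbc_mac(const uint8_t key[16], const uint8_t *msg,
                    size_t nblocks, uint8_t tag[16])
{
        struct aes128_ctx ctx;
        uint8_t block[16] = { 0 }; /* zero IV, as in CCM's CBC-MAC */
        size_t i, j;

        aes128_set_encrypt_key(&ctx, key);
        for (i = 0; i < nblocks; i++) {
                for (j = 0; j < 16; j++)
                        block[j] ^= msg[16 * i + j];
                aes128_encrypt(&ctx, 16, block, block);
        }
        memcpy(tag, block, 16);
}
```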
Latency reduction
One of the TLS 1.3 design goals, according to its charter, is to "reduce handshake latency, ..., aiming for one roundtrip for a full handshake and one or zero roundtrip for repeated handshakes". That effort was initiated and widely tested by Google with the TLS false start protocol, which reduced the handshake latency to a single roundtrip, and its benefits were further demonstrated with the QUIC protocol. The latter is an attempt to move multiple connections over a single stream into user space, avoiding any kernel interaction for de-multiplexing or congestion avoidance. It comes with its own secure communications protocol, which integrates security into the transport layer, as opposed to running a generic secure communications protocol over a transport layer. That's certainly an interesting approach and it remains to be seen whether we will be hearing more of it.
While I was initially skeptical of modifications to existing cryptographic protocols to achieve low latency, after all such modifications reduce the security guarantees (see this paper for a theoretical attack which can benefit from false start), the requirement for secure communications with low latency is here to stay. Even though the drive to reduce latency for HTTP communication may not be convincing for everyone, one cannot but imagine a future where high-latency scenarios like this are the norm, and low-roundtrip secure communications protocols are required.
Post-quantum cryptography
It has been known for several years that a potential quantum computer could break cryptographic algorithms like RSA and Diffie-Hellman, as well as the elliptic curve cryptosystems. It was unknown whether a quantum computer of the size needed to break existing cryptosystems could ever exist, but research conducted over the last few years provides indications that this is a possibility. NIST hosted a conference on the topic last year, where the NSA expressed their plan to prepare for a post-quantum computer future. That is, they no longer believe that elliptic curve cryptography, i.e., their Suite B profile, is a solution which will be applicable long-term. That is because, due to their short keys, the elliptic curve algorithms require a smaller quantum computer to be broken than their finite-field counterparts (RSA and Diffie-Hellman). Ironically, it is easier to protect the classical finite-field algorithms from quantum computers by increasing the key size (e.g., to 8k or 16k keys) than their more modern counterparts based on elliptic curves.

Other than the approach of increasing the key sizes, today we don't have many tools (i.e., algorithms) to protect key exchanges or digital signatures against a quantum computer future. By the time a quantum computer of such size is available, we would need to have replacements already in place.
Neither the IETF nor any other standardizing body has an out-of-the-box solution. The most recent development is a NIST competition for quantum-computer-resistant algorithms, which is certainly a good starting point. It is also a challenge for NIST, as it will have to overcome the bad publicity of the EC-DRBG issue and reclaim its position as a driver of technology standardization. Whether it will be successful in that goal, or whether we are going to have new quantum-computer-resistant algorithms at all, remains to be seen.
Finally: the new features in GnuTLS 3.5.0
In case you managed to read all of the above, only a few paragraphs are left. Let me summarize the list of prominent changes.
- SHA3 as a certificate signature algorithm. The SHA3 algorithm in all its variations (256-512) was standardized by the FIPS 202 publication in August 2015. However, until very recently there were no code points (or more specifically, object identifiers) for using it in certificates. Now that they are finally available, we have modified GnuTLS to support generating and verifying certificates with SHA3 (see the signing sketch after this list). For SHA3 support in the TLS protocol, either as a signature algorithm or a MAC algorithm, we will have to wait for code points and ciphersuites to become available and standardized. Note also that, since GnuTLS 3.5.0 is the first implementation supporting SHA3 on PKIX certificates, there have not been any interoperability tests with the generated certificates.
- X25519 (formerly curve25519) for ephemeral EC Diffie-Hellman key exchange. One long-expected feature we wanted to introduce in GnuTLS is support for alternatives to the NIST-standardized elliptic curves. We selected curve25519, originally described in draft-ietf-tls-curve25519-01 and currently in the document which revises the elliptic curve support in TLS, draft-ietf-tls-rfc4492bis-07. Its inclusion in the latter document --which most likely means the curve will be widely implemented in TLS-- and the performance advantages of X25519 are the main reasons for selecting it. Note, however, that X25519 is a peculiar curve for implementations designed around the NIST curves. The curve cannot be used with ECDSA signatures, although it can be used with a similar algorithm called EdDSA. We do not include EdDSA support for certificates or for the TLS protocol in GnuTLS 3.5.0, as its specification has not settled down; we plan to include it in a later 3.5.x release. For curve448 we will have to wait until its specification for digital signatures is settled and it is available in the nettle crypto library.
- TLS false start. The TLS 1.2 protocol, as well as its earlier versions, requires two full round-trips for its handshake process. Several applications require reduced latency on the first packet, and so the false start modification of TLS was defined. The modification allows the client to start transmitting as soon as the encryption keys are known to it, but prior to verifying those keys with the server. That reduces the protocol to a single round-trip, at the cost of putting the client's initially transmitted messages at risk. The risk is that any modification of the handshake process by an active attacker will not be detected by the client, something that can lead the client to negotiate weaker security parameters than expected, and so to a possible decryption of the initial messages. To prevent that, GnuTLS 3.5.0 will not enable false start, even if requested, when it detects a weak ciphersuite or weak Diffie-Hellman parameters. The false start functionality can be requested by applications using a flag to gnutls_init() (see the sketch after this list).
- New APIs to access the Shawe-Taylor-based provable RSA and DSA parameter generation. While enhancing GnuTLS 3.3.x for Red Hat in order to pass FIPS140-2 certification, we introduced provable RSA and DSA key generation based on the Shawe-Taylor algorithm, following the FIPS 186-4 recommendations. That algorithm allows generating, from a seed, RSA and DSA parameters that are provably prime (i.e., no probabilistic primality tests are involved). In practice this allows an auditor to verify that the keys and any parameters (e.g., DH) present on a system were generated using a predefined and repeatable process. This code used to be enabled only when GnuTLS was compiled in FIPS140-2 mode and the system was put into FIPS140-2 compliance mode. In GnuTLS 3.5.0 this functionality is made available unconditionally from the certtool utility, and a key or DH parameters will be generated using these algorithms when the --provable parameter is specified (see the note after this list). That required modifying the storage format for RSA and DSA keys to include the seed; for compatibility purposes the tool will output both the old and new formats, to allow the use of these parameters with earlier GnuTLS versions and other software.
- Prevent the change of identity on rehandshakes by default. The TLS rehandshake protocol is typically used for three purposes: (a) rekeying on long-standing connections, (b) re-authentication and (c) connection upgrade. The rekeying use-case is self-explanatory, so my focus will be on the latter two. Connection upgrade is connecting with no client authentication and rehandshaking to provide a client certificate, while re-authentication is connecting with an identity A and switching to identity B mid-connection. With this change in GnuTLS, the latter use-case (re-authentication) is prohibited unless the application has explicitly requested it. The reason is that the majority of applications using GnuTLS are not prepared to handle an identity change in the middle of a connection, something that, depending on the application protocol, may lead to issues. Imagine the case where a client authenticates to access a resource, but just before accessing it switches to another identity by presenting another certificate. It is unknown whether applications using GnuTLS are prepared for such changes, and thus we considered it important to protect applications by default, by requiring those that utilize re-authentication to explicitly request it via a flag to gnutls_init() (see the sketch after this list). This change does not affect applications using rehandshake for rekeying or connection upgrade.
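As an illustration of how the flags mentioned in the false start and rehandshake items surface in the API, here is a minimal client-side sketch. It is hedged: GNUTLS_ENABLE_FALSE_START and GNUTLS_ALLOW_ID_CHANGE are the flag names I believe the 3.5.0 headers use for these features, and everything around them is boilerplate, with transport setup and error handling trimmed.

```c
#include <gnutls/gnutls.h>

/* Sketch: request TLS false start and allow peer identity change on
 * rehandshake via gnutls_init() flags. Credentials, transport setup
 * and the handshake itself are omitted for brevity. */
static int init_client_session(gnutls_session_t *session)
{
        int ret;

        ret = gnutls_init(session, GNUTLS_CLIENT |
                          GNUTLS_ENABLE_FALSE_START |
                          GNUTLS_ALLOW_ID_CHANGE);
        if (ret < 0)
                return ret;

        /* False start will only be performed if GnuTLS considers the
         * negotiated ciphersuite and DH parameters strong enough. */
        return gnutls_set_default_priority(*session);
}
```

For the SHA3 item, certificate signing goes through the usual X.509 signing call; the sketch below assumes the GNUTLS_DIG_SHA3_256 digest identifier is among those this release introduces, and that the certificate objects have already been created and populated.

```c
#include <gnutls/x509.h>

/* Sketch: sign 'crt' with the issuer's key using SHA3-256. */
static int sign_with_sha3(gnutls_x509_crt_t crt, gnutls_x509_crt_t issuer,
                          gnutls_x509_privkey_t issuer_key)
{
        return gnutls_x509_crt_sign2(crt, issuer, issuer_key,
                                     GNUTLS_DIG_SHA3_256, 0);
}
```

As for the provable generation, the simplest route is the one noted in the item above: the certtool utility with the --provable option, for example combined with --generate-privkey or --generate-dh-params.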
That concludes my list of the most notable changes, even though this release includes several others, such as stricter protocol adherence in corner cases and a significant enhancement of the included test suite. Even though I am the primary contributor, this release contains code contributed by more than 20 people, whom I'd like to personally thank for their contributions.
If you are interested in following our development or helping out, I invite you to our mailing list as well as to our gitlab pages.