A while ago I wrote about why we need an alternative authentication method in TLS, and then described the SSH-style authentication and how it was implemented in GnuTLS. Another approach is the DANE protocol. It uses the hierarchical model of DNSSEC to provide an alternative authentication method in TLS.
DNSSEC uses a hierarchical model resembling a single root CA that delegates CA responsibilities to each individual domain holder for its own domain. In DANE, a DNSSEC-enabled domain may sign entries for each TLS server in the domain. The entries contain information about the TLS server's certificate, such as its hash. That is a nice idea and can be used instead of, or in addition to, the existing commercial CA verification. Let's see two examples that demonstrate DANE, supposing we have an HTTPS server called www.example.com and a CA.
A typical web client, after it successfully verifies www.example.com's certificate using the CA's certificate, will check the DANE entries. If they match, it is assured that a CA compromise does not affect its current connection. That is, in this scenario an attacker needs to compromise not only example.com's CA, but also the server's DNS domain service.
Another significant example is when there is no CA at all. Suppose www.example.com is a low-budget server that wants to avoid paying any commercial CA. If it has DNSSEC set up, it just advertises its key in the DNS, and clients can verify its certificate using the DNSSEC signature. That is, trust is moved from the commercial CAs to the DNS administrators. Whether they can cope with that will be seen in time.
Even though the whole idea is attractive, the actual protocol is (IMO) quite bloated. In a recent post on the DANE mailing list I summarized several issues (note that I was wrong on the first). The most important, I believe, is the fact that DANE separates certificates into CA-signed and self-signed ones. That is, if you have a CA-signed certificate you mark its DNS entry with usage 1, while if you have a self-signed one you mark it with usage 3. The reasoning behind this is unclear, but its effect is that it is harder to move from the CA-signed to the non-CA-signed world or vice versa. For example, if you have a CA-signed certificate and it expires, the DANE entry automatically becomes invalid as well. This may be considered a good property by some (especially the CAs), but I see little point in it. It is also impossible for the server administrator to know whether a client trusts its CA, so this certificate distinction doesn't always make sense. A work-around is to always publish your certificate marked as both self-signed and CA-signed, in two different DNS entries.
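For a concrete picture, that work-around would roughly correspond to a pair of TLSA records like the following (a hypothetical zone snippet; the hash value is a placeholder, and the trailing "0 1" fields mean a SHA-256 hash of the full certificate):

_443._tcp.www.example.com. IN TLSA 1 0 1 0123456789abcdef...
_443._tcp.www.example.com. IN TLSA 3 0 1 0123456789abcdef...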
Despite this and a few other shortcomings that worry me, this is the best protocol we have so far to divide the trust for TLS certificate verification among more entities than the commercial CAs. For this reason, a companion library implementing DANE will be included in the upcoming GnuTLS 3.1.3 release.
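As a taste of what using the companion library might look like, here is a rough sketch. The entry points (dane_state_init, dane_verify_crt) and their signatures are as I recall them from the development tree and may differ in the final release; error handling is omitted.

#include <gnutls/gnutls.h>
#include <gnutls/dane.h>

/* chain: the peer's certificate chain in DER form, e.g., as returned
   by gnutls_certificate_get_peers() */
void check_dane(const gnutls_datum_t *chain, unsigned chain_size)
{
    dane_state_t s;
    unsigned int status = 0;

    dane_state_init(&s, 0);
    dane_verify_crt(s, chain, chain_size, GNUTLS_CRT_X509,
                    "www.example.com", "tcp", 443, 0, 0, &status);
    if (status != 0) {
        /* the DANE entries in DNS do not match this certificate */
    }
    dane_state_deinit(s);
}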
Monday, October 8, 2012
Thursday, August 16, 2012
Using the Trusted Platform Module to protect your keys
There was big hype when the Trusted Platform Module (TPM) was introduced into computers. Briefly, it is a co-processor in your PC that can perform certain calculations independently of the main processor. This has good and bad side-effects. In this post we focus on the good ones, namely the fact that you can use it to perform cryptographic operations the same way as in a smart card. What does that mean? It simply means that you can have RSA keys in your TPM chip that you can use to sign and/or decrypt, but cannot extract. That way a compromised web server doesn't necessarily mean a compromised private key.
GnuTLS 3.1.0 (when compiled with libtrousers) adds support for keys stored in the TPM chip. This support is transparent, and such keys can be used similarly to keys stored in files. What is required is that TPM keys are specified using a URI in one of the following forms.
tpmkey:uuid=c0208efa-8fe3-431a-9e0b-b8923bb0cdc4;storage=system
tpmkey:file=/path/to/the/tpmkey
The first URI contains a UUID, which is an identifier of the key, and the storage area of the chip (the TPM allows for system and user keys). The second form is used for TPM keys that are stored outside the TPM storage area, i.e., in a file encrypted by the TPM.
Let's see how we can generate a TPM key, and use it for TLS authentication. We'll need to generate a key and the corresponding certificate. The following command generates a key which will be stored in the TPM's user section.
$ tpmtool --generate-rsa --bits 2048 --register --user
The output of the command is the key ID.
tpmkey:uuid=58ad734b-bde6-45c7-89d8-756a55ad1891;storage=user
So now that we have the ID of the key, let's extract the public key from it.
$ tpmtool --pubkey "tpmkey:uuid=58ad734b-bde6-45c7-89d8-756a55ad1891;storage=user" --outfile=pubkey.pem
And given the public key we can easily generate a certificate using the following command.
$ certtool --generate-certificate --outfile cert.pem --load-privkey "tpmkey:uuid=58ad734b-bde6-45c7-89d8-756a55ad1891;storage=user" --load-pubkey pubkey.pem --load-ca-certificate ca-cert.pem --load-ca-privkey ca-key.pem
The generated certificate can now be used with any program using the gnutls library, such as gnutls-cli to connect to a server. For example:
$ gnutls-cli --x509keyfile "tpmkey:uuid=58ad734b-bde6-45c7-89d8-756a55ad1891;storage=user" --x509certfile cert.pem -p 443 my_host.name
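The same URI can also be used directly from a program. A minimal sketch follows, assuming that gnutls_certificate_set_x509_key_file() accepts tpmkey URIs in its key argument just as gnutls-cli does above (error handling omitted):

#include <gnutls/gnutls.h>

/* Load the certificate and the TPM-protected private key into a set of
   TLS credentials */
void load_tpm_credentials(gnutls_certificate_credentials_t *cred)
{
    gnutls_certificate_allocate_credentials(cred);
    gnutls_certificate_set_x509_key_file(*cred, "cert.pem",
        "tpmkey:uuid=58ad734b-bde6-45c7-89d8-756a55ad1891;storage=user",
        GNUTLS_X509_FMT_PEM);
}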
An easily noticed issue with TPM keys is that they are not mnemonic. There is only a UUID identifying the key, and no labels, which makes distinguishing between multiple keys a troublesome task. Nevertheless, TPM keys provide a cheap way to protect the keys used in your system.
Wednesday, April 18, 2012
A flaw in the smart card Kerberos (PKINIT) protocol
Reading security protocols is not always fun, nor easy. Protocols like public key Kerberos are hard to read because they just define the packet format and expect the reader to deduce the correct message sequence. I read it nevertheless, because I was interested in the protocol's interaction with smart cards. If you are not aware of the protocol, public key Kerberos, or PKINIT, is the protocol used in Microsoft Active Directory and described in RFC4556.
The idea of the protocol is to extend traditional Kerberos, which supports only symmetric ciphers, with digital signatures and public key encryption in order to support stock smart cards. A use-case is, for example, logging in to a Windows domain using a smart card. The protocol itself doesn't mention smart cards at all, probably because that was thought of as a deployment issue. Nevertheless, it was believed to be a secure protocol, and several published papers provided proofs of security for all operational modes of the protocol.
However, the protocol has an important flaw; a flaw that makes it insecure if used with smart cards. I wrote a detailed report on the flaw, but the main idea is that if someone has access to your smart card for a few minutes, they can log in using your credentials at any time in the future. You may think that an attack like that can be prevented by never lending your smart card to anyone, but how can you prove that no one borrowed it for a while? And if you believe the smart card PIN would protect it from theft, how can you know that the reader you insert your card into isn't tampered with? In a protocol with smart cards you'd expect a tampered reader not to be able to use the card after you retrieve it. This is not the case here, making the protocol unsuitable for smart cards, its primary use-case.
The most interesting issue, however, is the security proofs. The protocol was proven secure, but because the protocol never mentioned smart cards, researchers proved its security in a different setting than the actual use cases. So when reading a security proof, always check the assumptions, which are as important as the proof itself.
A few things went wrong with the design of this protocol, none of which is actually technical. The protocol is hard to read, and in order to get an overview of it you have to read the whole RFC. This is just bad. If you check figure 1 in the TLS RFC you get an overview of that protocol immediately. You might not know the actual contents of the messages, but the sequence is apparent. This is not possible with the Kerberos protocol, and that discourages anyone who might want to understand the protocol from a high-level description. Another flaw is that the protocol doesn't mention smart cards, its primary use-case. Smart cards were treated as a deployment issue, and readers of the RFC would never know about them. The latter issue occurs in many of the IETF protocols, where readers are expected to know where and how the protocol is used. As was demonstrated by the security proofs in a different setting, this is not always the case.
So, what can be done to mitigate the flaw? Unfortunately, without modifying the protocol, the only advice that can be given is something along the lines of:
- make sure you always possess the card;
- make sure you never use a tampered smart card reader.
Sunday, April 1, 2012
TLS in embedded systems
In some embedded systems, space is often a serious constraint. However, there are many such systems that contain several megabytes of flash, either as an SD memory card or as raw NAND, and have no real space constraint. For those systems, using a TLS implementation such as GnuTLS or OpenSSL provides performance gains that are not possible with the smaller implementations that target small size. That is because both of the above implementations, unlike the constrained ones, support cryptodev-linux to take advantage of the cryptographic accelerators widely present in several constrained CPUs, and support elliptic curves to optimize performance when perfect forward secrecy is required.
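Note that in GnuTLS the cryptodev back-end is a build-time option; if I recall the configure switch correctly, it is enabled with:

$ ./configure --enable-cryptodev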
I happened to have an old Geode (x86-compatible) CPU which contains an AES accelerator, so here are some benchmarks created using the nxweb/GnuTLS and nginx/OpenSSL web servers and the httpress utility. The figure on the right shows the data transferred per second using AES in CBC mode with the cryptographic accelerator, compared to GnuTLS' and OpenSSL's software implementations. We can clearly see that download speed almost doubles on a big file transfer when using the accelerator.
The figure on the left shows a comparison of the various key exchange methods on this platform using GnuTLS and OpenSSL. The benchmark measures HTTPS transactions per second, and the keys and parameters used are the same for both implementations. The key sizes are selected to be of equivalent security levels (1776 bits in RSA and DH are equivalent to 192 bits in ECDH, according to the ECRYPT II recommendations). We can see that the elliptic curve version of Diffie-Hellman (ECDHE-RSA) allows 25% more transactions in both implementations compared to Diffie-Hellman over a prime field (DHE-RSA). The plain RSA key exchange remains the fastest, at the cost of sacrificing perfect forward secrecy.
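For anyone who wants to reproduce such numbers, each key exchange can be isolated with a priority string; a sketch using gnutls-cli (the exact priority keywords are from memory and worth verifying against the manual):

$ gnutls-cli --priority "NORMAL:-KX-ALL:+ECDHE-RSA" -p 443 my_host.name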
As a side-note, it is nice to see that at the security level of 192 bits GnuTLS outperforms OpenSSL on this processor. The trend continues at higher security levels for the RSA and DHE-RSA methods, but the ECDHE-RSA method is interesting: even though OpenSSL has a more efficient elliptic curve implementation, GnuTLS' usage of nettle and GMP (which provide a faster RSA implementation) compensates, and their performance is almost identical.
Overall, in the few embedded systems that I've worked on, space was not a crucial limiting factor, and it was mainly this work that drove me into updating cryptodev for Linux. In those systems the space cost incurred by using a larger library was compensated by (1) the off-loading to the crypto processor of operations that would otherwise load the CPU and (2) the reduction in processing time due to elliptic curves.
However, this balance is system-specific, and what was important for my needs may not cover everyone else's, so it is important to weigh the advantages and disadvantages of the cryptographic implementations on your own system.
Sunday, March 18, 2012
Google summer of code
This year GnuTLS participates in the Google summer of code under the GNU project umbrella. If you are a student willing to spend this summer coding, check our ideas.
Saturday, February 18, 2012
The need for SSH-like authentication in TLS
After the Diginotar CA compromise it is apparent that verifying web sites using only a trusted certificate authority (CA) is not sufficient. Currently a web site's certificate is verified against the CA that issued it, and checked for revocation using the OCSP server the CA has set up. If the CA is trusted by the user, this process protects against man-in-the-middle attacks when visiting such a web site, and also against leakage of the web site's private key (e.g., via OCSP, as long as the leakage is reported to the CA). This is an automatic process that does not require the user to be involved, but it comes at a cost: there is a single channel for verification, the CA.
Certificate-based web-site verification tries to transfer the trust we have in a merchant to the digital world. That is currently done by having a "few" authorities that provide certificates to the merchants, and on those certificates we base our decisions. However, trust in trade requires more than that. For example, wouldn't a laptop offered by a merchant who approached you in a parking lot raise suspicions, even if he provided you with a legitimate-looking business card? Would his business card or credentials be the only thing you check? Those heuristics are unfortunately not currently available in the digital world.
If it weren't for the Chrome browser, which implemented a preliminary version of key pinning, the Diginotar compromise might not have been uncovered. With the maliciously-issued certificates, the automatic verification procedure was not reporting any problems; it was the change in public key that triggered the key pinning warning and eventually led to the Diginotar issue becoming public. Thus having an additional verification channel to standard PKI proved to be a success.
Key pinning, as of the 01 draft, is mostly server-driven. That is, the user has very little control over trusting a particular server key if the server operator doesn't ask for it. Another approach is the typical authentication method of SSH programs: trust on first use. That describes the concept where the public key of the peer is not verified, or is verified out-of-band, but subsequent connections to the same peer require the public key to remain the same. That approach has the advantage that it doesn't depend on the server operator setting anything up, but also the disadvantage that the user will be bugged every time the server changes its public key.
In any case, having such methods to complement standard certificate verification and revocation checks provides an additional layer of security. With such a layer, a CA compromise would not be enough for a large-scale man-in-the-middle attack, since changes in public keys would be detected and users would be warned. Such warnings might not be entirely successful in preventing all users from continuing, but they would raise suspicions about the legitimacy of the server, which might be enough.
For that, we implemented in GnuTLS a framework to be used either by applications that want to use key pinning, or for trust on first use (TOFU) authentication. It consists of three helper functions that store and verify public keys; they can be seen in the GnuTLS manual. The included program gnutls-cli has also been modified to support a hybrid authentication combining certificate authentication, TOFU and OCSP when the --tofu and --ocsp arguments are specified. The main idea of its operation is the one discussed here, and is shown in the example code in the manual.
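To give a feeling for the API, here is a condensed sketch along the lines of the manual's example, using the storage and verification helpers inside a certificate verification callback (the service name is illustrative; error handling omitted):

#include <gnutls/gnutls.h>

/* Pin the peer's public key on first use; report when it changes */
static int tofu_check(gnutls_session_t session, const char *host)
{
    unsigned int ncerts;
    const gnutls_datum_t *certs =
        gnutls_certificate_get_peers(session, &ncerts);
    if (certs == NULL)
        return -1;

    int ret = gnutls_verify_stored_pubkey(NULL, NULL, host, "https",
                                          GNUTLS_CRT_X509, &certs[0], 0);
    if (ret == GNUTLS_E_NO_CERTIFICATE_FOUND) {
        /* first connection: prompt the user, then store the key */
        gnutls_store_pubkey(NULL, NULL, host, "https",
                            GNUTLS_CRT_X509, &certs[0], 0, 0);
        return 0;
    }
    return ret; /* non-zero: the stored key differs, or another error */
}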
Sunday, January 1, 2012
Do we need elliptic curve point compression?
GnuTLS has recently added support for elliptic curves (ECDSA and elliptic curve Diffie-Hellman). Elliptic curves are an improvement on public key technologies, mostly in efficiency, because they require operations with much smaller integers for security parameters equivalent to RSA or DH. For example, a 1248-bit RSA key corresponds to a 160-bit ECDSA key, saving both in the space required to store parameters and in the time spent on calculations.
However, elliptic curve technology is considered a patent minefield, and quite a few attempts have been made to clarify the status; e.g., the IETF made an attempt with RFC6090 to describe only the fundamental algorithms, on which all patents are known to have expired. The GnuTLS implementation of ECC is based on this document.
One of the interesting "improvements" that is not included in the fundamental algorithms, and is still patented, is point compression. To describe it we need a small (but not detailed) introduction to elliptic curves. An elliptic curve (at least for the needs of the TLS protocol) is the set of points (x,y) that satisfy the following relation modulo a large prime p.
y² = x³ + ax + b
This set of points has some properties, i.e., a group law allowing point addition, scalar multiplication, etc. Without getting into details, those points need to be exchanged during a protocol run, and given that the points usually belong to a "large" prime field (e.g., 256 bits), reducing the exchanged bytes is an improvement. Point compression achieves that goal, so let's see how.
Say we have a point (x₀, y₀) satisfying y₀² = x₀³ + ax₀ + b. If x₀ is known, then the previous relation becomes a second-degree polynomial in y, which has two solutions.
y₁,₂ = ±√(x₀³ + ax₀ + b)
And surprisingly, that's all there is to point compression. Instead of sending the tuple (x₀, y₀), only x₀ and a bit identifying the sign of y₀ are sent. The receiver only needs to calculate y₁,₂ and then select the appropriate root using the sign bit. Of course, calculating square roots modulo a prime isn't as simple as in our example, but the idea is similar.
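To make the recipe concrete, here is a toy decompression in C over a deliberately tiny curve. For primes p ≡ 3 (mod 4) a modular square root of x₀³ + ax₀ + b is simply that value raised to (p+1)/4 mod p; real implementations use 256-bit arithmetic and typically encode the "sign" as the parity of y₀:

#include <stdio.h>

/* Modular exponentiation: returns (base^exp) mod m */
static unsigned long modpow(unsigned long base, unsigned long exp,
                            unsigned long m)
{
    unsigned long r = 1;
    base %= m;
    while (exp) {
        if (exp & 1)
            r = (r * base) % m;
        base = (base * base) % m;
        exp >>= 1;
    }
    return r;
}

int main(void)
{
    /* Toy curve y^2 = x^3 + x + 1 over F_23; since 23 = 3 mod 4, a
       square root of rhs is rhs^((p+1)/4) mod p */
    const unsigned long p = 23, a = 1, b = 1;
    unsigned long x0 = 0;   /* the compressed point: x0 plus a sign bit */
    unsigned long sign = 1; /* here: the parity of y0 */

    unsigned long rhs = (modpow(x0, 3, p) + a * x0 + b) % p;
    unsigned long y = modpow(rhs, (p + 1) / 4, p); /* one of the roots */
    if ((y & 1) != sign)
        y = p - y; /* pick the other root */

    printf("decompressed point: (%lu, %lu)\n", x0, y);
    return 0;
}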
So is this worthwhile? Are implementations like GnuTLS that don't infringe this patent at a disadvantage? While it is a clever idea, it doesn't really add much value in real-world protocols. A 256-bit curve has points of around 32 bytes, so the bytes saved per TLS handshake are 64 bytes overall. Such a handshake typically carries several kilobytes of data, and saving a few bytes isn't going to make a dramatic difference. It wouldn't be a top-priority addition even if it weren't a patented method.