Posts tagged with "standards"
In related news, the Certificate Authority Security Council has today posted my article "An Introduction to OCSP Multi-Stapling".
TLS Multi Stapling draft has become a TLS Working Group item
At the March IETF meeting in Paris, the TLS Working Group decided to accept my draft for fixing and expanding OCSP Stapling into "Multi Stapling" as a Working Group item, a decision confirmed on the mailing list in late April.
Since then, the draft has been updated twice. The update released this summer included, as a trial, support for the Server-Based Certificate Validation Protocol (SCVP) mechanism. The text for that was mostly contributed by Sean Turner, one of the IETF Security Area Directors.
Unfortunately, there was not much interest expressed from the Working Group for or against this expansion of the draft. I therefore decided to remove the SCVP text from the draft when I updated it a few weeks ago.
How to prevent version rollback attacks against TLS clients and servers?
The SSL and TLS protocols have a mechanism intended to allow clients and servers that support different versions to negotiate the highest mutually supported version (the client sends its highest supported version, and the server picks the lower of that and its own highest version), and to prevent an attacker from forcing the parties to negotiate an older version of the protocol that might be easier to break (a version rollback attack).
This is done in two different ways:
- First, the integrity of the entire handshake is checked when the client and server exchange the first encrypted packets. As long as the method used to check the integrity (a digest method or hash function, e.g., SHA-1 or SHA-256) is secure, this will prevent a successful version rollback attack.
- Second, the RSA-based method for agreeing on the TLS encryption key is defined in such a way that the client also sends a copy of the version number from its ClientHello, which the server then checks against the version number it actually received. This would protect the protocol version selection even if the hash function security for a version were broken. Unfortunately, a number of clients and servers have implemented this incorrectly, meaning that this method is not effective. Additionally, this protection is only available for RSA key exchange, not for methods based on Diffie-Hellman (DH) or Elliptic Curve Cryptography (ECC).
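These two mechanisms can be sketched in miniature. The helper names below are hypothetical, versions are modeled as (major, minor) pairs (e.g., (3, 1) is TLS 1.0), and real implementations of course operate on the TLS record layer rather than on Python tuples:

```python
def negotiate_version(client_max, server_max):
    """The server picks the lower of its own highest version and the
    version offered by the client."""
    return min(client_max, server_max)

def check_premaster_version(premaster_version, client_hello_version):
    """RSA key exchange: the first two bytes of the premaster secret must
    echo the version the client sent in its ClientHello, so a tampered
    ClientHello version is detected here."""
    return premaster_version == client_hello_version

# The parties settle on the highest mutually supported version:
assert negotiate_version((3, 2), (3, 1)) == (3, 1)  # client TLS 1.1, server TLS 1.0
# A rollback-tampered ClientHello fails the premaster echo check:
assert not check_premaster_version((3, 2), (3, 0))
```

The tuple comparison works because Python orders (major, minor) pairs lexicographically, which matches how SSL/TLS version numbers are compared.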
However, ever since TLS 1.0 was introduced, TLS clients have had to deal with legacy servers that did not respond well to being presented with a TLS 1.0 (or higher) version number by the client. The servers would simply shut down connections, return error codes, or misbehave in other ways. Once clients started supporting TLS Extensions, the situation became even more complex, since a large number of older servers would not accept connection attempts from clients sending TLS Extensions, despite all versions of TLS requiring servers to tolerate them.
In order to let users actually access such sites, if the connection initially failed, browser clients would retry with successively older protocol versions (down to SSL v2 while that was still offered, then SSL v3) until one of them worked.
The result: All clients voluntarily subject themselves to a version rollback attack, and none of the built-in protection mechanisms work.
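The voluntary rollback described above amounts to a retry loop. A minimal sketch (hypothetical names, ignoring the interleaved extension fallbacks real browsers also perform):

```python
def connect_with_fallback(try_handshake, versions):
    """Try each protocol version from highest to lowest until one succeeds.
    try_handshake(version) returns True on success. Because failures trigger
    the fallback, an active attacker who forces failures gets to choose
    which version ends up being negotiated."""
    for version in versions:
        if try_handshake(version):
            return version
    return None  # nothing worked at all

# An attacker who blocks every handshake above SSL 3 "wins" the negotiation:
attacker_allows = lambda v: v == (3, 0)
assert connect_with_fallback(attacker_allows,
                             [(3, 3), (3, 2), (3, 1), (3, 0)]) == (3, 0)
```

Note that none of the in-protocol protections apply here: each retry is a fresh, legitimate-looking handshake, so the server has no way to tell a broken legacy client from a downgrade attack.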
This has been an issue for at least 13 years, and it has never been fixed, since doing so would break "too many sites" (at present, 1.36% of all servers). Another factor is that the issue is still considered hypothetical, since there are no known attacks able to break any version of SSL 3 or TLS 1.x.
In December 2009, during the work on the TLS Renego patch specification, I suggested to the TLS WG that clients could treat a server's use of the Renegotiation Indication extension (the Renego patch) as an indication that the server is TLS version and extension tolerant, and that they should not perform version rollback when connecting to such servers. The outcome of that discussion was a reminder in RFC 5746 (the Renego patch specification) that servers MUST accept TLS Extensions and higher version numbers than they support; unfortunately, the Working Group did not add a requirement that clients must not permit version rollback against Renego patched servers. I did, however, implement such a policy in our implementation of the patch in Opera 10.50.
Recently (in the past year or so), there has been more discussion of this topic, starting with a suggestion by Eric Rescorla (aka EKR), one of the TLS WG co-chairs, to use special Cipher Suite values in the TLS handshake to indicate the version; Adam Langley from Google followed up with a slightly different version of the same concept.
I do not like the proposed solution, for several reasons:
- It would require updates to all clients and all servers. Considering that, 3 years after the disclosure of Renego, a serious protocol vulnerability, we have still not passed 75% patch coverage in my TLS Prober sample (it is currently at 73.7%), and that this will likely be considered a less serious problem to patch, I believe we would be very lucky to see over 50% coverage after the same time period.
- The logic around such a system would be complex, and needing a value for each defined TLS version would make it even more complex. There are also issues with how the server and client should behave in the event a mismatch is noted between the two version indication systems. Should the server upgrade the connection? Should it return an error code, and, if so, what should the client do? Something else?
- Any server that would be updated with this system would, by definition, already be version and extension tolerant. It would also already be patched for the Renego problem, probably years earlier.
- This solution would do nothing to protect connections with servers that are already version and extension tolerant, which are 98.3+% of the servers in my sample.
- This suggestion reuses a concept, the Special Cipher Suite Value or SCSV, that was introduced by the Renego patch as a hack to allow clients to signal their Renego patch status to the server when they are not sending TLS Extensions, such as when connecting to extension-intolerant servers. That was done to fix a serious security vulnerability, for which there was no other channel to convey the necessary information. In this case, I believe there are other, better methods to convey or deduce the information.
- The SCSV concept should only be used as a last resort, when nothing else will do the job. If its use is allowed too often, it becomes easier to reach for, and it ends up becoming a substitute for the TLS Extension mechanism. In fact, there has been at least one other suggestion to use SCSVs as a signaling mechanism in the last year, which, in my opinion, did not have a really good rationale for using the concept (and the Working Group did not think it would produce the desired security result, either).
In my opinion, it is better to find a way to reliably detect that a server is version and extension tolerant and use that as a proxy indication, and I think the TLS Renegotiation Indication Extension (the Renego patch) is as near to a perfect proxy indicator as we are going to find.
- The Renego patch RFC contains, as mentioned above, a reminder to implementers that version and extension tolerance is expected, and this recommendation has (mostly) been followed.
- While this method would not protect all of the 98.3+% of tolerant servers, it would immediately protect connections with the 73.7% of all servers that already support the Renego patch (based on the 571000 servers sampled by the TLS Prober).
- Of the Renego patched servers, only 0.17% are version and/or extension-intolerant, compared to 4.7% of the unpatched servers.
- Of the patched, but intolerant, servers, 33% are in a special category: they do not tolerate a specific TLS extension, either the Server Name Indication extension or the Certificate Status extension. I have yet to discover the reason for this intolerance among a small group of servers, but there are indications that some kind of TLS front-end is involved, and the largest collection of servers we have found is a group of small online banks hosted on a single 256-address IP subnet by the banking ISP Jack Henry & Associates.
- When clients stop supporting Renego unpatched servers, they can immediately remove the code supporting version rollbacks, too, since the Renego patched servers will not require that functionality.
This means that it would be possible for clients to use the Renego patch to protect connections with 73.7% of the servers on the web, immediately, by not allowing version rollback when connecting to a patched server.
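The resulting client policy can be sketched as follows (hypothetical names; a sketch of the Opera-style behaviour described here, not its actual code):

```python
def versions_to_offer(server_renego_patched, supported_versions):
    """If the server supports the Renegotiation Indication extension
    (RFC 5746), treat it as version and extension tolerant: offer only
    our highest version and never fall back. For legacy servers, the
    old downgrade sequence is still permitted."""
    ordered = sorted(supported_versions, reverse=True)
    return ordered[:1] if server_renego_patched else ordered

# Renego patched server: no rollback possible.
assert versions_to_offer(True, [(3, 1), (3, 3), (3, 0)]) == [(3, 3)]
# Unpatched server: the full fallback sequence remains available.
assert versions_to_offer(False, [(3, 1), (3, 3), (3, 0)]) == [(3, 3), (3, 1), (3, 0)]
```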
While the small number of problematic servers could cause some concern, I believe that these servers (712 of 421,000 sampled) can be quickly fixed by their owners once they learn of the need to do so. I think it is a much bigger problem that most users will otherwise keep connecting to these sites using the obsolete, 17-year-old SSL v3 protocol.
As there were other proposals on the table, I decided to write up my proposal as an Internet Draft and submit it to the IETF TLS Working Group for consideration. So far the topic has been discussed at one meeting, in Vancouver, but no decision about which approach to use has been made yet.
In my opinion, my proposal is the quickest and best way to remove the current system of automatic version rollback from use, while still maintaining compatibility with Renego unpatched servers that are version and/or extension-intolerant.
As mentioned, Opera has already implemented the proposed method, and has thus already protected its users against version rollback attacks when connecting to Renego patched servers. Unfortunately, this does create a couple of problems: sites such as www.glacierbank.com, www.banksafe.com, and www.bigskybank.com do not follow the TLS specification and require the client to perform a version rollback even though they are Renego patched, because they do not tolerate the Certificate Status extension; as a result, Opera is not able to connect to them.
Public Suffix information moving into the DNS?
Over the past several years, I, among others, have been working to convince the IETF to look at the problems around domain limitations on cookies and other cross-site information sharing, as demonstrated by the difficulty of preventing cookies from being sent to all web servers in domains like co.uk, most commonly called Public Suffixes. This has been difficult, since many in the DNS community are strongly opposed to the concepts that Public Suffixes involve.
In the meantime, the world has moved on, Mozilla completed its work on creating the Public Suffix List, and, in other areas of internet technology, concepts that required information provided by the Public Suffix List kept showing up.
One of the main issues with a system like the Public Suffix List is the problem of maintaining the list as an up-to-date resource. Currently, the information in the list has to be manually collected from many diverse sources, checked, and included in the list. As the number of Top Level Domains increases, both with new internationalized country code TLDs (such as those for many non-western countries) and with the hundreds of new generic TLDs that ICANN is currently working to introduce every year, the amount of work needed to maintain the list will grow rapidly, and the list would in any case never be truly up to date.
At the March IETF meeting in Paris, in a small meeting with several interested parties, Andrew Sullivan, co-chair of the IETF DNS Extensions Working Group, volunteered to investigate how to better implement the public suffix system in DNS. His suggestions are available in his document "Asserting DNS Administrative Boundaries Within DNS Zones".
As we have now started on a route to a system that I hope will be better than the one I have been proposing, I am suspending work on the SubTLD Structure draft. Opera will still generate the XML files we use to distribute Public Suffix information, but the work of developing the XML format into a suggested standard for distribution and aggregation of Public Suffix information will end.
Work on other drafts suspended
At present, the IETF HTTP Working Group is busy defining a new HTTP 2.0 specification, based on SPDY. Therefore, it is unlikely that my drafts for Cache Context and Cookie Origin will be taken up by the HTTP Working Group. It may be that this group, and the planned HTTP Authentication Working Group, will take a look at these areas and try to solve the issues those two drafts seek to address.
Given the low probability of an IETF Working Group taking up the drafts as Working Group items, and the general lack of interest expressed from other parties, I am suspending work on these drafts, as well. However, I may resume work on them if enough interest is expressed from relevant parties, particularly web developers and relevant websites. With enough interest and participation, it might be possible to refine either, or both, of these drafts and submit them for consideration as Individual Submission RFCs.
- The Public Suffix definition draft, introduced in this article. This version is substantially rewritten compared to earlier versions. Document link.
- TLS Multiple OCSP Stapling, introduced in this article. This version has some updates, drawing on implementation experience. Document link.
- Cache Context: how to organize resources in groups so that when you log out of a site the old pages cannot be displayed. Introduced here. Document link.
This had the benefit of preventing the easiest way of using this new attack vector: Inserting a request into the server's command stream, before letting the client take over and receive the malicious result.
It did not, however, help clients if the attack was staged against them, but such an attack is more difficult to accomplish and does not look any different from an ordinary certificate replacement attack, except when the server requires client certificate authentication.
Since the real Renego protocol patch (the RI Extension, RFC 5746) was released in February 2010, and Opera 10.50 was released with this update, I have occasionally received complaints from users and system administrators about Opera's security information item "The server does not support secure TLS renegotiation", claiming that we should not display it for their server because they have added the above-mentioned workaround. One of the references used to support this was Ivan Ristic's SSL Labs tester. Ivan has since updated the site, and also posted an article about this topic.
At least two server vendors have used the same argument about why they do not need to immediately ship an update supporting the RI extension.
I suspect that this perception, that the workaround is "sufficient", is delaying the deployment of updates with the RI-extension, so it is time to set the record straight:
Disabling server-side renegotiation was a quick-and-dirty, and very temporary, workaround deployed to mitigate the discovered problem while no other, more secure option was available. It was never meant to be a permanent solution, nor does it provide any real security.
One reason for this is an aspect of the Renego-problem that many forget: The attack can be used against the client, too, not just the server! Admittedly, the client-side attack is more difficult to carry off, and will usually be indistinguishable from a normal Man-In-The-Middle attack with a fake certificate, but there might still be situations where such an attack can yield results for an attacker, even against a server that has disabled renegotiation, because the clients cannot disable that functionality.
But the other reason this is a significant problem is that the client cannot know that the server has implemented the workaround! It has to treat any server that does not return the RI extension as if it is insecure. Even if the client were to waste time probing the server to "confirm" that the server refuses to renegotiate, the result would be inconclusive, for two reasons:
- An attacker can fake the response, particularly the aggressive "close the connection" response. So the client might think the server is "secure", while it isn't.
- Some servers do not accept a client-initiated renegotiation, but many servers, particularly ones requiring client certificate authentication, will tell the client that it wants to renegotiate the connection. Such server-initiated renegotiation is usually triggered in response to specific queries to the server, and these server-specific triggers are generally unknowable to a client trying to perform a general capability test of the server. So, once again, a client might think that the server is "secure", while it isn't.
For these reasons, no client has, to my knowledge, seriously considered probing the server; doing so would waste time and resources and produce a meaningless result.
Even worse, however, is that a recently released client that supports the RI extension cannot know whether a connection with an unpatched server has been intercepted and is being manipulated, because without the RI extension there is no way to tell securely that the client and server have only been talking to each other, and not also to an additional party.
Therefore, all server and OS vendors that still haven't released a Renego-patch for all their maintained versions (beta versions do not count): It is time to get down from the fence and release a patch. Now!
Over the past three months, about 12% of the tested servers have been updated to support the new TLS Extension that was developed to fix the issue. Extrapolating, and assuming the same growth rate, this means that it will take more than two years before "all" servers are patched, which in my opinion is much too long to leave a security vulnerability such as this unpatched.
At the same time, we have been observing a pattern that I think is of some long-term concern: most of the servers that have been patched since early April are not fully TLS compliant. Specifically, these servers do not tolerate a client identifying its highest supported protocol version as 4.1 (a currently non-existent version; SSL 3 and TLS 1.x use protocol versions 3.x).
In the past few weeks, as many as 80-90% of the newly patched servers have refused to negotiate with our tester (the TLS Prober) when it claimed to support the hypothetical v4.1 TLS protocol version (or, as I call it, "TLS NG"). This is much higher than the 69% of all servers that generally exhibit the same problem.
The major TLS protocol version 4 is currently a hypothetical version of the protocol, and there are AFAIK no plans to write a specification that will use this major version number.
So, why worry?
I am concerned. If this version intolerance, which _is_ a violation of the TLS specification, is still widespread when protocol version 4 is defined, then we will, at best, have an interoperability problem; at worst, we could have a serious security vulnerability.
Over the past 10+ years, TLS clients have had to cope with version-intolerant servers, because older servers were not written to handle clients supporting newer protocol versions than their own. This has usually been handled by silently disabling the newer versions when the server does not tolerate them. The problem persisted well into TLS 1.0 deployment and also extended to TLS Extension support, which required clients to implement further fallbacks. This type of problem delayed Opera's activation of TLS 1.1 and TLS Extensions by more than a year, after a scavenger hunt revealed the size of the problem, because we had to develop a way to handle the intolerant servers [1, 2].
These fallbacks not only add serious complications to our code (and every browser's code); they also have the potential to create security problems down the road, if (or, more likely, when) a security problem develops in an older version of SSL or TLS that allows an attacker access to the protected data.
Therefore, it was very good that the TLS Renego RFC specifically reiterated the long-standing requirement for version and extension tolerance. Opera followed up on that by requiring Renego patched servers to tolerate a TLS 1.2 (protocol version 3.3) handshake, as mentioned in our article when we started testing.
So far, all servers in our list that have been updated with the Renego patch have implemented this properly with respect to SSL v3 and TLS 1.x tolerance. Very good!
However, it looks like some vendors unfortunately did not thoroughly think through what the version tolerance requirement in the TLS specifications really means. It does not mean, and has never meant, "We can refuse to negotiate with clients offering protocol version 4.0 or larger". It means "That client says it supports version 4.0 or higher, but we only support version 3.x, so we will only talk version 3.x with it".
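In code, the difference between the two interpretations is small but crucial. A sketch, with hypothetical names and versions as (major, minor) pairs:

```python
def tolerant_server(client_version, server_max=(3, 3)):
    """Correct behaviour: an unknown, newer client version is simply
    negotiated down to the server's own highest version."""
    return min(client_version, server_max)

def intolerant_server(client_version, server_max=(3, 3)):
    """The buggy behaviour: refuse outright when the client offers a
    version the server has never heard of."""
    if client_version > server_max:
        raise ConnectionError("handshake refused")
    return client_version

# A client offering the hypothetical "TLS NG" version 4.1:
assert tolerant_server((4, 1)) == (3, 3)  # negotiates TLS 1.2, as required
```

The intolerant variant is what forces clients to keep the fallback machinery alive; the tolerant one makes it unnecessary.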
In some cases, it seems that downstream vendors released updates that include the Renego patch but did not pick up the update that fixed the version intolerance problem. It might be that they did not consider it a security patch.
If these servers are still active when TLS NG (or whatever the next major version of SSL/TLS will be called) is defined and gets implemented, clients will either have to break these sites by refusing to connect to them, or we will have to reintroduce the protocol fallback. As mentioned above, the fallback could create a security vulnerability.
Further, if a new major version of TLS is ever created, it will most likely be for one or both of the following reasons:
- New, improved protocol techniques are introduced that are fundamentally incompatible with those currently used in TLS
- The old protocol's cryptographic protection methods are discovered to have serious vulnerabilities, requiring a complete rewrite of the protocol
In the latter case, assuming the known problems are not so serious that support for older versions must be discontinued immediately, allowing an automatic fallback would let an attacker trick a new client into talking to a new server using an old and _vulnerable_ protocol. Oops!
Therefore, since all servers are now being upgraded to protect against the Renego issue, we need to nip the v4-intolerance problem in the bud while it is still relatively small.
So far, we have been able to identify one vendor and have started to contact them about the issue. However, there is not yet a clear pattern in the server version information, which makes it very difficult to determine what other vendors are involved. Based on the observed variation in server agent strings, it is also possible (probably very likely) that the actual TLS servers in many cases are SSL/TLS front-end accelerators or firewalls that do not directly inform the client of their involvement in the connection.
We will continue to attempt to identify vendors and contact them about this issue. We also have several other items being developed in relation to this, such as an online test utility.
Currently, the *original* SSL/TLS implementations that we know have implemented correct version tolerance are (some have not been released yet):
- OpenSSL 0.9.8m (for cherry pickers, you can find the relevant patch here)
- OpenSSL 1.0.0
- NSS 3.12.6
- Windows 7 (update not yet released, AFAIK; probably also applies to other Windows versions)
- RSA BSafe (version unknown, not sure if it has been released)
There may be other implementations which have not been included.
One thing we have discovered is that some customized variants of OpenSSL with the Renego patch do not include the above-mentioned version tolerance patch. The maintainers of such derived distributions should include that patch in their codebase.
For other vendors who wonder whether they need to do anything to their Renego patched system, I may be able to help them if they contact me and provide a URL to a test server that I can test.
The update of the SSL and TLS protocol to fix the "Renego" vulnerability was published earlier today.
The RFC can be downloaded from http://www.rfc-editor.org/rfc/rfc5746.txt
As mentioned in the article, Opera 10.50 Beta 1 includes support for the updated protocol, although it is not fully activated yet due to usability reasons. In related news, Mozilla included support in their nightlies earlier this week.
What this extension does is provide a way for a client to ask the server to perform the OCSP revocation check for its own certificate, rather than the client making a separate connection to the issuer's OCSP responder. The benefit is that the client completes the connection faster, since it does not have to wait for the OCSP responder. Also, if the server caches the OCSP response for a while, traffic to the OCSP responder becomes much lower (and much less expensive): mostly servers, rather than all the clients visiting the site, will request updates. This is called TLS OCSP stapling.
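On the wire, the client's side of this is tiny: the Certificate Status Request extension (type 5, defined in RFC 6066) in the ClientHello. A sketch of its encoding, in the empty-list form that clients typically send (helper name hypothetical):

```python
import struct

def status_request_extension():
    """RFC 6066 Certificate Status Request: status_type 1 (ocsp),
    followed by an empty responder_id_list and empty request_extensions,
    each with a 2-byte length prefix."""
    body = struct.pack("!BHH", 1, 0, 0)  # ocsp(1), no responder IDs, no extensions
    # Extension header: type 5 (status_request), then the body length.
    return struct.pack("!HH", 5, len(body)) + body

assert status_request_extension() == b"\x00\x05\x00\x05\x01\x00\x00\x00\x00"
```

The server that supports stapling answers with a CertificateStatus handshake message carrying the cached OCSP response; a server that does not simply omits the extension from its ServerHello.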
This mechanism only works for the server's own certificate. It does not work for any of the other certificates in the chain, and these days most Certificate Authorities (CAs) use at least one intermediate certificate, and some use four or more. Today, all the revocation information about these is retrieved using CRLs, not OCSP. This means the information is not as up-to-date as it could be with OCSP, as CRLs (particularly for intermediates) are valid for much longer periods than OCSP responses. This may not be an issue today, because most intermediates are controlled by the CA or by other relatively big CAs, but it could become a problem if CAs start issuing large numbers of intermediate CA certificates that they do not control, for example to corporate customers. This might become a possibility if/when better domain limitations are widely implemented in browsers. If one of those corporate customers or an independent sub-CA starts issuing bad certificates, it is imperative to be able to revoke those CA certificates quickly, which would be difficult if the CRL is only updated every 12 months. OCSP responses, on the other hand, are usually valid for less than a week.
Some intermediate CA certificates are now issued with OCSP URLs specified, but no browsers are currently using them, and it is my recommendation that they do not start. The reason is that having all clients use OCSP to check intermediate CA certificates would increase traffic to those servers severalfold, perhaps dozens of times over once TLS OCSP Stapling becomes widespread, meaning the bandwidth cost for the CA would increase significantly. A number of CAs have already been concerned about the cost of supporting OCSP just for server certificates while waiting for stapling to become widespread; they would not like the cost of supporting OCSP for one or more intermediate certificates as well.
The obvious solution is to expand the TLS extension to support multiple OCSP responses, and handling the responses was indeed a fairly straightforward task. It turned out, however, that it was not practical to reuse the existing Certificate Status Request extension, since it only allows a single method to be specified, while multiple methods would be necessary to support servers that do not understand the new response format. The limitation is due both to a hard restriction in TLS, which permits only one entry for a given extension in any extension list, and to the fact that the request extension itself only permits a single format.
The solution in the end was to create a new extension that permits multiple formats to be specified, not just a single one as before.
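The new extension carries a list of request items, each with its own status_type, so a client can offer both the old single-response format and the new multi-response format at once. The work described here was later published as RFC 6961 (status_request_v2, extension type 17, with status types ocsp = 1 and ocsp_multi = 2); a sketch of that encoding, assuming those numbers and an empty request body per item:

```python
import struct

def status_request_v2(status_types=(1, 2)):
    """Sketch of the multi-format request list: each item carries its own
    status_type (1 = ocsp, 2 = ocsp_multi), a 2-byte request length, and
    a request body (here an empty OCSPStatusRequest: empty
    responder_id_list and empty request_extensions)."""
    items = b""
    for st in status_types:
        request = struct.pack("!HH", 0, 0)  # empty responder IDs + extensions
        items += struct.pack("!BH", st, len(request)) + request
    body = struct.pack("!H", len(items)) + items  # 2-byte list length prefix
    # Extension header: type 17 (status_request_v2), then the body length.
    return struct.pack("!HH", 17, len(body)) + body

ext = status_request_v2()
assert ext[:2] == b"\x00\x11"  # extension type 17
```

Because the list is extensible, future status formats can be added as new status_type values without minting yet another extension.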
I have just submitted an Internet Draft to the IETF TLS Working Group defining such an extension and new response format. The draft is based on the existing definition with enhancements for the new requirements.
I hope the TLS WG will take on the work to help me complete the draft so that we can get this new functionality into all new clients and servers as soon as possible.
Comments intended as contributions to the draft should also be posted at the TLS WG mailing list.
There are no changes in the DNS heuristic draft, but the subTLD draft has been updated based on my experience implementing the Public Suffix (a.k.a. SubTLD or Effective TLD) support planned for future versions of Opera (well past v10.0). As a result, the certs.opera.com server, which is used for Opera's online Root Certificate repository, has now started hosting Opera's XML-based Public Suffix List,
based on and generated from the Public Suffix List project's list. The XML files are also available as a single download (without the digital signature), under the Mozilla Tri-license (MPL, GPL, LGPL), from our online download location.
The drafts are available at
See also the Rootstore update.
This Draft describes Opera's current heuristic approach to avoiding sending cookies to registry-like domains such as co.uk (the "Cookie Monster Bug"). First discussed here.
This Draft describes an improved approach to handling the "Cookie Monster Bug", using an online blacklist of registry-like domains. Also first discussed here.
The Mozilla team's work on "Effective TLDs" is based on an early version of this suggestion. The result of this work is now available from PublicSuffix.org, and is AFAIK currently used for one or more features by Chrome Beta, FF3, and IE8 Beta.
To reduce the complexity of the specification, and to avoid excluding possible solutions, I have now removed the previous suggestions for how the repository should be generated, and instead briefly mention some possibilities in the appendices.
This Draft describes the ideal solution to the "Cookie Monster Bug": that everybody starts using a new format for cookies that completely removes the problem. First discussed here.
This Draft describes a way for sites to tell the client that a group of webpages are related, which can be used to better organize logouts. First discussed here.
This specification deals with the trust decisions that users must make online, and with ways to support them in making safe and informed decisions where possible.
This document specifies user interactions with a goal toward making security usable, based on known best practice in this area. Subsequent testing of this specification will include conformance, interoperability, and usability testing.
If you want to comment on the document you are welcome to do so:
The W3C Membership and other interested parties are invited to review the document and send comments to email@example.com (with public archive) through 15 September 2008. We would appreciate it if comments follow these guidelines for writing good issues.