Our new 31st data center: Düsseldorf | 05-03-15
Hello, Düsseldorf. Nestled in the center of the Lower Rhine basin lies the bustling city of Düsseldorf, capital of Germany’s most populous state, North Rhine-Westphalia. Given its status as an international business and telecommunications hub, and serving a population larger than that of the Netherlands, our data center in Düsseldorf is an important addition to our European network. This means not only better performance in Germany and Northern Europe, but additional redundancy for our 10 other data centers throughout Europe, including our first German data center in Frankfurt.
For the local audience: dear friends in Düsseldorf, your Internet connection has gotten faster, and you can now browse securely. Enjoy.
Not just any data center
Düsseldorf comes to life.
Our Düsseldorf data center holds a special place in the heart of our legal counsel Ken Carter. When he’s not helping to build a better Internet, he is likely to be found regaling the office with tales of his adventures in the quaint medieval town of Bad Honnef am Rhein, just south of our new data center. Bad Honnef, most famously known as the worldwide headquarters of Birkenstock, can now add one more tale of note. Equidistant between Frankfurt and Düsseldorf, it is now one of the cities best served by CloudFlare in Germany.
Düsseldorf is the first in a wave of new CloudFlare data centers coming this year. At this very moment we have infrastructure present in, or in flight to, over 10 new sites. If you can guess one of the next three (in the comments below), we'll send you some free CloudFlare gear.
Kind regards, your CloudFlare
Photo source: Sergey Sokolov; image used under Creative Commons license.
No upgrade needed: CloudFlare sites already protected from FREAK | 04-03-15
The newly announced FREAK vulnerability is not a concern for CloudFlare's SSL customers. We do not support 'export grade' cryptography (which, by its nature, is weak) and we upgraded to the non-vulnerable version of OpenSSL the day it was released in early January.
CC BY 2.0 image by Stuart Heath
Our OpenSSL configuration is freely available on our GitHub account here, as are our patches to OpenSSL 1.0.2.
We strive to stay on top of vulnerabilities as they are announced; in this case no action was necessary as we were already protected by decisions to eliminate cipher suites and upgrade software.
We are also proactive about disabling outdated protocols and ciphers (such as SSLv3 and RC4) and keeping up to date with the latest and most secure techniques (such as ChaCha20-Poly1305, forward secrecy, and elliptic curves).
Protecting web origins with Authenticated Origin Pulls | 28-02-15
As we have been discussing this week, securing the connection between CloudFlare and the origin server is arguably just as important as securing the connection between end users and CloudFlare. The origin certificate authority we announced this week will help CloudFlare verify that it is talking to the correct origin server. But what about verification in the opposite direction? How can the origin verify that the client talking to it is actually CloudFlare?
TLS Client Authentication
TLS (the modern version of SSL) allows a client to verify the identity of the server it is talking to. Normally, a TLS handshake is one-way, that is, the client is able to verify the server's identity, but the server is not able to verify the client's identity. What about when both sides need to verify each other's identity?
Enter TLS Client Authentication. In a client-authenticated TLS handshake, both sides provide a certificate to be verified. If the origin server is configured to accept only requests that use a valid client certificate from CloudFlare, requests which have not passed through CloudFlare will be dropped (as they will not have our certificate). This means that attackers cannot circumvent CloudFlare features such as our WAF, even via an attack like TCP source IP spoofing, which could typically be used to make an origin server believe malicious requests have passed through CloudFlare's network.
To implement TLS client authentication in CloudFlare, one of our engineers, Piotr Sikora, added support to nginx. This code is open source and has been merged into the official nginx 1.7 branch, and can be used by anyone utilizing nginx's proxy module.
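For origins that want to experiment before official configuration examples land in our knowledge base, here is a rough sketch (in Go rather than nginx, purely for illustration) of what requiring a client certificate looks like on the origin side. The file names are assumptions, and the CA bundle would be the CloudFlare origin-pull certificate described below:

```go
package main

import (
	"crypto/tls"
	"crypto/x509"
	"io/ioutil"
	"log"
	"net/http"
)

func main() {
	// Load the CA certificate that signs CloudFlare's client certificates.
	// "origin-pull-ca.pem" is an assumed local filename.
	caPEM, err := ioutil.ReadFile("origin-pull-ca.pem")
	if err != nil {
		log.Fatal(err)
	}
	pool := x509.NewCertPool()
	if !pool.AppendCertsFromPEM(caPEM) {
		log.Fatal("failed to parse CA certificate")
	}

	server := &http.Server{
		Addr: ":443",
		TLSConfig: &tls.Config{
			// Require and verify a client certificate on every handshake;
			// requests that did not pass through CloudFlare fail right here.
			ClientAuth: tls.RequireAndVerifyClientCert,
			ClientCAs:  pool,
		},
		Handler: http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
			w.Write([]byte("hello from the origin\n"))
		}),
	}
	// server.pem / server-key.pem are the origin's own certificate and key.
	log.Fatal(server.ListenAndServeTLS("server.pem", "server-key.pem"))
}
```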
Enabling Authenticated Origin Pulls
Generally, enabling Authenticated Origin Pulls does not cause any problems with a website, even if client certificates are not validated. However, in the event a website uses client certificates for other purposes, the CloudFlare origin-pull certificate may conflict and cause problems. Consequently, Authenticated Origin Pulls are an opt-in setting for CloudFlare customers. This service is available for all levels of CloudFlare plan: Free, Professional, Business, and Enterprise.
In order to enable Authenticated Origin Pulls for your CloudFlare protected website, you will need to use our new dashboard (currently in beta). To access this beta dashboard, first log in to your CloudFlare account. In the lower right corner of the page there is a button labeled "Try Our New Dashboard." Click and log in again. At this point, you're in our new dashboard with access to all your existing domains and settings through a completely new user interface.
There will be more information about this new dashboard in the near future, but feel free to check it out. You can continue to switch freely between the old and new dashboards.
CloudFlare presents client certificates signed by a dedicated CA; that CA's certificate is available from https://origin-pull.cloudflare.com/
Origin Server Configuration
We will include configuration examples for popular web servers in our CloudFlare Support Knowledge Base in the next week.
Thoughts on Network Neutrality, the FCC, and the Future of Internet Governance | 27-02-15
Today the United States Federal Communications Commission (FCC) voted to extend the rules that previously regulated the telephone industry to now regulate Internet Service Providers (ISPs). The Commission did this in order to preserve the principle of network neutrality. Broadly stated, this principle is that networks should not discriminate against content that passes through them.
At CloudFlare, we are strong proponents of network neutrality. My co-founder, Michelle Zatlyn, sat on the FCC's Open Internet Advisory Committee. The work of that committee played a role in guiding today's vote. So there is a large part of us that is celebrating today.
At the same time, I have deep concerns that proponents of a free and open Internet may look back on today not as a great victory, but as the first step in what may turn out to be a devastating loss. The Internet has largely been governed from the bottom up by technologists seeking rough consensus and running code. Today's action by the FCC may mark the beginning of a new era where the Internet is regulated by lawyers from the top down. As a technologist and recovering lawyer, that worries me.
The Threat to the Network
If you think about it, it's a miracle that the Internet has been as neutral as it has for as long as it has. Throughout history, owners of networks — be they shipping channels, country clubs, Ivy League colleges, banks, roads, or actual telecommunications networks — have profited by charging tolls to access those networks. The Internet has, for the most part, resisted this. There is no "long distance" charge on the Internet. You don't pay more to access one website or use one mobile app than another. Bytes, for the most part, are bytes regardless of where they come from or go to.
That is not to say there is no risk. Networks tend to be natural monopolies. Consolidation of what are known as "terminating access providers" — companies like Comcast that act as ISPs for customers — creates a choke point for the Internet. These companies are in a position to start charging not only their users but also content providers. But, to date, the number of instances of actual abuse by ISPs in the United States has been remarkably low.
When proponents of government regulation of the Internet want to point to abuse their poster child is often Netflix. The company's fights with Comcast and other ISPs are well documented. There's little doubt that Netflix's performance suffered on some ISPs where the infrastructure serving the company's movie-streaming bytes became overloaded.
Lawyers have a saying: hard cases make bad law. Netflix is a hard case. According to the company, during primetime viewing they are responsible for more than 30% of all U.S. Internet bandwidth use. Remember that the company only launched streaming video seven years ago, and it only really took off about four years ago. In other words, the company has increased U.S. Internet use by a third — on its own — in an extremely short amount of time.
As a consumer, it's easy to think of the Internet as an infinite resource, but real atoms carry all those bytes and they have finite capacities. Who pays the cost of delivering that additional 30% of traffic is not straightforward. While it's tempting to say that an ISP like Comcast should pay, is it fair, then, for them to spread that cost over all Comcast subscribers regardless of whether they are Netflix subscribers?
Again, because of their remarkable scale and the rate of growth, Netflix makes for an extremely hard case and therefore a tricky poster child. In the end, assuming the whisper numbers are correct on the terms, the commercial arrangement worked out between ISPs and Netflix seems from my perspective to have been a pretty reasonable outcome. The market worked here when giants battled. However, I worry about the non-giants who so far haven't been poster children.
The Internet Miracle
The good news is that beyond the exceptional case of Netflix, other abuses of the principle of network neutrality have proven more difficult to find. By and large, today on the Internet, bytes are bytes. CloudFlare delivers a large number of bytes and, so far, we have never been "shaken down" by an ISP in order to reach their customers. Quite the opposite: as we've grown, our cost of delivering a byte has rapidly decreased. In the last 12 months, our global blended average per-byte cost has fallen by half, even as we've expanded in expensive markets like Australia, Latin America, and Africa.
That's not to say I don't worry. I worry a lot. Comcast, or other large ISPs, could likely extract tolls from a company like ours if they threatened to rate limit or outright block their customers from reaching our network. Legally, before today's FCC vote, there was likely no rule that stopped them. And yet, to date, in spite of their market power, the bottom-up, normative approach to Internet governance has largely kept ISPs in check and kept the network neutral. This arrangement feels fragile and likely won't last forever, but up until now, I think this is rightly described as the Internet miracle.
The Challenge of Regulating Technology
There is always tension between law and technology. Technology moves fast. Law moves slow. Technology is nimble. Law is a lumbering beast. I like to joke that my jobs as a lawyer and as a programmer were the same: in both cases I was writing code; it's just that as a lawyer it took 10 years and an expensive legal trial for the compiler to let me know I had a bug.
Bad things happen when lawyers and politicians step in too early to regulate technology markets that are still developing. In this case, the FCC may not only be stepping in too early — before we know what the actual, not just theoretical, problems it aims to regulate will be — but it was also forced by previous court decisions and political realities to apply rules written for traditional telephone networks rather than craft a new regulation specific to the unique nature of the Internet.
The exact rules the FCC just voted for are not yet public, and we don't know exactly what they will contain. What we do know, however, is that they will stem from Title II of the Communications Act. The problem is that the Communications Act comes with 80 years of baggage. The Act, originally passed in 1934 under the Franklin D. Roosevelt administration, has been interpreted by courts and rewritten by Congress a multitude of times. For the programmers reading this, what the FCC is doing is like trying to compile Node.js on MS-DOS. Even if they get it to work, inevitably there will be unintended consequences.
Rules Versus Standards
There is a philosophical debate in the law about the role of "rules" versus "standards." For quite some time, in the United States, most roads have had a defined speed limit: say 55 miles per hour. That's a rule. Some stretches of road in Montana were different. There the speed limit was listed as "reasonable and prudent" speed. In some cases, say during a sunny day on a straight, clear road, what is "reasonable and prudent" may be a lot faster than 55 mph. On the other hand, if it's snowing, 55 mph may be far faster than "reasonable and prudent." Montana, in other words, set a standard, not a rule, for their speed limit.
Generally, when conditions are well defined and the optimal outcome is known, rules are appropriate. When the conditions are more uncertain and the optimal outcome is unknown, standards are appropriate. I worry that the FCC is about to set down a series of rules for network neutrality before the real threat and the optimal outcome are known.
The other problem with rules is that they are brittle. Teams of lawyers will comb through whatever the FCC finally publishes and find any loopholes. There will be defined bright lines going forward and, make no mistake, ISPs will now get as close to those lines as they can. Whatever the Internet's rough consensus of "acceptable" was before, it's about to be thrown out in favor of a set of rules written by lawyers. Ironically, that may end up resulting in a regulated network that is less neutral than what we have today.
What I Wish the FCC Had Done
Setting aside political realities, I wish the FCC had done something quite different today. In my ideal alternate universe the FCC chairman would have given something like the following speech:
The Internet is one of the greatest inventions in human history. What has made the Internet great is that anyone, anywhere can publish an idea and reach a global audience. Preserving that ideal requires that the providers that make up the Internet not discriminate against one idea or another; that they not favor one byte flowing across their network or disfavor another. This principle of network neutrality is core to the continued success of the Internet.
We, as the FCC, are stewards of the communications systems in the United States. The policies of the United States also serve as a precedent for network regulators throughout the rest of the world. As such, when we act, we must do so with extreme care. To date, the Internet has been governed by a consensus-driven, bottom-up approach. That approach has succeeded in creating the most open, accessible network in human history. Our actions should serve to strengthen, not replace, that approach.
That said, there are real concerns about the increasing market power of Internet Service Providers. These providers hold a 100% monopoly on delivering content to each of their customers. Abusing that monopoly position is unacceptable and we, as the FCC, will not tolerate it. We hold, under Title II of the Federal Communications Act, broad powers to crack down on providers that are abusing their trusted position on the network. If we find abuses, ISPs be warned: we will not hesitate to use the full extent of our power.
Today, we are articulating a simple standard on network neutrality to which all ISPs will be held:
- Providers should not discriminate against or for any byte flowing across their network
- Providers should continue to invest in their networks to provide higher quality of service across the entire Internet
- Providers should not offer so-called "fast lanes" that content providers may purchase in order to favor their own content
To monitor compliance with this standard, I have hired a team of 100 investigators who will be fielding complaints around the clock from consumers and businesses about ISPs that fail to live up to these standards. We will take allegations of ISPs that do not follow these standards seriously and investigate them to the fullest extent. Non-neutral networks will be put through the equivalent of a legal root canal. And if we find that our current legal framework does not offer the tools to remedy abuses, make no mistake that we can and will act quickly under our full powers of Title II. To ISPs: you're on notice. To Internet users: we will be vigilant.
Now, of course, that would satisfy no one. John Oliver would take to his HBO show to yet again call Chairman Wheeler a do-nothing "dingo." But, I have a hunch, ISPs would actually act more cautiously and self-police out of fear they incur the full wrath of an FCC armed with specific consumer complaints. The risk of setting bad rules with their unintended consequences would be diminished. And, for at least a bit longer, we may preserve the Internet miracle: where technologists make decisions from the bottom up by rough consensus and running code, guided by the principle that any idea from anyone should be able to reach a global audience.
Enforce Web Policy with HTTP Strict Transport Security (HSTS) | 26-02-15
HTTP Strict Transport Security (HSTS, RFC 6797) is a web security policy technology designed to help secure HTTPS web servers against downgrade attacks. HSTS is a powerful technology which is not yet widely adopted. CloudFlare aims to change this.
Downgrade attacks (also known as SSL stripping attacks) are a serious threat to web applications. This type of attack is a form of man-in-the-middle attack in which an attacker can redirect web browsers from a correctly configured HTTPS web server to an attacker controlled server. Once the attacker has successfully redirected a user, user data, including cookies, can be compromised. Unfortunately, this attack is outside the realm of pure SSL to prevent. This is why HSTS was created.
These attacks are very real: many major websites have been attacked through SSL stripping. They are a particularly powerful attack against otherwise well secured sites, as they bypass the protections of SSL.
HSTS consists of an HTTP response header with several parameters -- including a configurable duration for which client web browsers cache and continue to enforce the policy even if the site itself changes. Through CloudFlare, it is easy to configure on a per-domain basis with standard settings.
HSTS causes compliant browsers to strictly enforce web security practices. Specifically, it automatically turns all HTTP links into HTTPS links within an application, and it upgrades all SSL errors from warnings or bypassable errors into non-bypassable errors.
The configurable parameters for HSTS are listed below; a sketch of the resulting header follows the list:
- Enable HSTS (Strict-Transport-Security): On/Off.
- Max Age (max-age): This is essentially a "time to live" field for the HSTS header. We recommend 6 months in order to earn an A+ rating from Qualys SSL Labs. Web browsers will cache and enforce HSTS policy for the duration of this value. A value of "0" will disable HSTS.
- Apply HSTS Policy to subdomains (includeSubDomains): Applies HSTS policy to every host in a domain.
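To make the mechanics concrete, here is a minimal sketch of the header these parameters produce, as a Go origin could emit it itself (CloudFlare adds the header for you when the feature is enabled). The 15552000-second max-age is the roughly six-month value recommended above:

```go
package main

import (
	"log"
	"net/http"
)

// hsts wraps a handler and adds a Strict-Transport-Security header to
// every response: a ~6-month max-age (in seconds), applied to subdomains.
func hsts(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Strict-Transport-Security",
			"max-age=15552000; includeSubDomains")
		next.ServeHTTP(w, r)
	})
}

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("secure hello\n"))
	})
	// Only serve HSTS over HTTPS; cert.pem / key.pem are assumed paths.
	log.Fatal(http.ListenAndServeTLS(":443", "cert.pem", "key.pem", hsts(mux)))
}
```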
There is one caveat to HSTS: it's a policy cached in each browser. If you configure HSTS settings, browsers will cache those settings for the duration of max-age. We recommend 6 months. If your site becomes inaccessible over strongly-configured HTTPS, web browsers will refuse to connect to the site on HTTP until the policy expires in the browser. Therefore, it's important that you set up HSTS only after establishing a stable SSL configuration. Fortunately, CloudFlare's default SSL settings are perfectly compatible with HSTS.
In order to enable HSTS for your CloudFlare protected website, you will need to use our new dashboard, currently in beta. To access this beta dashboard, first log in to your CloudFlare account. In the lower right corner of the page there is a button labeled "Try Our New Dashboard." Click and log in again. At this point, you're in our new dashboard with access to all your existing domains and settings through a completely new user interface.
There will be more information about this new dashboard in the near future, but feel free to check it out. You can continue to switch freely between the old and new dashboards.
Universal SSL: Encryption all the way to the origin, for free | 24-02-15
Last September, CloudFlare unveiled Universal SSL, enabling HTTPS support for all sites by default. All sites using CloudFlare now support strong cryptography from the browser to CloudFlare’s servers. One of the most popular requests for Universal SSL was to make it easier to encrypt the other half of the connection: from CloudFlare to the origin server.
Until today, encryption from CloudFlare to the origin required the purchase of a trusted certificate from a third party. The certificate purchasing process can be tedious and sometimes costly. To remedy this, CloudFlare has created a new Origin CA service in which we provide free limited-function certificates to customer origin servers.
Today we are excited to announce the public beta of this service, providing full encryption of all data from the browser to the origin, for free.
Encrypted all the way
CloudFlare offers three modes for HTTPS: Flexible, Full and Strict. In Flexible mode, traffic from browsers to CloudFlare is encrypted, but traffic from CloudFlare to a site's origin server is not. In Full and Strict modes, traffic between CloudFlare and the origin server is encrypted. Strict mode adds validation of the origin server’s certificate. We strongly encourage customers to select Strict mode for their websites to ensure their visitors get the strongest data security possible.
As we previously discussed, sites on CloudFlare’s Free plan default to Flexible SSL mode. To take advantage of our Strict SSL mode, it’s necessary to install a certificate on the origin server, which until now meant buying one from a third party. Now customers can get that certificate directly from CloudFlare, for free.
This certificate is only used to protect the traffic between the origin server and CloudFlare; it is never presented to browsers. For now you should only use it behind orange-clouded sites on CloudFlare.
If you are a CloudFlare customer and want to sign up for the beta, just send an email to firstname.lastname@example.org with the following:
- A certificate signing request (CSR)
- The domain name of the orange-clouded zone you want to install the certificate on
The first ten brave beta customers will get a shiny new certificate to install on their web server. Note: do not send your private key to CloudFlare, only the CSR is needed.
Update: The beta is full! Thanks to those who are participating.
CloudFlare’s Origin Certificate Authority
In order to grant certificates to customer origins, CloudFlare had to create its own Certificate Authority. This consists of a set of processes and systems to validate certificate requests and create new certificates. For the Origin CA, CloudFlare created a private key and certificate for the specific purpose of signing certificates for origin servers.
The certificate authority software we use is CFSSL, our open source PKI toolkit written in Go. It allows us to validate CSRs and use them to create new certificates for sites. These certificates are signed with our certificate authority private key, and validated when CloudFlare connects to the origin in Strict SSL mode.
In collaboration with other members of the industry (such as Richard Barnes from the Let's Encrypt project), we have updated CFSSL with several new features that help make it a viable certificate authority tool. These include PKCS#11 support, which makes it possible for CFSSL to use a Hardware Security Module (HSM) to store private keys, and OCSP support, which lets CFSSL answer questions about the revocation status of a certificate.
CAs are supposed to only give certificates to sites that own the domain(s) listed in the certificate. Domain validation is usually done in one of three ways:
- Putting a challenge in the DNS zone
- Putting a challenge into a meta-tag of an HTML page hosted on the domain
- Sending an email challenge to the domain registrant listed in the WHOIS database
Since CloudFlare is both a content delivery network and a DNS provider, both DNS and HTML validation can be done by CloudFlare on behalf of the site. If your site is on CloudFlare and orange-clouded, we will give you a certificate for your site.
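As an illustration of what DNS-based validation boils down to, here is a minimal Go sketch. The "_ca-challenge" record name and the token are hypothetical, not the Origin CA's actual scheme:

```go
package main

import (
	"fmt"
	"log"
	"net"
)

// validateByDNS checks that the domain owner published the expected
// challenge token in a TXT record. The record name "_ca-challenge" and
// the token format are hypothetical; each CA defines its own scheme.
func validateByDNS(domain, token string) bool {
	records, err := net.LookupTXT("_ca-challenge." + domain)
	if err != nil {
		log.Printf("TXT lookup failed: %v", err)
		return false
	}
	for _, r := range records {
		if r == token {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(validateByDNS("mysite.com", "d41d8cd98f00b204"))
}
```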
The CloudFlare Origin CA is currently not trusted by browsers, so these certificates should not be used on sites that are not behind CloudFlare. To issue certificates that are trusted by browsers, we would have to convince a publicly trusted certificate authority to cross-sign our CA certificate. That is not necessary in this case, since it is CloudFlare that determines which certificates to trust, and the Origin CA is on our list.
Bonus: How to create Certificate Signing Requests
The certificate signing request (CSR) is the standard mechanism for obtaining a certificate from a certificate authority. It contains a public key and some metadata, such as which domain it is for, and is digitally signed by the requester's private key. That signature proves to CloudFlare that you hold the corresponding private key.
Creating a CSR and private key with CFSSL
CFSSL is not only a tool for running a CA; it can also be used to create CSRs. Following these instructions will get you a private key and a CSR to submit to a certificate authority.
1) Install Go:
2) Install CFSSL
$ go get github.com/cloudflare/cfssl/cmd/...
3) Create a CSR template
Use the following template for csr.json, replacing "mysite.com" with your site's domain name and the names fields with your company's information; the structure shown here follows CFSSL's standard CSR format:

{
  "hosts": ["mysite.com", "www.mysite.com"],
  "key": { "algo": "rsa", "size": 2048 },
  "names": [{
    "L": "San Francisco",
    "O": "My Company, Inc.",
    "OU": "My Company's IT Department"
  }]
}
4) Create the certificate
$ cfssl genkey csr.json | cfssljson -bare site
This creates two files:
- site.csr: your CSR
- site-key.pem: your private key
If CFSSL is not working for you, here are some more resources for creating CSRs:
In the future we plan on releasing tools to make certificate generation even easier and more automatic.
TLS Session Resumption: Full-speed and Secure | 24-02-15
At CloudFlare, making web sites faster and safer at scale is always a driving force for innovation. We introduced “Universal SSL” to dramatically increase the size of the encrypted web. In order for that to happen we knew we needed to efficiently handle large volumes of HTTPS traffic, and give end users the fastest possible performance.
CC BY 2.0 image by ecos systems
In this article, I’ll explain how we added speed to Universal SSL with session resumption across multiple hosts, and explain the design decisions we made in this process. Currently, we use two standardized session resumption mechanisms that require two different data-sharing designs: session IDs (RFC 5246) and session tickets (RFC 5077).
Session ID Resumption
Resuming an encrypted session through a session ID means that the server keeps track of recently negotiated sessions using unique session IDs. This is done so that when a client reconnects to a server with a session ID, the server can quickly look up the session keys and resume the encrypted communication.
At each of CloudFlare’s PoPs (Points of Presence) there are multiple hosts handling HTTPS traffic. When a client attempts to resume a TLS connection with a web site, there is no guarantee that it will connect to the same physical machine that it connected to previously. Without session sharing, the success rate of session ID resumption could be as low as 1/n (when there are n hosts). That means the more hosts we have, the less likely it is that a session can be resumed. This goes directly against our goal of scaling SSL performance!
CloudFlare’s solution to this problem is to share the sessions within the PoP, making the successful resumption rate approach 100%.
How sessions are shared
We employ a memcached cluster to cache all the recently negotiated sessions from all the hosts within the same PoP. To enhance the secrecy and security of session keys, all cached sessions are encrypted. When a new session with a session ID is negotiated, a host will encrypt the new session and insert it into memcached, indexed by the session ID. When a host needs to look up a session for session resumption, it will query memcached using the session ID as the key and decrypt the cached session to resume it. All these operations happen as non-blocking asynchronous calls thanks to the power of OpenResty, and many handy OpenResty modules such as the fully asynchronous memcached client. We also needed tweaks in OpenSSL to support asynchronous session caching.
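The production implementation lives in OpenResty/nginx as described above; as a rough illustration of the pattern, here is a Go sketch of an encrypted, memcached-backed session store using the github.com/bradfitz/gomemcache client (the type and method names are ours, for illustration only):

```go
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"encoding/hex"
	"errors"
	"io"

	"github.com/bradfitz/gomemcache/memcache"
)

// sessionCache shares TLS sessions between all hosts in a PoP via
// memcached. Cached sessions are encrypted under a key known only to
// the servers, so memcached never holds session secrets in the clear.
type sessionCache struct {
	mc   *memcache.Client
	aead cipher.AEAD
}

func newSessionCache(addr string, key [32]byte) (*sessionCache, error) {
	block, err := aes.NewCipher(key[:])
	if err != nil {
		return nil, err
	}
	aead, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	return &sessionCache{mc: memcache.New(addr), aead: aead}, nil
}

// Store encrypts a serialized session and inserts it, indexed by its
// session ID, with an 18-hour expiry (matching the session timeout).
func (c *sessionCache) Store(sessionID, session []byte) error {
	nonce := make([]byte, c.aead.NonceSize())
	if _, err := io.ReadFull(rand.Reader, nonce); err != nil {
		return err
	}
	blob := append(nonce, c.aead.Seal(nil, nonce, session, nil)...)
	return c.mc.Set(&memcache.Item{
		Key:        hex.EncodeToString(sessionID),
		Value:      blob,
		Expiration: 18 * 60 * 60,
	})
}

// Load looks a session up by ID and decrypts it, so any host in the
// PoP can resume a session negotiated by any other host.
func (c *sessionCache) Load(sessionID []byte) ([]byte, error) {
	item, err := c.mc.Get(hex.EncodeToString(sessionID))
	if err != nil {
		return nil, err
	}
	ns := c.aead.NonceSize()
	if len(item.Value) < ns {
		return nil, errors.New("corrupt cached session")
	}
	return c.aead.Open(nil, item.Value[:ns], item.Value[ns:], nil)
}
```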
I’d like to send a few shout-outs to my amazing colleagues Piotr Sikora and Yichun Zhang for making this project possible.
Using OpenSSL’s s_client utility, we can quickly test how a session ID speeds up the TLS connection from the client side (for example, openssl s_client -connect www.cloudflare.com:443 -reconnect performs a full handshake and then reconnects several times reusing the session ID). We tested the TLS performance of www.cloudflare.com from our office, and the result is shown below:
The overall cost of a session resumption is less than 50% of a full TLS handshake, mainly because session resumption only costs one round-trip while a full TLS handshake requires two. Moreover, a session resumption does not require any large finite field arithmetic (new sessions do), so the CPU cost for the client is almost negligible compared to that in a full TLS handshake. For mobile users, the performance improvement by session resumption means a much more reactive and battery-life-friendly surfing experience.
Session Ticket Resumption
Session resumption with session IDs has a major limitation: servers are responsible for remembering negotiated TLS sessions for a given period of time. This poses scalability issues for servers handling a large volume of concurrent connections per second and for servers that want to cache sessions for a long time. Session ticket resumption is designed to address this issue.
The idea is simple: outsource session storage to clients. A session ticket is a blob of a session key and associated information, encrypted by a key known only to the server. The ticket is sent by the server at the end of the TLS handshake. Clients supporting session tickets will cache the ticket along with the current session key information. Later, the client includes the session ticket in the handshake message to indicate it wishes to resume the earlier session. The server on the other end will be able to decrypt this ticket, recover the session key, and resume the session.
Now suppose every host in the same PoP uses the same ticket encryption key. The good news is that every host is able to decrypt the session ticket and resume the session for the client. The not-so-good news is that this key becomes a critical single point of failure for TLS security: if an adversary gets hold of it, the session key information is exposed for every session ticket! Even after the lifetime of a session ticket, such a loss would invalidate the supposed “perfect forward secrecy” (as evangelized here on our blog). Therefore, it is important to:
“generate session ticket keys randomly, distribute them to the servers without ever touching persistent storage and rotate them frequently.”
How session encryption keys are encrypted, shared and rotated
To meet all these security goals, we first start an in-memory key generator daemon that generates a fresh, timestamped key every hour. Keys are encrypted so that only our nginx servers can decrypt them. Then with CloudFlare’s existing secure data propagation infrastructure, ticket keys replicate from one master instance to all of our PoPs around the world. Each host periodically queries the local copy of the database through a memcached interface for fresh encryption keys for the current hour. To summarize, the key generation daemon generates keys randomly and rotates them hourly, and keys are distributed to all hosts across the globe securely without being written to disk.
There are some technical details still worth mentioning. First, we need to tackle distributed clock synchronization. For example, one host might think it is 12:01pm UTC while other hosts still think it is 11:59am UTC; the faster-clock host might start encrypting session tickets with the key for 12:00pm while other hosts cannot decrypt those tickets because they don’t know the new key yet. Or the fast-clock host might find the key is not yet available due to propagation delay. Instead of dedicating effort to synchronization, we solve the problem by removing the synchronization requirement: the key daemon generates keys one hour ahead, and each host opportunistically saves the key for the next hour (if there is one) as a decryption-only key. Now, even with one or more faster-clock hosts, session resumption by ticket still works without interruption, because every host can decrypt session tickets encrypted by any other host.
Also, we set the session ticket lifetime hint to 18 hours, the same value as the SSL session timeout. Each server also keeps the ticket keys from the past 18 hours for ticket decryption.
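Here is a simplified Go sketch of that key schedule (an illustration, not our production daemon): keys are indexed by the hour they cover, hosts encrypt with the current hour's key, and the next hour's key plus the previous 18 hours' keys remain available for decryption:

```go
package main

import (
	"errors"
	"sync"
	"time"
)

// ticketKeys holds session ticket keys indexed by the hour they cover.
// In production, fresh keys arrive one hour ahead of use via a secure
// distribution channel; AddKey stands in for that channel here.
type ticketKeys struct {
	mu   sync.RWMutex
	keys map[int64][32]byte // Unix-hour -> key
}

func newTicketKeys() *ticketKeys {
	return &ticketKeys{keys: make(map[int64][32]byte)}
}

func (t *ticketKeys) AddKey(hour int64, key [32]byte) {
	t.mu.Lock()
	defer t.mu.Unlock()
	t.keys[hour] = key
}

// EncryptionKey returns the key for the current hour. A host with a
// slightly fast clock simply starts using the next hour's key early;
// its peers can still decrypt because they already hold that key too.
func (t *ticketKeys) EncryptionKey() ([32]byte, error) {
	hour := time.Now().Unix() / 3600
	t.mu.RLock()
	defer t.mu.RUnlock()
	k, ok := t.keys[hour]
	if !ok {
		return [32]byte{}, errors.New("no ticket key for the current hour")
	}
	return k, nil
}

// DecryptionKeys returns the next hour's key plus the previous 18
// hours of keys, matching the 18-hour ticket lifetime hint.
func (t *ticketKeys) DecryptionKeys() [][32]byte {
	hour := time.Now().Unix() / 3600
	t.mu.RLock()
	defer t.mu.RUnlock()
	var out [][32]byte
	for h := hour + 1; h >= hour-18; h-- {
		if k, ok := t.keys[h]; ok {
			out = append(out, k)
		}
	}
	return out
}
```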
To summarize, we support TLS session resumption globally using both session IDs and session tickets. For any web site on CloudFlare’s network, HTTPS performance has been made faster for every user and every device.
Do the ChaCha: better mobile performance with cryptography | 23-02-15
CC BY-ND 2.0 image by Clinton Steeds
CloudFlare is always trying to improve customer experience by adopting the latest and best web technologies so that our customers (and their visitors) have a fast and a secure web browsing experience.
More and more web sites are now using HTTPS by default. This sea change has been spearheaded by many groups including CloudFlare enabling free SSL for millions of sites with Universal SSL, Google moving towards marking plain HTTP as insecure in Chrome, and the Let’s Encrypt project’s plans to make certificates free in 2015.
Not only is the encrypted web more secure, it can also be faster than the unencrypted web if the latest HTTPS features are implemented. HTTPS sites are blazing fast on CloudFlare because we keep up with the latest performance-enhancing features:
- SPDY 3.1 is on by default for all customers. SPDY achieves faster-than-HTTP download speeds through multiplexing
- OCSP stapling: faster revocation checking.
- Optimized certificate bundles using CFSSL, our open source SSL bundler: an optimized certificate chain provides faster validation of certificates in the browser
- ECDSA certificates for all free customers with Universal SSL: smaller certificates with smaller keys result in faster connection establishment times
- Global session ticket resumption for faster session resumptions on globally load balanced servers: connections to sites you have already visited are jump-started requiring one less round-trip to resume
Today we are adding a new feature — actually a new form of encryption — that improves mobile performance: ChaCha20-Poly1305 cipher suites. Until today, Google services were the only major sites on the Internet that supported this new algorithm. Now all sites on CloudFlare support it, too. This means mobile browsers get a better experience when visiting sites using CloudFlare.
As of the launch today (February 23, 2015), nearly 10% of HTTPS connections to CloudFlare use the new cipher suites. The following graph shows the uptick when we turned ChaCha20/Poly1305 on globally:
TLS to the max
The protocol for encrypting HTTPS connections is called Transport Layer Security (TLS). One of the nice features of TLS is that new encryption algorithms or ciphers can be proposed and added to the specification.
As we described in our introduction to TLS, there are several components to a TLS cipher suite. There is one algorithm for each of the following:
- key establishment (typically a Diffie-Hellman variant or RSA)
- authentication (the certificate type)
- confidentiality (a symmetric cipher)
- integrity (a hash function)
The new cipher suites we have added include a new symmetric cipher used for the encryption of data (based on the ChaCha20 and Poly1305 algorithms). There are no secure encryption algorithms optimized for mobile browsers and APIs in TLS right now—these new ciphers fill that gap.
There are two types of ciphers typically used to encrypt data with TLS: block ciphers and stream ciphers. In a block cipher, the data is broken up into chunks of a fixed size and each block is encrypted. In a stream cipher, the data is encrypted one byte at a time. Both types of ciphers have their advantages: block ciphers are generally fast in hardware and somewhat slow in software, while stream ciphers often have fast software implementations.
TLS has a secure block cipher, AES, that has been implemented in hardware and is generally very fast. One current problem with TLS is that there is no secure choice of stream cipher. The de facto stream cipher for TLS is RC4, which has been shown to have biases and is no longer considered secure.
AES is a fine cipher to use on most modern computers. Intel processors since Westmere in 2010 come with AES hardware support that makes AES operations effectively free. This makes it an ideal cipher choice for both our servers and for web visitors using modern desktop and laptop computers. It’s not ideal for older computers and mobile devices. Phones and tablets don’t typically have cryptographic hardware for AES and are therefore required to use software implementations of ciphers. The AES-GCM cipher can be particularly costly when implemented in software. This is less than optimal on devices where every processor cycle can cost you precious battery life. A low-cost stream cipher would be ideal for these mobile devices, but the only option (RC4) is no longer secure.
In order to provide a battery-friendly alternative to AES for mobile devices, several engineers from Google set out to find and implement a fast and secure stream cipher to add to TLS. Their choice — ChaCha20-Poly1305 — was included in Chrome 31 in November 2013, and Chrome for Android and iOS at the end of April 2014.
Having the option to choose a secure stream cipher in TLS is a good thing for mobile performance. Adding cipher diversity is also good insurance. If someone finds a flaw in one of the AES-based cipher suites sometime in the future, it gives a safe and fast option to fall back to.
We previously spoke about the relative strength of different types of cryptography. Some keys are stronger than others, and when using new algorithms, the keys have to be chosen with the appropriate cryptographic strength. These new cipher suites are even more secure than the best standard choices.
The new cipher suites make use of two algorithms: ChaCha20, a stream cipher, and Poly1305, a message authenticator. Both of these cryptographic primitives were invented by Professor Dan Bernstein (djb), in 2008 and 2005 respectively. They have been thoroughly vetted by academia and battle-tested in Chrome for over a year.
From the IETF internet draft:
The ChaCha20 cipher is designed to provide 256-bit security.
The Poly1305 authenticator is designed to ensure that forged messages are rejected with a probability of 1-(n/(2^102)) for a 16n-byte message, even after sending 2^64 legitimate messages, so it is SUF-CMA in the terminology of [AE](http://cseweb.ucsd.edu/~mihir/papers/oem.html).
In sum, the security level is more than sufficient for HTTPS. CloudFlare’s AES-GCM cipher provides around 128 bits of security, which is considered more than enough to future-proof communication. ChaCha20 goes far beyond that, providing 256 bits of security.
Poly1305 provides authentication, protecting TLS against attackers inserting fake messages into a secure stream. Poly1305’s key strength is considered strong enough to stop this attack, providing around 100 bits of security. Authentication in TLS is slightly less important than encryption because even if an attacker can add a fake message to the stream, they can’t read the information inside without breaking the encryption key.
ChaCha20-Poly1305 also uses the current recommended construction for combining encryption and authentication: it’s built as an Authenticated Encryption with Associated Data (AEAD) construction. AEAD is a way of combining a cipher and an authenticator to get the combined properties of encryption and authentication. Previously this was done with two different algorithms, typically a block cipher and an HMAC. Authenticated encryption makes it impossible to decrypt a ciphertext out of order, which helps rule out a whole class of problems including BEAST, Lucky 13, and POODLE. AEAD also makes the age-old discussion of MAC-then-encrypt vs. encrypt-then-MAC obsolete by combining the two in the same operation. Our other preferred TLS 1.2 encryption algorithm, AES-GCM, is also an AEAD.
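To make the AEAD interface concrete, here is a short Go example using the golang.org/x/crypto/chacha20poly1305 package, which implements the finalized RFC variant of the construction rather than the draft deployed here. Seal encrypts and authenticates in a single call, and tampering with even one bit makes Open fail:

```go
package main

import (
	"crypto/rand"
	"fmt"

	"golang.org/x/crypto/chacha20poly1305"
)

func main() {
	// A 256-bit key; in TLS this would be derived during the handshake.
	key := make([]byte, chacha20poly1305.KeySize)
	if _, err := rand.Read(key); err != nil {
		panic(err)
	}
	aead, err := chacha20poly1305.New(key)
	if err != nil {
		panic(err)
	}

	nonce := make([]byte, aead.NonceSize())
	if _, err := rand.Read(nonce); err != nil {
		panic(err)
	}

	// Seal encrypts and authenticates in one operation; in TLS the
	// record header goes in the additional-data argument (nil here).
	ciphertext := aead.Seal(nil, nonce, []byte("attack at dawn"), nil)

	// Flip one bit: authentication, and therefore Open, must fail.
	ciphertext[0] ^= 0x01
	if _, err := aead.Open(nil, nonce, ciphertext, nil); err != nil {
		fmt.Println("forgery rejected:", err)
	}
}
```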
The new cipher suites are fast. As Adam Langley described, ChaCha20-Poly1305 is three times faster than AES-128-GCM on mobile devices. Spending less time on decryption means faster page rendering and better battery life. Although the cipher part of TLS may not be the biggest source of battery consumption (the handshake is more expensive (PDF)), spending fewer CPU cycles on encryption saves battery life, especially on large files.
For example: decrypting a 1MB file on the Galaxy Nexus (OMAP 4460 chip):
The difference is more dramatic on less powerful Android phones and old iPhones running Chrome. There is also a comparable difference on pre-Sandy Bridge and low-powered Intel CPUs. With ChaCha/Poly, older computers and mobile devices spend less time and computational power on decryption.
On desktop computers with hardware AES support, AES-128-GCM is still the faster choice. CloudFlare is able to intelligently choose whether to use AES or ChaCha/Poly for different clients based on the client’s advertised cipher preference. For recent Intel processors, we use the standard AES-GCM algorithm. For browsers on machines that do not have a hardware AES chip, we prefer ChaCha20-Poly1305.
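CloudFlare implements this selection inside its patched OpenSSL/nginx stack; as an illustration of the idea, a server can inspect the order of cipher suites in the ClientHello and prefer whichever family the client listed first. A Go sketch (note that recent versions of Go's crypto/tls perform a similar prioritization automatically):

```go
package main

import "crypto/tls"

// chachaFirst reports whether the client advertised ChaCha20-Poly1305
// ahead of AES-GCM, the hint that it lacks AES hardware.
func chachaFirst(hello *tls.ClientHelloInfo) bool {
	for _, s := range hello.CipherSuites {
		switch s {
		case tls.TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,
			tls.TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305:
			return true
		case tls.TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,
			tls.TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256:
			return false
		}
	}
	return false
}

// configFor returns a TLS config whose server-side cipher preference
// honors the client's hint, chosen per connection.
func configFor(base *tls.Config) *tls.Config {
	cfg := base.Clone()
	cfg.PreferServerCipherSuites = true // automatic in recent Go versions
	cfg.GetConfigForClient = func(hello *tls.ClientHelloInfo) (*tls.Config, error) {
		c := cfg.Clone()
		if chachaFirst(hello) {
			c.CipherSuites = []uint16{
				tls.TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,
				tls.TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,
			}
		} else {
			c.CipherSuites = []uint16{
				tls.TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,
				tls.TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,
			}
		}
		return c, nil
	}
	return cfg
}
```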
In order to support over a million HTTPS sites on our servers, we have to make sure CPU usage is low. To help improve performance we are using an open source assembly code version of ChaCha/Poly by CloudFlare engineer Vlad Krasnov and others that has been optimized for our servers’ Intel CPUs. This keeps the cost of encrypting data with this new cipher to a minimum.
Browser support and future directions
As of the most recent browser statistics, Chrome has over a third of the web browser market, making this change significant for a large number of users. Although ChaCha20-Poly1305 is a Chrome-only feature for now, it could gain even more widespread adoption soon. Mozilla is planning to add support for ChaCha20-Poly1305 in Firefox, although this might take a while to complete. Apple also has a pending ticket tracking the implementation on iOS, although it is unlikely to be completed since new 64-bit ARM processors (like the ones in the iPhone 5s and later) support AES in hardware.
One thing to note is that the version of ChaCha/Poly implemented by both CloudFlare and Chrome is not the final version that will be standardized by the IETF. A more recent draft with slight modifications has been published and is on the road to standardization. We plan on adopting this new version of the cipher once it has been finalized.
ChaCha20-Poly1305 is a new cipher with a useful purpose: it improves performance for browsers in constrained environments. At the very least, it provides algorithm agility in case someone finds a serious flaw in AES-GCM, which is possible due to its fragility. In the future we plan to keep on adding the latest and best TLS features for our customers. You can track our configuration as it changes on Github.
If you want to enable ChaCha/Poly on your web server, we have included the patch for OpenSSL 1.0.2 here.
End of the road for RC4 | 23-02-15
Today, we completely disabled the RC4 encryption algorithm for all SSL/TLS connections to CloudFlare sites. It's no longer possible to connect to any site that uses CloudFlare using RC4.
Over a year ago, we disabled RC4 for TLS 1.1 and above because more secure algorithms were available. In May 2014, we deprecated RC4 by moving it to the lowest priority in our list of cipher suites. That forced any browser that had a good alternative to RC4 to use that alternative instead. Those two changes meant that almost everyone who was using RC4 to connect to CloudFlare sites switched to a more secure cipher.
Back in May, we noted that some people still needed RC4, particularly people using old mobile phones and some Windows XP users. At the time, 4% of requests using RC4 came from a single phone type: the Nokia 6120.
At the time, we noted that roughly 0.000002% of requests to CloudFlare were using the RC4 cipher. In the last 9 months, that number has halved, and so, although some people are still using RC4, we have decided to turn off the cipher. It's simply no longer secure.
The remaining users are almost all on old phones and Windows XP (those two groups make up 80% of the RC4-based requests). But we are still seeing some connections from SSL-intercepting proxy software that's using RC4. To repeat what we said in May:
Digging into the User-Agent data for the US, we see the following web browser being used to access CloudFlare-powered sites using RC4:
Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/34.0.1847.137 Safari/537.36
That's the most recent version of Google Chrome running on Windows 7 (you can see the presence of Windows 7 in the chart above). That should not be using RC4. In fact, most of the connections from Windows machines that we see using RC4 should not be (since we prioritize 3DES over RC4 for older machines).
It was initially unclear why this was happening until we looked at where the connections were coming from. They were concentrated in the US and Brazil and most seemed to be coming from IP addresses used by schools, hospitals, and other large institutions.
Although the desktop machines in these locations have recent Windows and up to date browsers (which will not use RC4) the networks they are on are using SSL-based VPNs or firewalls that are performing man-in-the-middle monitoring of SSL connections.
This enables them to filter out undesirable sites, even those that are accessed using HTTPS, but it appears that the VPN/firewall software is using older cipher suites. That software likely needs updating to stop it from using RC4 for secure connections.
Since May, that situation has remained largely unchanged: there are some institutions doing SSL-interception (probably for IDS or policy enforcement reasons) that use RC4 for outbound connections, and many apparent individuals running software that does the same.
We've been continually tracking what's happening in the academic community around RC4 attacks and the slow death of RC4 as people switch from old devices to newer ones.
With both a decline in RC4 connections to CloudFlare and whispers of another, easier attack on RC4 in the academic community, we've decided the time is right to disable RC4 completely.
SSL Week Means Less Weak SSL | 23-02-15
I'm excited to announce that today kicks off SSL Week at CloudFlare. Over the course of this week, we'll make a series of announcements on what we're doing to improve encryption on the Internet.
For encryption to be most effective, it has to meet three criteria: 1) it needs to be easy and inexpensive to use; 2) it needs to be fast so it doesn't tax performance; and 3) it needs to stay up to date and ahead of the latest vulnerabilities.
Easy, Fast & Secure
Throughout CloudFlare's history, these priorities have guided our approach to encryption. Last September, we announced Universal SSL and brought world class encryption to every CloudFlare customer, even those on our Free service plan. While that effort doubled the size of the encrypted web, our work is far from done. This week we're announcing a series of initiatives that further our efforts to ensure we provide the easiest, fastest, and most secure encryption.
While Universal SSL made it easy to ensure that the connection from a device to CloudFlare was secure, this week we're going to begin the process of making it easy (and free) to ensure the connection from CloudFlare back to the origin is secure as well. Beyond just encrypting the connection to the origin, we will also roll out a way to cryptographically ensure that the connection to the origin is, in fact, coming from CloudFlare's network.
In the last six months, research into cipher suites has continued at a torrid pace. The good news is that new ciphers that perform particularly well on mobile devices have started to become standardized. The bad news is that some of the older ciphers that were previously standard appear increasingly likely to be broken. This week we'll therefore be adding support for a fast new cipher while deprecating support for a cipher in which we no longer have faith.
We have a number of other surprises in store to help build a better, safer Internet. Stay tuned, we're confident SSL Week will help ensure SSL is anything but weak.