Mirage 2.0: Solving the Mobile Browsing Speed Challenge | 13-06-13
Almost exactly a year ago, CloudFlare announced a feature called Mirage. Mirage was designed to make the loading of images faster in two primary ways: 1) deliver smaller images for devices with smaller screens; and 2) "lazy load" images only when they appeared in the viewport. Both of these optimizations were designed primarily to accelerate web performance on mobile devices.
Mobile devices present a number of challenges to delivering fast web performance. Because they rely on radio networks, the bandwidth to a mobile phone or tablet is often slow. However, the problem isn't limited to slow bandwidth: mobile connections are much more likely to experience "loss." To optimize for mobile performance, you need to prioritize the most important data and download it first, and you need to minimize the number of individual connections in order to limit the impact of packet loss.
The first version of Mirage was designed to accomplish these goals, but it was relatively naive in the way that it did it. We would store multiple versions of images, which make up the bulk of the data transferred for most websites, and then attempt to deliver the one that best matched the screen size. The problem was that the new versions of the images often weren't perfectly matched for the layout of the page or the size of the screen, especially if the page relied on the image's actual dimensions rather than including dimensions in the tag.
For the last year, we've studied sites using Mirage and taken what we've learned to refine and improve every aspect of the feature. Today we're excited to announce Mirage 2.0 which is designed from the ground up to solve the mobile browsing speed challenge.
Mirage 2.0 starts with the idea of image virtualization. When CloudFlare caches an image on our network for a site with Mirage 2.0 enabled, we store two versions. The first is the full-resolution image; the second is a virtualized image that includes metadata about the full-resolution image's dimensions but with the image data itself massively reduced in size. The reduced version is typically as little as 1% of the size of the full-resolution image.
If you enable Mirage 2.0, CloudFlare's network modifies the image tags on your page on the fly so they can be loaded by the Virtualized Image Packager ("VIP"). In parallel with the HTML of your page loading, the Mirage 2.0 VIP begins downloading the virtualized images that appear on the page. The VIP will virtualize images served from your own domain as well as images served from third party domains (e.g., Flickr or Imgur). Because the virtualized images have the full-resolution image's dimensions embedded as meta data, the VIP is able to place the images into the browser's DOM correctly sized so the browser can almost immediately begin the process of rendering the page.
After the page is rendered with the virtualized images, the VIP begins to replace them with the full-resolution versions. Since the images are already correctly sized for their tags on the page, the browser does not need to reflow the page as the full-resolution versions are loaded. The VIP prioritizes which full-resolution images to load first based on which images are in the browser's viewport. Visually, images appear to "rez" in, starting as low quality and then coming into sharp focus, similar to how a progressive JPEG loads in a browser.
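To make the mechanics concrete, here is a rough client-side sketch of that swap step. This is an illustration only, not the actual VIP: the data attribute name is invented, and the real implementation handles many more cases.

    // Placeholders already carry the full-resolution URL in a data attribute
    // (attribute name invented for this sketch) and have width/height set from
    // the virtualized image's metadata, so swapping sources never reflows the page.
    function upgradeImages(): void {
      const placeholders = Array.from(
        document.querySelectorAll<HTMLImageElement>("img[data-full-src]")
      );

      // Load images that are currently in the viewport before everything else.
      const inViewport = (img: HTMLImageElement): boolean => {
        const r = img.getBoundingClientRect();
        return r.bottom > 0 && r.top < window.innerHeight;
      };
      placeholders.sort((a, b) => Number(inViewport(b)) - Number(inViewport(a)));

      for (const img of placeholders) {
        const full = new Image();
        full.onload = () => { img.src = full.src; };
        full.src = img.dataset.fullSrc as string;
      }
    }

    document.addEventListener("DOMContentLoaded", upgradeImages);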
While you can enable CloudFlare features such as Polish in order to optimize your images, by default Mirage 2.0 does not transcode or otherwise alter the original full-resolution images. The VIP will pull third party content directly from the original servers without passing through CloudFlare's network -- unless, of course, the third party is also using CloudFlare.
With Mirage 2.0, we've also completely rethought how we detect different browsers and respond to their capabilities. Mirage 2.0 is optimized to be more or less aggressive depending on the capabilities of the browser as well as its connection to the Internet. An iPhone connecting to the web over a wifi network is optimized for different loading priorities than the same device connecting over a cellular network. We even detect the different download speeds of cellular networks from LTE to 3G to Edge and optimize for each connection speed appropriately.
Mirage 2.0 gathers real browsing intelligence from all its connections, which we then use to further optimize the VIP's performance. As more sites enable Mirage 2.0, CloudFlare's systems automatically begin to optimize for the fastest possible browsing experience from any device on any network. In other words, the same way we use data about security threats in order to protect the sites on our network, we are now using data about real users' browsers around the world in order to ensure everyone on the CloudFlare network has the fastest possible site.
Reviews Are In
We've been testing Mirage 2.0 on some of our most image heavy sites that get significant traffic from mobile browsers. The reaction has been terrific: "As one of the largest image sharing sites in the world, speed has always been really important to us," explained Alan Schaaf, founder and CEO of Imgur. "We've invested a lot of time into getting images to load as fast as possible over mobile networks, especially since we've been developing our mobile app, and we've seen great improvements with Mirage 2.0. We're really happy that CloudFlare continues to launch innovative products to ensure pages on Imgur.com load as fast as possible."
You can see Mirage 2.0 in action for yourself in the following video:
Mirage 2.0 is currently in beta and will be made available over the next few weeks to all paid CloudFlare accounts, including our lowest level PRO accounts which are priced at only $20/month. Mirage 2.0 will fully replace the original version of Mirage in the following months and users with the old Mirage enabled will be upgraded to the newer, better version. Given the importance of mobile browsing, and the massive performance benefit Mirage 2.0 delivers with a single click, we think it is one of the most compelling features we've ever offered. Give it a try and let us know what you think.
CloudFlare will be at HostingCon 2013 in Full Force | 12-06-13
The CloudFlare team will be at HostingCon 2013 in Austin next week. This is our third year at the show and we have a lot of things in store for partners.
Here's a sneak peek:
- Complimentary limousine transfers from Austin-Bergstrom International Airport to the Hilton Austin hotel on Sunday, June 16th. Reserve your spot today!
- New CloudFlare t-shirts
- Live music to supercharge your day during breakfast each morning
- Charging stations at our booth (#523) to keep your devices supercharged
- Bigger and better Nerf Railguns. Quantities are limited, so be sure to visit us at booth #523 to get your Railgun
CloudFlare Railguns ready for HostingCon 2013
We are looking forward to connecting with our current partners and meeting new partners at the show. If you are already a CloudFlare Certified Partner, be sure to stop by and introduce yourself. If you are not a partner yet, stop by to learn more about how CloudFlare can reduce your server load, improve the performance of your network, block spammers, botnets and other web threats, and provide DDoS protection. More details about the CloudFlare Certified Partner program are available here.
Here's where the CloudFlare team will be all week:
Sunday, June 16th
Limo transfers from Austin-Bergstrom International Airport to the Hilton Austin Hotel.
Registration is still open, reserve your spot now!
Monday, June 17th
- 7:45am-8:45am: CloudFlare sponsored breakfast located in the Level 4, Ballroom D Foyer - Live music by Alternator Jones
- 5:00pm onwards: Come find the CloudFlare team at the welcome reception!
Tuesday, June 18th
- 7:45am-8:45am: CloudFlare sponsored breakfast located in the Level 4, Ballroom D Foyer - Live music by Jackie Venson
- 12:00pm-4:00pm: CloudFlare is in Exhibit Hall 4 at booth #523
- 4:00pm-6:30pm: Visit our booth during the exhibit hall happy hour for a beverage and to supercharge your mobile phone!
Wednesday, June 19th
- 8:00am-10:00am: CloudFlare sponsored breakfast located in the Level 4, Ballroom D Foyer - Live music by Sean Evan
- 9:00am-9:45am: Our co-founder and CEO Matthew Prince will be speaking on the IPv6 panel discussion "Now is the Time for IPv6" in room #18D
- 9:00am-9:45am: Maria Karaivanova and John Roberts from CloudFlare will be co-hosting a talk on partnerships, "Strategies for Successful Partnerships" in room #16
- 12:00pm-4:00pm: CloudFlare is in Exhibit Hall 4 at booth #523
Connect with us on Twitter during the event to find out where we are and what's coming up next:
#hostingcon, @hostingcon, @CloudFlare
See you in Austin!
CloudFlare, PRISM, and Securing SSL Ciphers | 12-06-13
Over the last week we've closely watched the disclosures about the
alleged NSA PRISM program. At CloudFlare, we have never been approached
to participate in PRISM or any other similar program. We do, from time
to time, receive subpoenas and court orders. A human being on our team
reviews each of these requests manually. When we determine that a
request is too broad, we push back to limit the scope of the request.
Whenever possible, we disclose to all affected customers the fact that
we have received a subpoena or court order and allow them an opportunity
to challenge it before we respond.
One of the ways we limit the scope of orders we receive is by limiting
the data we store. I have written before about how CloudFlare limits what we log
and purges most log data within a few hours. For example, we cannot
disclose the visitors to a particular website on CloudFlare because
we do not currently store that data.
To date, CloudFlare has never received an order from the Foreign
Intelligence Surveillance Act (FISA) court. Moreover, we believe that
due process requires court review of executive orders. As a policy, we
challenge any orders that have not been reviewed and approved by a
court. As part of these challenges, we always request the right to
disclose at least the fact that we received such an order but we are not
always granted that request.
While we understand the need for secrecy in some investigations, we are
troubled when laws limit our ability to acknowledge that we have even
received certain kinds of requests. CloudFlare fully supports the calls for transparency
today by other web companies like Google, Microsoft, and Facebook. At a
minimum, we request the law be updated to allow companies to disclose
the number of FISA orders and National Security Letters (NSLs) they
have received. We believe this is a modest request which does not harm
the integrity of legitimate investigations while allowing for an
informed public debate over the use of these measures.
As we set policy, one of our guiding principles is that we should
neither make the job of law enforcement easier, nor should we make it
harder, than it would have been if CloudFlare did not exist. If the NSA
is listening in on any transactions traversing our network, they are not
doing so with our blessing, consent, or knowledge.
Making Sense of PRISM
As we've followed the PRISM story, we've tried to reconcile how the
PRISM slides could be accurate while so many tech executives have denied
participation in the program. One theory that surfaced was that the NSA
had broken the private SSL keys of a select number of web giants. Our
theory was that this could explain how companies were added over time --
as their private SSL keys were cracked -- while their executives
wouldn't have any knowledge of what was happening.
Even the name of the program -- "PRISM" -- lent credence to this theory.
Prisms are often used with fiber optic cables in order to split the
light the cables carry into multiple copies. This is not new technology.
In 2006 in Room 641A of a data center in San Francisco, AT&T installed a beam splitter to siphon traffic from their optical network, reportedly at the request of the NSA.
SSL should protect these communications. However, with most SSL ciphers,
the private key remains the same for all sessions. As a result, if the
NSA were to record encrypted traffic, they could later break the SSL key
used to secure the traffic and then use the broken key to decrypt what
they previously recorded.
Breaking an SSL key is hard, but not impossible. Doing so is just a matter of computational power and time. For example, it is known that a 2009-era PC cranking away for about 73 days can reverse engineer a 512-bit key.
Each bit in a key's length doubles the effective computational power
needed to break the key. So, if the key were 513 bits long, you'd expect it to take the same modest PC 146 days to break the key. These tasks are highly parallelizable, so if you had two modest PCs you'd expect the time to break the 513-bit key to drop back down to 73 days again.
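As a back-of-the-envelope sketch of that naive scaling (illustrative only; it captures nothing beyond the doubling-per-bit and the parallel speed-up described above):

    // Days to break a key under the naive model: a 512-bit baseline of ~73 PC-days,
    // doubled for every additional bit and divided across the machines thrown at it.
    function naiveBreakDays(bits: number, machines: number, baseDays = 73): number {
      return (baseDays * Math.pow(2, bits - 512)) / machines;
    }

    console.log(naiveBreakDays(512, 1)); // 73
    console.log(naiveBreakDays(513, 1)); // 146
    console.log(naiveBreakDays(513, 2)); // 73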
(Note: this assumes a naive factorization algorithm. The state of the
art is to use the general number field sieve. This
reduces the rate of complexity growth to something that is sub-exponential.
This means if you know what you're doing the problem doesn't double in
difficulty with each additional bit.)
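For the curious, the textbook way to state the heuristic running time of the general number field sieve for factoring an integer n (a standard formula, not something from the paragraph above) is:

    \exp\!\Big( \big(\tfrac{64}{9}\big)^{1/3} (1 + o(1)) \, (\ln n)^{1/3} \, (\ln \ln n)^{2/3} \Big)

That grows far more slowly than the double-per-bit curve of the naive estimate, but it is still well beyond polynomial, which is why longer keys remain meaningfully harder to break.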
It is not inconceivable that the NSA has data centers full of
specialized hardware optimized for SSL key breaking. According to data
shared with us from a survey of SSL keys used by various websites, the
majority of web companies were using 1024-bit SSL ciphers and RSA-based
encryption through 2012. Given enough specialized hardware, it is within
the realm of possibility that the NSA could within a reasonable period
of time reverse engineer 1024-bit SSL keys for certain web companies. If
they'd been recording the traffic to these web companies, they could
then use the broken key to go back and decrypt all the transactions.
While this seems like a compelling theory, ultimately, we remain
skeptical this is how the PRISM program described in the slides actually
works. Cracking 1024-bit keys would be a big deal and likely involve
some cutting-edge cryptography and computational power, even for the
NSA. The largest SSL key that is known to have been broken to date is
768 bits long. While
that was 4 years ago, and the NSA undoubtedly has some of the best
cryptographers in the world, it's still a considerable distance from 768
bits to 1024 bits -- especially given the slide suggests Microsoft's key
would have had to be broken back in 2007.
Moreover, the slide showing the dates on which "collection began" for
various companies also puts the cost of the program at $20M/year. That
may sound like a lot of money, but it is not for an undertaking like
this. Just the power necessary to run the server farm needed to break a
1024-bit key would likely cost in excess of $20M/year. While the NSA may
have broken 1024-bit SSL keys as part of some other program, if the
slide is accurate and complete, we think it's highly unlikely they did
so as part of the PRISM program. A not particularly glamorous alternative
theory is that the NSA didn't break the SSL key but instead just cajoled
rogue employees at firms with access to the private keys -- whether the
companies themselves, partners they'd shared the keys with, or the
certificate authorities who issued the keys in the first place -- to turn
them over. That very well may be possible on a budget of $20M/year.
Making SSL More Secure
Today many web companies have largely transitioned from 1024-bit SSL to
significantly stronger 2048-bit keys. (Remember, for a naive algorithm,
each bit doubles the time it takes to break the key, so a 2048-bit key
isn't twice as strong, it is 2^1024 times as strong.) Based on the SSL
survey data, Twitter has led the way, shifting 100 percent of its HTTPS
traffic to a 2048-bit key in mid-2010. By the end of 2012, the following
websites had shifted approximately the percentage of requests shown in parentheses to 2048-bit SSL:
- outlook.com (100%)
- microsoft.com (98%)
- live.com (90%)
- skype.com (88%)
- apple.com (85%)
- yahoo.com (82%)
- bing.com (79%)
- hotmail.com (33%)
Facebook is the laggard of the bunch and today is still using a 1024-bit
key for all HTTPS requests.
Google is a notable anomaly. The company uses a 1024-bit key, but,
unlike all the other companies listed above, rather than using a default
cipher suite based on the RSA encryption algorithm, they instead prefer
the Elliptic Curve Diffie-Hellman Ephemeral (ECDHE) cipher suites.
Without going into the technical details, a key difference of ECDHE is
that a different private key is used for each user's session.
This means that if the NSA, or anyone else, is recording encrypted
traffic, they cannot break one private key and read all historical
transactions with Google. The NSA would have to break the private key
generated for each session, which, in Google's case, is unique to each
user and regenerated at least every 28 hours.
While ECDHE arguably already puts Google at the head of the pack for web
transaction security, to further augment security Google has publicly announced
that they will be increasing their key length to 2048-bit by the end of 2013.
Assuming the company continues to prefer the ECDHE cipher suites, this will
put Google at the cutting edge of web transaction security.
SSL on CloudFlare
There is good news in all of this. If you're using SSL on CloudFlare,
your site is already at this cutting edge. We issue 2048-bit keys by
default and prefer the ECDHE cipher suites. Today, most modern browsers
running on up-to-date operating systems will support ECDHE. In our
tests, approximately two thirds of HTTPS requests to our network support
ECDHE. The remaining traffic quietly falls back on a more standard
cipher suite without the visitor noticing.
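If you are curious which suite a given server negotiates with your client, a quick check from Node.js looks something like the sketch below. This is a hypothetical illustration rather than part of CloudFlare's tooling, and the hostname is just an example.

    import * as tls from "tls";

    // Open a TLS connection and print the negotiated cipher suite. Against an
    // ECDHE-preferring server with a modern client, the name should start with "ECDHE".
    const socket = tls.connect(
      { host: "www.cloudflare.com", port: 443, servername: "www.cloudflare.com" },
      () => {
        console.log(socket.getCipher()); // e.g. a name like 'ECDHE-RSA-AES128-GCM-SHA256'
        socket.end();
      }
    );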
Ultimately, CloudFlare's value proposition is built on trust. Core to
that trust is ensuring transactions passing through our network are
fundamentally secure. We will continue to work on both policy and
technology to ensure the security and integrity of our network.
PRISM has sparked a conversation on privacy and transparency broadly --
among citizens, between companies, and with our governments. At
CloudFlare, we are actively engaged in this conversation at many levels.
Our mission is to build a better web and we believe privacy and
transparency are critical to its foundation.
Happy IPv6 Day: Usage On the Rise, Attacks Too | 06-06-13
June 6th is known as World IPv6 Day so we thought it was a good time to
look at the trends in IPv6 usage across CloudFlare's network. Two big
themes we've seen: 1) IPv6 usage is growing steadily, but at the current
pace we're still going to be living with IPv4 for many years to come;
and 2) while the majority of IPv6 traffic comes from legitimate users on
mobile networks, attackers too are beginning to launch attacks over the protocol.
CloudFlare has supported IPv6 on our network for the last year and a
half. We have become one of the largest providers of the IPv6 web
because we offer a free IPv6 gateway
that allows any website to be available
over IPv6 even if a site's origin network doesn't yet support the
protocol. For the last year, we've enabled IPv6 for customers on
CloudFlare by default. Today, IPv6 is enabled for more than 1 million of
our customers' websites.
Since the beginning of 2013, IPv6 connections as a percentage of
CloudFlare's total traffic have fluctuated daily, from a minimum of 0.849% on January 5 to a maximum of 1.645% on June 3, 2013. If you look at the overall
trend, IPv6 connections to our network have grown 26.5% since the start
of the year.
Digging into where IPv6 connections are coming from it appears the
majority of the growth has been from mobile network providers.
Increasingly, traffic from mobile devices to the web has passed over
IPv6. We saw a significant drop in IPv6 connections from mid-March through early April, when a large mobile operator appears to have disabled and then re-enabled IPv6 connectivity on their network.
While the overall increase in IPv6 usage is encouraging, the trend
unfortunately indicates we are going to be living with IPv4 for some
time to come. At current growth rates, assuming adoption of IPv6 is
linear, it will take almost 67 years for IPv6 connections to surpass
IPv4 connections and the last IPv4 connection won't be retired until
May 10, 2148.
Things are a bit more optimistic if IPv6 adoption turns out to be
exponential rather than linear. In that case, IPv6 connections will
surpass IPv4 in about 5 years and 9 months. Not long thereafter, we'll
extinguish IPv4 entirely on January 10, 2020. Our guess is the reality
will be somewhere between the linear and exponential case. Regardless
of what IPv6's adoption curve looks like, as a CloudFlare user you're
covered. We anticipate we will be operating a dual-stack network with
both IPv4 and IPv6 support for all our customers until IPv4 is fully
retired, whether that takes 7 years or 140.
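As a rough sketch of the kind of extrapolation behind those dates (round, illustrative inputs rather than our exact measurements, so the outputs land in the same ballpark rather than on the same day):

    // Project when IPv6 passes 50% of connections, starting from a ~1.3% share
    // that grew 26.5% over the first five months of 2013 (annualized below).
    const share = 0.013;                      // approximate IPv6 share of connections, mid-2013
    const growthPerYear = 0.265 * (12 / 5);   // observed five-month growth, annualized

    // Linear: the share gains the same number of percentage points every year.
    const linearYears = (0.5 - share) / (share * growthPerYear);

    // Exponential: the share is multiplied by the same factor every year.
    const exponentialYears = Math.log(0.5 / share) / Math.log(1 + growthPerYear);

    console.log(linearYears.toFixed(0));      // "59" -- decades away, in the ballpark of the ~67 years above
    console.log(exponentialYears.toFixed(1)); // "7.4" -- a handful of years, in the ballpark of the figure above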
While the majority of IPv6 connections today are coming from legitimate
users on mobile networks, over the last two months we've seen a marked
increase in the number of IPv6-based web attacks. Largely these have
been DDoS attacks. The attacks have included both Layer 4 (e.g., SYN floods) and Layer 7 (application layer) attacks.
To date, the IPv6-based DDoS attacks have been relatively modest. The
largest we've seen to date generated approximately 3 gigabits per second
of traffic and accompanied a much larger traditional IPv4-based DDoS.
While a novelty, these attacks don't cause significant harm to
CloudFlare's systems. We designed CloudFlare anticipating the transition
to IPv6, so our defenses assume an IPv6-enabled world. We speculate,
however, that attackers may be targeting IPv6 as a way of bypassing
older defenses that rely largely on IPv4 blacklists.
IPv6 makes a strict blacklist on a per-IP basis much more challenging
since the number of addresses available to an attacker can be
significantly larger. This is a challenge that large blacklist operators
like Spamhaus are currently thinking
through. While IPv6 can present a challenge to some attack filtering
strategies, it also presents opportunities. For example, since IPv6
reduces the need for NATs and provides users addresses that are routable
all the way to the end device, we believe over time IPv6 will provide
the ability to build significantly more accurate whitelists.
We will continue to monitor overall IPv6 growth rates as well as
interesting trends in IPv6-based attacks. In the meantime, there's no
better way to celebrate World IPv6 Day than
signing up for CloudFlare
and ensuring your site is automatically available for the increasing
percentage of users that are accessing it over IPv6. It's free and will
only take you 5 minutes to join the modern web.
Today's Network Issue | 31-05-13
Today at 16:13 UTC a large amount of traffic began hitting our Los Angeles data center. We have an in-house team that monitors our network 24x7x365 and immediately all their alarms went off. We initially thought it was a very large attack. In fact, it was something much trickier to resolve.
CloudFlare makes wide use of Anycast routing. This gives us a very large capacity to stop huge DDoS attacks. The challenge is managing the routing to ensure that traffic goes to the correct place.
CloudFlare buys bandwidth to connect to the Internet via what are known as transit providers. The first transit provider we used starting back in 2010 was a company called nLayer. They have been a terrific partner over the years.
In the last year, nLayer merged with GTT. Then, about a month ago, GTT/nLayer purchased Inteliquent (a.k.a. TINET). Over the last few weeks, GTT/nLayer has been consolidating their network with Inteliquent's. When this is complete, GTT/nLayer will move from being a Tier 2 network provider to being one of the small handful of Tier 1 network providers.
Today's issue was an indirect result of this migration. GTT/nLayer previously connected to Global Crossing, another large transit provider that is now owned by Level3. As part of the GTT/nLayer/Inteliquent consolidation, Level3 switched what had been a route between Global Crossing and GTT/nLayer to instead be a route between Level3 and GTT/nLayer.
For most non-Anycasted traffic, this wouldn't cause any disruption. In our case, it shifted a large amount of traffic that would usually hit data centers on the east coast of the United States and Europe to all hit our facility in Los Angeles. In the worst case, this caused some machines in Los Angeles to overload, returning 502 Gateway Errors. Other visitors may have seen packet loss and slow connections as some links were saturated.
It wasn't immediately obvious what the cause of the issue was. We worked directly with GTT/nLayer's network team to rebalance traffic which temporarily put additional load on Seattle, then Dallas, then Chicago. While usually only customers nearby affected data centers would see an issue, in this case traffic as far away as Europe was landing in the wrong place.
Whether a visitor was affected or not was a bit of a crapshoot. We use multiple transit providers, so if your ISP wasn't connected to Level3 and you weren't naturally hitting an overloaded data center then you likely saw no problem. Overall, we estimate that around 10% of connections to our network were impacted for an approximately 20 minute window. A small percentage of users may have seen issues for a longer period, depending on their connection to Level3 and whether they were pulled to more than one affected location.
Neither Level3 nor GTT/nLayer had any way of knowing how the changes they were making to their systems would affect us downstream.
While this was a very tricky situation for us to anticipate or even diagnose when it was happening, the responsibility lies with us to ensure our routing is getting people to the right locations and no facilities are overburdened. We've added this scenario to the conditions that we guard against so a similar change upstream should not affect us in the future.
The GTT/nLayer migration is scheduled to be completed today. One of the benefits of connecting to Tier 1 providers is route stability. While today's network issue was painful, I am encouraged that the underlying reason for the issue stems from an effort to build a more robust, stable, and reliable network.
Syrian Internet Restored | 08-05-13
Yesterday, Syria's Internet connectivity was cut off from the rest of the world. At 14:12 UTC, approximately 19 hours and 30 minutes after it had been shut down, connectivity was returned. Here's a BGPlay video of routes being restored within the country.
Two interesting points. First, the government has stated that the outage was the result of a cable cut. Based on what we've seen, we believe this is highly unlikely. Syria's network connects to the rest of the Internet at four distinct points that are geographically separated. For traffic to be terminated entirely, all four connection points would need to be severed simultaneously.
Moreover, the video of the outage, as well as the video of the routes being restored, shows the systematic withdrawal of BGP routes across all of Syria's providers. This is not the signature we see when there is an actual cable cut.
Second, while most of the Internet was cut off in Syria, it appears there was a small portion of Syrian IP space that continued to have connectivity. Specifically, the following IP ranges behind AS29256:
Those prefixes continued to be announced to Deutsche Telekom, which means they would have continued to have access to the Internet.
We don't know who is behind that IP space. We're still investigating whether we saw any Internet traffic coming from that IP space. The fact that they were still available, however, further discredits the assertion that this was a cable cut.
Here is a graph showing the last 24 hours of Syrian traffic to the CloudFlare network.
How Syria Turned Off the Internet (Again) | 08-05-13
Today at 18:48 UTC, Syria dropped off the Internet. Based on the data we
collect from our network, as well as reports from other organizations
monitoring network routes, it appears that someone systematically
withdrew the BGP (Border Gateway Protocol) routes from the country's
border routers. This is the same technique that was used to withdraw
Syrian Internet access last November.
The video below, which we generated using BGPlay, shows the routes in to
the Syrian Internet being withdrawn:
The graph below shows the requests to CloudFlare's network from the
Syrian Internet space over the last 6 hours (times are UTC):
We will continue to monitor Syrian traffic and post updates here if we
see connectivity return.
Cribs: CloudFlare London Edition | 29-04-13
It's official, CloudFlare has arrived in London.
CloudFlare's first international office opened this month near St.
Paul's Cathedral in London. We decided to open an office outside Silicon
Valley for two major reasons: to get access to high quality software
engineering, network operations and technical support folks, and to
expand our 24/7 operations and support. Not to mention, London has a
vibrant start-up community that we are very happy to now be a part of.
London is 8 hours ahead of San Francisco making it the perfect location
for a hand-over from a team arriving at work at 0900 in California (the
London team is nearing the end of the day at 1700). By extending working
hours a little, it's easy to get 24-hour operations and support with just two offices.
Our new London office
And London's start-up community means there's a pool of talented people
in engineering, operations and support for CloudFlare to hire from. Plus
it enables CloudFlare to take part in the many meetups and user groups that flourish in and around Tech City.
We chose to be in the St. Paul's area because of its good public transport links to all parts of London and because the building we are in has everything from a restaurant and sports club to state-of-the-art bicycle storage. Nearby, One New Change and the surrounding area are full of shops and eateries. There's also Smithfield Market close by.
London hard at work
And being in London brings us close to many of our customers and partners, such as GoSquared.
We're actively hiring in both San Francisco and London. Check out our careers page.
W3TC and WP Super Cache Vulnerability Discovered, We've Automatically Patched | 25-04-13
The team at the research firm Sucuri announced a serious vulnerability in W3TC and WP Super Cache this afternoon. (Update: it appears the
vulnerability was first
reported on WordPress.org
about a month ago.) The vulnerability allows remote PHP code to be executed on the server of anyone running either of the two most
popular WordPress caching plugins. This is a serious vulnerability as it
could allow an attacker to execute code on your server.
Here are the versions of each plugin that are vulnerable:
- W3 Total Cache (version 0.9.2.8 and below are vulnerable,
version 0.9.2.9 and up are not vulnerable) / upgrade
- WP Super Cache (version 1.2 and below are vulnerable, version
1.3.x and up are not vulnerable) / upgrade
As a precaution, CloudFlare has applied a rule to our network which
protects against this specific vulnerability in both plugins. The
protection is applied for all CloudFlare accounts automatically, even
free accounts. You do not need to do anything to enable the protection.
Even with this protection in place, if you are running either of these
plugins you should upgrade immediately (W3TC / WP Super Cache).
The vulnerability is serious enough that we recommend you disable the
plugins until you have completed an upgrade. If you're not already a
CloudFlare customer, you can sign up for free to get protection.
The attack takes advantage of several functions in these plugins
including: mfunc, mclude, and dynamic-cached-content. An attacker can
execute a PHP command on the server by posting a comment to a
WordPress blog running a vulnerable version of W3 Total Cache or WP
Super Cache. For example, if you are running a vulnerable version of the
plugins, the following will result in your current PHP version being
printed in the comment:
<!--mfunc echo PHP_VERSION; --><!--/mfunc-->
While this is harmless, the same mfunc call in either plugin can run
other arbitrary commands on your server. This could be used to gain
access to the server, execute arbitrary database commands, or remotely
install malware. Again, this is a very severe vulnerability and all W3TC
and WP Super Cache users should upgrade immediately (W3TC Upgrade / WP Super Cache Upgrade).
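For sites that can't upgrade right away, the underlying idea of a protective rule is simple: treat these plugin tags in user-supplied content as hostile. Below is a minimal sketch of that kind of filter; it is an illustration only, not the actual rule CloudFlare deployed.

    // Strip the tags these plugins interpret (mfunc, mclude, dynamic-cached-content)
    // out of untrusted comment text before it is stored or rendered.
    const DANGEROUS_TAGS = /<!--\s*\/?\s*(mfunc|mclude|dynamic-cached-content)[^>]*-->/gi;

    function sanitizeComment(comment: string): string {
      return comment.replace(DANGEROUS_TAGS, "");
    }

    // The probe from the example above is removed entirely:
    console.log(sanitizeComment("hi <!--mfunc echo PHP_VERSION; --><!--/mfunc--> there"));
    // -> "hi  there"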
What CloudFlare Logs | 24-04-13
Over the last few weeks, we've had a number of requests for information
about what data CloudFlare logs when someone visits a site on our
network. While we have provided a Privacy
Policy that outlines how we
keep information private, I wanted to take the time to clarify our
customer log retention policies.
What CloudFlare Logs
When you visit a site on CloudFlare's network, we record information
about that visit. If you run a web server you'll be familiar with these
logs as they're similar to an Apache access log. We log data for two
reasons: 1) to help us identify security threats and attacks hitting our
customers in order to mitigate them; and 2) in order to identify
performance bottlenecks and errors on our system.
It's somewhat hard to fathom the scale of the log data that we generate.
Every minute of every day we generate more than 20GB (compressed) of log
data. That translates, at our current volume, to more than 10 Petabytes
of storage needed to store a year's worth of logs, and, due to our
continued growth, that volume has been doubling every 4 months or
so. Today, even if we wanted to, we don't have the ability to retain all
the logs we generate. This means that, for most customers, we discard
access logs within 4 hours of them being recorded.
For our Enterprise customers, we offer an optional feature that allows
them to export their raw log files in Apache format. This requires us to
store log files for a longer period of time in order to allow them to be
downloaded. By default, we store logs for these customers for 3 days.
Since CloudFlare does not keep the raw logs, it is impossible for us to
answer questions like: tell me all the visitors who have been to a
particular website on CloudFlare's network.
However, CloudFlare does generate aggregate data, so we can provide
analytics back to customers. We use the aggregated data to populate
things like the CloudFlare Analytics page which includes numbers of
hits, page views, bandwidth consumed and unique visitors. As logs are
received, we run a stream processing engine that extracts this summary
data. This data is correlated in each of our edge data centers and then
sent to one of our core facilities in order to report through our UI.
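Conceptually, that stream processing step looks something like the sketch below. The field and type names are invented for illustration; the real engine is distributed across our edge data centers.

    // Fold each access-log line into per-site counters as it arrives, so analytics
    // can be reported without retaining the raw line itself.
    interface LogLine {
      site: string;
      bytes: number;
      visitorId: string;
      isPageView: boolean;
    }

    interface SiteSummary {
      hits: number;
      pageViews: number;
      bandwidth: number;
      uniques: Set<string>; // a production system would use a compact sketch such as HyperLogLog
    }

    const summaries = new Map<string, SiteSummary>();

    function ingest(line: LogLine): void {
      const s = summaries.get(line.site) ??
        { hits: 0, pageViews: 0, bandwidth: 0, uniques: new Set<string>() };
      s.hits += 1;
      s.bandwidth += line.bytes;
      if (line.isPageView) s.pageViews += 1;
      s.uniques.add(line.visitorId);
      summaries.set(line.site, s);
    }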
This same data summary engine also looks for attack patterns, which are then used to provide security protection for our customers' websites.
Using this engine, we can identify an attack on one site, usually in
less than 1 minute, and then push updated security rules that then
protect every site using CloudFlare from that same attack.
Access logs for most customers are stored briefly at the edge of our
network and then deleted within 4 hours. If there is an error, those
logs are transmitted back to one of our core facilities in order for us
to diagnose the error. Error logs sent to core are currently kept for 1
week then discarded.
Going forward, we want to allow customers who would like to have more
insight into the visitors to their sites to be able to choose to do so.
As we do, we will provide details on how any feature we add changes our
log retention policy, and we will continue to be guided by the principle
that our customers should be able to understand and control what data is
being stored about visitors to their sites.