Does Compression Increase Speeds When Using a CDN?

CloudFlare automatically compresses content before serving it over their network, regardless of whether the origin server supports compression. When internet pipes were small, gzipping content typically shrank HTML, CSS, and JavaScript by 60 to 90 percent of the original size, at a cost of CPU time.
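To see that trade-off concretely, here is a minimal sketch using Python's zlib (the same DEFLATE algorithm mod_deflate uses). The sample document is a stand-in, not the page used in the tests below:

```python
# Size/CPU trade-off of DEFLATE at different levels, via Python's zlib.
# The sample "page" is illustrative, not the actual test document.
import time
import zlib

html = (b"<html><head><title>test</title></head><body>"
        + b"<p>Some repetitive markup, as real pages tend to have.</p>" * 200
        + b"</body></html>")

for level in (1, 6, 9):
    t0 = time.perf_counter()
    compressed = zlib.compress(html, level)
    elapsed = time.perf_counter() - t0
    print(f"level {level}: {len(compressed)}/{len(html)} bytes "
          f"({len(compressed) / len(html):.1%} of original), {elapsed * 1e6:.0f} us")
```

Higher levels squeeze out slightly more bytes for noticeably more CPU time, which is the whole tension this post measures.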

With CloudFlare and similar CDN services, it is possible to offload the gzipping of content onto the middleman servers, saving you CPU time before requests are served. In theory, if the pipes between your server and CloudFlare are large enough, the extra uncompressed packets can travel from your server in less time than it would take to compress the content. To test this, I set up two m4.large EC2 instances and made a couple hundred HTTP requests between the two. Although not a perfectly representative test page for the internet, I used the HTML of this site as the test object.

Machine Details:
Amazon Linux AMI 2015.09.0 x86_64 HVM GP2
Size: m4.large, 2 vCPU, 8 GB Memory
Disk: Provisioned IOPS SSD
Network Speed: “Medium”
Requests Served via Apache 2.4 with mod_deflate

CloudFlare “Pro” Account

First, to check whether there was any room for improvement, 200 requests at each compression level were made from one m4.large AMI box to another m4.large AMI box. The pages were not decompressed after they were received.
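The post doesn't include its measurement harness, but a rough stand-in looks like this: serve a small document from a local HTTP server, request it N times, and average the request time, with and without compression. (As in the test, the client does not decompress what it receives.)

```python
# Hypothetical stand-in for the benchmark harness: time N requests
# against a local server that optionally deflate-compresses the body.
import http.server
import statistics
import threading
import time
import urllib.request
import zlib

BODY = b"<html><body>" + b"<p>hello compression</p>" * 300 + b"</body></html>"

def make_handler(compress_level):
    class Handler(http.server.BaseHTTPRequestHandler):
        def do_GET(self):
            # Pre-compress (or not); the client never decompresses,
            # matching the methodology described above.
            payload = BODY if compress_level is None else zlib.compress(BODY, compress_level)
            self.send_response(200)
            self.send_header("Content-Length", str(len(payload)))
            self.end_headers()
            self.wfile.write(payload)
        def log_message(self, *args):  # silence per-request logging
            pass
    return Handler

def bench(compress_level, n=50):
    server = http.server.HTTPServer(("127.0.0.1", 0), make_handler(compress_level))
    threading.Thread(target=server.serve_forever, daemon=True).start()
    url = f"http://127.0.0.1:{server.server_port}/"
    times = []
    for _ in range(n):
        t0 = time.perf_counter()
        urllib.request.urlopen(url).read()
        times.append(time.perf_counter() - t0)
    server.shutdown()
    server.server_close()
    return statistics.mean(times)

print(f"no compression: {bench(None):.6f} s")
print(f"level 1:        {bench(1):.6f} s")
print(f"level 6:        {bench(6):.6f} s")
```

Loopback traffic exaggerates the "fat pipe" case, so expect compression to look even worse here than over a real network.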


AMI -> AMI (direct)

Configuration               Size        % of Original   Avg. Request Time   vs. No Compression
No compression              13.784 KB   100%            0.001252285 s       baseline
Level 1 deflate             5.747 KB    41.7%           0.0016389 s         31% longer
Level 6 deflate (default)   5.203 KB    37.7%           0.00178892 s        43% longer
Level 9 deflate             5.202 KB    37.7%           0.0017924 s         43% longer

Downloading files between these two boxes (via Apache) clocked in at 69.5 MB/s; at that rate, bandwidth has very little effect on the download time of a page, which undercuts mod_deflate's usefulness. Because of the CPU cost, mod_deflate actually adds at least 31 percent MORE load time to each request. Level 1 compression manages to shrink the document to 41.7% of its original size, but our fat pipes don't care about the reduction.
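A back-of-the-envelope check using the numbers above makes the same point: the transfer time saved by level 1 compression is smaller than the extra time each compressed request took. (The measured delta already includes the faster transfer, so the pure CPU overhead is, if anything, larger than this.)

```python
# Arithmetic from the measurements above: does the transfer time saved
# by compression outweigh the extra time compression added per request?
PIPE_BPS = 69.5 * 1024 * 1024                      # measured throughput, bytes/sec

uncompressed_kb, compressed_kb = 13.784, 5.747     # level 1 deflate sizes
bytes_saved = (uncompressed_kb - compressed_kb) * 1024

transfer_time_saved = bytes_saved / PIPE_BPS       # time not spent on the wire
cpu_time_added = 0.0016389 - 0.001252285           # measured per-request delta

print(f"transfer time saved: {transfer_time_saved * 1e6:.0f} us")
print(f"extra request time:  {cpu_time_added * 1e6:.0f} us")
print("compression wins" if transfer_time_saved > cpu_time_added else "no compression wins")
```

On this pipe, compression saves roughly 110 microseconds of transfer time while costing nearly 390 microseconds per request, so skipping it comes out ahead.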

With the “No Compression” model showing promise for delivering documents faster as long as the pipe is big enough, the next step was to determine how long these requests took when they were routed through CloudFlare’s network. This test involved the traffic flowing from one m4.large instance to CloudFlare and back to another m4.large instance. Despite routing this traffic through CloudFlare, the throughput between the two m4 instances is still 69.5MB/s.

AMI -> CloudFlare -> AMI

Configuration               Size        % of Original   Avg. Request Time   vs. No Compression
No compression              5.173 KB    37.5%           0.00639284 s        baseline
Level 1 deflate             5.166 KB    37.5%           0.00673343 s        5.3% longer
Level 6 deflate (default)   5.159 KB    37.4%           0.007528945 s       17.7% longer
Level 9 deflate             5.174 KB    37.5%           0.013901415 s       117% longer

(Sizes here are as received from CloudFlare, which compresses responses itself; that is why even the "no compression" origin configuration arrives at 37.5% of the original size.)

Level 1 deflate compression comes out as the clear winner if you're trying to keep your bandwidth bill low: origin-server bandwidth requirements are cut by almost 60% while adding only 5% to the load time. If you have unmetered bandwidth and a fat pipe, you're best off avoiding origin-server compression altogether.

Theory: CloudFlare needs to decompress and parse all content passing through its servers for the optimization processes it provides. When content arrives compressed, it costs additional CPU time on the origin server, and CloudFlare's network must spend more time decompressing it before the content can be parsed and compressed again with CloudFlare's algorithms.

Suggestion: Add “DeflateCompressionLevel 1” to your httpd.conf file.
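For Apache 2.4 with mod_deflate, that amounts to something like the following httpd.conf fragment (the module path and MIME-type list are illustrative; adjust them to your install and the content you actually serve):

```
# Load mod_deflate and compress text responses at the cheapest level.
LoadModule deflate_module modules/mod_deflate.so

# 1 = fastest, least CPU; 9 = smallest output, most CPU.
DeflateCompressionLevel 1

# Compress only text-like responses; images and archives are already
# compressed and would just waste CPU.
AddOutputFilterByType DEFLATE text/html text/css application/javascript
```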