My CloudFlare Argo Response Time Improvement

CloudFlare has added this nifty chart to give you an idea of the performance boost its Argo Smart Routing feature adds (or takes away).

Unfortunately, the statistics are limited to the previous 48 hours, so I’ll try to post a couple of separate charts.

Here’s one to start.

CloudFlare Argo delivered a 21% improvement in routing time vs. standard routing; only about 20% of the traffic was smart-routed.
The data for this graph is based on about 700 MB / 100,000 requests, with roughly a 10% CloudFlare cache rate.

Above is a histogram of Time To First Byte (TTFB). The blue and orange series represent the before and after TTFB for locations where Argo found a smart route.
TTFB measures the delay between Cloudflare sending a request to your server and receiving the first byte in response. TTFB includes network transit time (which Smart Routing optimizes) and processing time on your server (which Argo has no effect on).
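A quick way to eyeball TTFB from your own machine (a different vantage point than Cloudflare’s edge, so the numbers won’t match the chart) is curl’s timing variables; the URL below is a placeholder:

```
# time_starttransfer approximates TTFB: seconds from the start of the
# request until the first byte of the response arrives
curl -o /dev/null -s \
  -w "connect: %{time_connect}s  TTFB: %{time_starttransfer}s\n" \
  https://example.com/
```

Running it a few times before and after enabling Argo gives a rough, unscientific sanity check on the dashboard numbers.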

The geography of this first sample of Argo users was limited entirely to Moscow, Russia – suggesting that over the past 48 hours, CloudFlare’s link to that side of the planet has performed faster. All the data originated from Google’s Northern California data center.

Site 2: This site is served out of AWS’s US East data center.

Sample Size: 150,000 Requests / 2 GB

Note a modest improvement in both China and Ireland.

Site 3: This site only had traffic of about 300 MB / 25,000 Requests over the past 48 hours so CloudFlare is unable to display performance data.
Argo Smart Routing is optimizing 12.0% of requests to your origin. There have not been enough requests to your origin in the last 48 hours to display detailed performance data.

Adsense Auto Ads with Regular Ads

After seeing the AdSense Auto Ads beta feature pop up on my account, I was excited to jump right in. I did, however, worry that AdSense would be too conservative in placing ads, or just not place the right type of ads for my site. To avoid losing a good chunk of revenue for a couple of days, I simply placed the Auto Ads code alongside the existing ads on my site.

Auto Ads Setup Screen

I will be monitoring the performance closely and may remove the hard-coded ads to see if performance keeps up.

Set up Auto ads on your site

Copy and paste this code in between the <head> tags of your site. It’s the same code for all your pages. You don’t need to change it even if you change your global preferences. See our code implementation guide for more details.
For those of you looking to get started with auto-ads, you may be able to place the following code onto your site (with your own publisher-id).
<script async src="//pagead2.googlesyndication.com/pagead/js/adsbygoogle.js"></script>
<script>
  (adsbygoogle = window.adsbygoogle || []).push({
    google_ad_client: "ca-pub-0545639743190253",
    enable_page_level_ads: true
  });
</script>

The Auto Ads management page can be found in your AdSense account, but since the program is still in beta, many users don’t have access yet.


Google also offers Auto Ads for AMP pages; in their documentation for the AMP implementation, things are set out more clearly:
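As I recall from the documentation, the AMP implementation boils down to two pieces – the publisher ID below is a placeholder, so verify the exact snippet against Google’s docs:

```
<!-- In the <head>, load the amp-auto-ads extension: -->
<script async custom-element="amp-auto-ads"
        src="https://cdn.ampproject.org/v0/amp-auto-ads-0.1.js"></script>

<!-- Just after the opening <body> tag, with your own publisher ID: -->
<amp-auto-ads type="adsense"
              data-ad-client="ca-pub-XXXXXXXXXXXXXXXX"></amp-auto-ads>
```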


Embed PDF, DOC, DOCX, TIFF, and more with Google or MS Doc Viewers

When it comes to embedding non-HTML, non-image content on your website, you have a couple of options: Google’s document viewer or Microsoft’s.
In my tests, MS’s Office embed tool performs best for embedding content, but you must URL-encode the destination URL. I also found MS fails on some URLs, so your mileage may vary.
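As a sketch of the URL-encoding step, here’s how the two embed URLs can be built in Python. The endpoints shown (view.officeapps.live.com/op/embed.aspx and docs.google.com/gview) are the commonly used ones; treat them as assumptions to verify before relying on them:

```python
from urllib.parse import quote

def ms_office_embed_url(doc_url: str) -> str:
    """Build a Microsoft Office web viewer embed URL.
    The destination URL must be percent-encoded ("urlencoded")."""
    return "https://view.officeapps.live.com/op/embed.aspx?src=" + quote(doc_url, safe="")

def google_gview_url(doc_url: str) -> str:
    """Build a Google Docs viewer ("gview") embed URL."""
    return "https://docs.google.com/gview?embedded=true&url=" + quote(doc_url, safe="")

# Hypothetical document URL (note the space, which must become %20):
doc = "https://example.com/reports/q1 2018.pdf"
print(ms_office_embed_url(doc))
print(google_gview_url(doc))
```

The `safe=""` argument forces even `/` and `:` to be percent-encoded, which is what tripped up some of my un-encoded test URLs. The resulting URL goes in an iframe’s `src` attribute.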

The MS viewer scores 99 and 96 on the Google Pagespeed tool for Mobile and Desktop, respectively.

Google Docs Viewer has a couple of URLs through which to load documents. Pagespeed, Pingdom, and Websitegrader tests show the “gview” URL performs slightly better.




Update 2018-09-16: In my latest tests, the scripts still score about the same, with GView being a little lighter (4 fewer requests, about 200 KB less).

          PageSpeed  YSlow  Time  Size    Requests  Google PageSpeed  Google PageSpeed
          Score      Score                          (Mobile)          (Desktop)
Gview     98%        93%    2.4s  465 KB  16        53/100            83/100
Viewerng  95%        93%    2.1s  662 KB  20        54/100            82/100

Optimize [Google PageSpeed Suggestion]

PageSpeed Insights is a Google-powered tool that gives suggestions on how to speed up your site.

If you’re an AdWords user, you may encounter the suggestion to compress images served from Google’s ad network. Unfortunately, you will not be able to compress these images, as they are located on Google’s servers as part of their ad infrastructure.

I’ve submitted a bug report to Google via their complaint form requesting they optimize their ad-infrastructure images. I encourage anyone coming to this page to do the same.


Flags / Options for pngcrush

Here’s the list of flags / options for pngcrush 1.8.11 (January 2017 version).

A couple of common flags are “-ow”, to overwrite the original file with the crushed file, and “-brute”, to brute-force all optimization methods (slowest, but most compression).

pngcrush [options except for -e -d] infile.png outfile.png
pngcrush -e ext [other options] file.png ...
pngcrush -d dir/ [other options] file.png ...
pngcrush -ow [other options] file.png [tempfile.png]
pngcrush -n -v file.png ...
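For example, combining the two common flags mentioned above (the filenames are placeholders):

```
# Slowest but most thorough: try all methods and overwrite the original
pngcrush -brute -ow photo.png

# Safer: keep the original and write the crushed copy to a new file
pngcrush -brute photo.png photo-crushed.png
```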


-bail (bail out of trial when size exceeds best size found)
-blacken (zero samples underlying fully-transparent pixels)
-brute (use brute-force: try 148 different methods)
-c color_type of output file [0, 2, 4, or 6]
-check (check CRC and ADLER32 checksums)
-d directory_name/ (where output files will go)
-e extension (used for creating output filename)
-f user_filter [0-5] for specified method
-fix (salvage PNG with otherwise fatal conditions)
-force (write output even if IDAT is larger)
-g gamma (float or fixed*100000, e.g., 0.45455 or 45455)
-huffman (use only zlib strategy 2, Huffman-only)
-iccp length "Profile Name" iccp_file
-itxt b[efore_IDAT]|a[fter_IDAT] "keyword"
-keep chunk_name
-l zlib_compression_level [0-9] for specified method
-loco ("loco crush" truecolor PNGs)
-m method [1 through 150]
-max maximum_IDAT_size [default 524288L]
-mng (write a new MNG, do not crush embedded PNGs)
-n (no save; doesn't do compression or write output PNG)
-new (Use new default settings (-reduce))
-newtimestamp (Reset file modification time [default])
-nobail (do not bail out early from trial -- see "-bail")
-nocheck (do not check CRC and ADLER32 checksums)
-nofilecheck (do not check for infile.png == outfile.png)
-noforce (default; do not write output when IDAT is larger)
-nolimits (turns off limits on width, height, cache, malloc)
-noreduce (turns off all "-reduce" operations)
-noreduce_palette (turns off "-reduce_palette" operation)
-old (Use old default settings (no -reduce))
-oldtimestamp (Do not reset file modification time)
-ow (Overwrite)
-q (quiet) suppresses console output except for warnings
-reduce (do lossless color-type or bit-depth reduction)
-rem chunkname (or "alla" or "allb")
-replace_gamma gamma (float or fixed*100000) even if it is present.
-res resolution in dpi
-rle (use only zlib strategy 3, RLE-only)
-s (silent) suppresses console output including warnings
-save (keep all copy-unsafe PNG chunks)
-speed Avoid the AVG and PAETH filters, for decoding speed
-srgb [0, 1, 2, or 3]
-ster [0 or 1]
-text b[efore_IDAT]|a[fter_IDAT] "keyword" "text"
-trns_array n trns[0] trns[1] .. trns[n-1]
-trns index red green blue gray
-v (display more detailed information)
-version (display the pngcrush version)
-warn (only show warnings)
-w compression_window_size [32, 16, 8, 4, 2, 1, 512]
-z zlib_strategy [0, 1, 2, or 3] for specified method
-zmem zlib_compression_mem_level [1-9, default 9]
-zitxt b|a "keyword" "lcode" "tkey" "text"
-ztxt b[efore_IDAT]|a[fter_IDAT] "keyword" "text"
-h (help and legal notices)
-p (pause)

These notes were also printed with the program’s info output.

| Copyright (C) 1998-2002, 2006-2016 Glenn Randers-Pehrson
| Portions Copyright (C) 2005 Greg Roelofs
| This is a free, open-source program. Permission is irrevocably
| granted to everyone to use this version of pngcrush without
| payment of any fee.
| Executable name is pngcrush
| It was built with bundled libpng-1.6.28
| and is running with bundled libpng-1.6.28
| Copyright (C) 1998-2004, 2006-2016 Glenn Randers-Pehrson,
| Copyright (C) 1996, 1997 Andreas Dilger,
| Copyright (C) 1995, Guy Eric Schalnat, Group 42 Inc.,
| and bundled zlib-1.2.11, Copyright (C) 1995-2017,
| Jean-loup Gailly and Mark Adler,
| and using “clock()”.
| It was compiled with gcc version 4.2.1 Compatible Apple LLVM 8.0.0 (clang-800.0.42.1).

If you have modified this source, you may insert additional notices
immediately after this sentence.
Copyright (C) 1998-2002, 2006-2016 Glenn Randers-Pehrson
Portions Copyright (C) 2005 Greg Roelofs

DISCLAIMER: The pngcrush computer program is supplied “AS IS”.
The Author disclaims all warranties, expressed or implied, including,
without limitation, the warranties of merchantability and of fitness
for any purpose. The Author assumes no liability for direct, indirect,
incidental, special, exemplary, or consequential damages, which may
result from the use of the computer program, even if advised of the
possibility of such damage. There is no warranty against interference
with your enjoyment of the computer program or against infringement.
There is no warranty that my efforts or the computer program will
fulfill any of your particular purposes or needs. This computer
program is provided with all faults, and the entire risk of satisfactory
quality, performance, accuracy, and effort is with the user.

LICENSE: Permission is hereby irrevocably granted to everyone to use,
copy, modify, and distribute this computer program, or portions hereof,
for any purpose, without payment of any fee, subject to the following
restrictions:

1. The origin of this binary or source code must not be misrepresented.

2. Altered versions must be plainly marked as such and must not be
misrepresented as being the original binary or source.

3. The Copyright notice, disclaimer, and license may not be removed
or altered from any source, binary, or altered source distribution.


Jan-22 2017: Search Console performed an infrastructure update that may cause a change in your data.

Google Webmaster Tools performed an update on January 22nd which may cause anomalies in search data.

For my web properties, this resulted in a 50% drop in reported “Structured Data” elements in webmaster tools.

According to Google’s Data Anomalies reporting page, an event for January 22nd has not yet been reported.


Google Analytics “Redundant Hostnames”

As of October 14th, 2014, Google Analytics warns users about redundant hostnames causing hits to their analytics property.
Many users are now seeing the error: ‘You have 1 unresolved issue: Redundant Hostnames.’

Redundant Hostnames - Property is receiving data from redundant hostnames.

This means that there is more than one hostname that can be used to access a particular page – for example, the same page being reachable both with and without the www prefix. This issue can also occur if your site is accessible by its IP address. For optimal user experience and SEO, webmasters should 301-redirect traffic to one consolidated domain.
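As a sketch, assuming Apache with mod_rewrite enabled (the hostname is a placeholder), a consolidating 301 redirect in .htaccess can look like:

```
RewriteEngine On
# Send any request whose Host header is not the canonical www host
# (non-www, bare IP, etc.) to the canonical host with a 301
RewriteCond %{HTTP_HOST} !^www\.example\.com$ [NC]
RewriteRule ^(.*)$ https://www.example.com/$1 [L,R=301]
```

Nginx users would accomplish the same thing with a catch-all `server` block that returns a 301.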

In addition to providing 301 redirects, there are some best practices you can put into place to ensure your content is not duplicated across hosts.
The first is to add a preferred-host declaration to your robots.txt file. Replace the host with your preferred one, with or without the www prefix.
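The directive in question is most likely robots.txt’s non-standard Host line (honored mainly by Yandex; Google ignores it), sketched here with a placeholder domain:

```
Host: www.example.com
```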

Google Webmaster Tools also allows you to set a preferred hostname under “Site Settings”. This will ensure that your host is consistent across all traffic from Google. You must have both the www and non-www versions of your site verified in WMT in order to use this feature.

Recommended Crawl Rate for Bots

You can set your desired bot crawl delay in your robots.txt file by adding this after the user-agent field: Crawl-Delay: 10

Most well-behaved bots (Bingbot, Yandex) will then wait 10 seconds between requests as they crawl your site for links; note that Googlebot ignores Crawl-Delay – its crawl rate is managed in Search Console instead.
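Put together, a minimal robots.txt applying the delay to all bots looks like:

```
User-agent: *
Crawl-delay: 10
```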
My recommendation, however, is not to set a crawl delay at all. You want bots like Googlebot and Bingbot to crawl your website as often as possible so your freshest content appears in search results. A crawl delay only makes sense when you have an underpowered server (perhaps with poorly written code) that bots could overwhelm with traffic, causing it to crash. Googlebot, however, is pretty smart: if it notices increased response times due to the volume of requests it is sending, it will back off and crawl more slowly. I’m unsure how Bingbot handles an accidental denial of service, but you can set your preferred crawl settings in Bing Webmaster Tools so Microsoft focuses its crawling on off-peak times to keep from overwhelming your server.

In terms of SEO, faster crawling is better, and quality new content is key.
Questions and experiences in the comments!