If you hit Google’s webpage submission limit, you may receive the errors below. The only ways around these limits are to:

  1. Wait 24 hours and submit again, or
  2. Use a different Google account.

You have reached your submission limit using this tool. You can add more URLs using a sitemap. Monitor your site’s search traffic in Search Console.


An error has occurred. Please try again later.
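
Since Google’s own suggested workaround is to submit a sitemap, here is a minimal sitemap.xml sketch you can adapt (the URL and date are placeholders):

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://example.com/page-1/</loc>
    <lastmod>2018-01-15</lastmod>
  </url>
</urlset>

You can then submit the sitemap URL through Search Console’s Sitemaps section instead of submitting pages one at a time.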

Google chose different canonical than user [Google Search Console / Webmaster Tools]

This URL is marked as canonical for a set of pages, but Google thinks another URL makes a better canonical. Because we consider this page a duplicate, we did not index it; only the canonical page is indexed. We recommend that you explicitly mark this page as a duplicate of the canonical URL. To learn which page is the canonical, click the table row to run an info: query for this URL, which should list its canonical page.

What to do: It’s best to follow Google’s advice on this one. Please the Google gods and set their version as the canonical URL.

Submitted URL not selected as canonical [Google Search Console / Webmaster Tools]

The URL is one of a set of duplicate URLs without an explicitly marked canonical page. You explicitly asked this URL to be indexed, but because it is a duplicate, and Google thinks that another URL is a better candidate for canonical, Google did not index this URL. Instead, we indexed the canonical that we selected. The difference between this status and “Google chose different canonical than user” is that, in this case, you explicitly requested indexing.

What to do: If you are properly using canonical tags to tell Google which page to prefer, there is nothing more to do. It may not be necessary to include non-canonical URLs in your sitemap, but I have found it helpful, as Google will sometimes index some of these non-canonical URLs and display them in the search results.
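
For reference, a canonical tag is just a link element in the <head> of the duplicate page pointing at the version you want indexed. A minimal sketch (the URL is a placeholder; use your own preferred page):

<!-- On the duplicate page, tell Google which URL is the preferred version -->
<link rel="canonical" href="https://example.com/preferred-page/" />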

My CloudFlare Argo Response Time Improvement

CloudFlare has added this nifty chart to give you an idea of the performance boost their Argo Smart Routing feature adds (or takes away).

Unfortunately, the statistics are limited to the previous 48 hours, so I’ll try to post a couple of separate charts.

Here’s one to start.

CloudFlare Argo showed a 21% improvement in routing time versus standard routing; only about 20% of the traffic was smart-routed.
The data for this graph covers roughly 700 megabytes / 100,000 requests, with about a 10% CloudFlare cache rate.

Above is a histogram of Time To First Byte (TTFB). The blue and orange series represent the before and after TTFB in locations where Argo found a Smart Route.
TTFB measures the delay between Cloudflare sending a request to your server and receiving the first byte in response. TTFB includes network transit time (which Smart Routing optimizes) and processing time on your server (which Argo has no effect on).
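
If you want to spot-check TTFB yourself, one rough way is with curl (this measures from your own machine rather than from Cloudflare’s edge, so treat it as a sanity check, not the same metric; the URL is a placeholder):

# Print DNS, connect, and time-to-first-byte timings in seconds
curl -o /dev/null -s -w "dns: %{time_namelookup}  connect: %{time_connect}  ttfb: %{time_starttransfer}\n" https://example.com/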

The geography of this first sample of Argo traffic was entirely limited to Moscow, Russia, suggesting that over the past 48 hours CloudFlare’s link to that side of the planet has performed faster. All the data originated from Google’s Northern California data center.

Site 2: This site is being served out of AWS East Data Center.

Sample Size: 150,000 Requests / 2 GB

Note a modest improvement in both China and Ireland.

Site 3: This site only had traffic of about 300 MB / 25,000 Requests over the past 48 hours so CloudFlare is unable to display performance data.
Argo Smart Routing is optimizing 12.0% of requests to your origin. There have not been enough requests to your origin in the last 48 hours to display detailed performance data.

AdSense Auto Ads with Regular Ads

After seeing the AdSense Auto Ads beta feature pop up on my account, I was excited to jump right in. I did, however, worry AdSense would be too conservative in placing ads, or would simply not place the right type of ads for my site. To avoid a couple-day drop in a good chunk of revenue, I simply placed the Auto Ads code alongside the existing ads on my site.

Auto Ads Setup Screen

I will be monitoring the performance closely and may remove the hard-coded ads to see if performance keeps up.

Set up Auto ads on your site

Copy and paste this code in between the <head> tags of your site. It’s the same code for all your pages. You don’t need to change it even if you change your global preferences. See our code implementation guide for more details.
For those of you looking to get started with Auto Ads, you may be able to place the following code onto your site (with your own publisher ID).
<script async src="//pagead2.googlesyndication.com/pagead/js/adsbygoogle.js"></script>
<script>
  (adsbygoogle = window.adsbygoogle || []).push({
    google_ad_client: "ca-pub-0545639743190253",
    enable_page_level_ads: true
  });
</script>

The auto-ads management page can be found at:

https://www.google.com/adsense/new/u/0/pub-0545639743190253/myads/auto-ads

but since the program is still in beta, many users don’t have access yet.

UPDATE:

Google also offers Auto Ads for AMP pages; in their documentation, the AMP implementation is set out more clearly.
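
From memory of the AMP docs, the AMP version boils down to two pieces of markup, sketched below with a placeholder publisher ID (check Google’s current amp-auto-ads documentation before relying on this):

<!-- In the <head> of your AMP page: load the amp-auto-ads component -->
<script async custom-element="amp-auto-ads"
        src="https://cdn.ampproject.org/v0/amp-auto-ads-0.1.js"></script>

<!-- In the <body> of your AMP page: enable auto ads with your own ca-pub ID -->
<amp-auto-ads type="adsense" data-ad-client="ca-pub-XXXXXXXXXXXXXXXX"></amp-auto-ads>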


Google Doc Viewer URLs

Google Docs Viewer has a couple of URLs to load documents through. PageSpeed, Pingdom, and Website Grader tests show the “gview” URL performs slightly better.

https://docs.google.com/viewerng/viewer?url=http://rehmann.co/blog/wp-content/uploads/2017/04/HelloWorld.pdf

https://docs.google.com/gview?embedded=true&url=http://rehmann.co/blog/wp-content/uploads/2017/04/HelloWorld.pdf
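
To embed either viewer on a page, the URL typically just goes into an iframe. A minimal sketch (the document URL and dimensions are placeholders):

<!-- Embed a PDF via the "gview" viewer; the url parameter points at your document -->
<iframe src="https://docs.google.com/gview?embedded=true&url=https://example.com/HelloWorld.pdf"
        width="600" height="780" style="border: none;"></iframe>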



Optimize tpc.googlesyndication.com/icore_images [Google PageSpeed Suggestion]

PageSpeed Insights is a Google-powered tool located at https://developers.google.com/speed/pagespeed/insights/. It gives suggestions on how to speed up your site.

If you’re an AdWords user, you may encounter the suggestion to compress images located at https://tpc.googlesyndication.com/icore_images/. Unfortunately, you will not be able to compress these images, as they are hosted on Google’s servers as part of its ad infrastructure.

I’ve submitted a bug report to Google via their complaint form requesting they optimize their ad infrastructure images. I encourage anyone coming to this page to submit a complaint at https://support.google.com/adwords/contact/aw_complaint.

PLEASE SUBMIT A COMPLAINT TOO!!

Flags / Options for pngcrush

Here’s the list of flags / options for pngcrush-1.8.11, the January 2017 version.

A couple of common flags are “-ow”, to overwrite the original file with the crushed file, and “-brute”, to brute-force all optimization methods (slowest, but most compression).
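
For example (assuming pngcrush is on your PATH; the filenames are placeholders):

# Crush in place, trying all 148 optimization methods (slow but thorough)
pngcrush -ow -brute original.png

# Or write the crushed copy to a new file and leave the original untouched
pngcrush -brute original.png crushed.png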

usage:
pngcrush [options except for -e -d] infile.png outfile.png
pngcrush -e ext [other options] file.png ...
pngcrush -d dir/ [other options] file.png ...
pngcrush -ow [other options] file.png [tempfile.png]
pngcrush -n -v file.png ...

options:

-bail (bail out of trial when size exceeds best size found)
-blacken (zero samples underlying fully-transparent pixels)
-brute (use brute-force: try 148 different methods)
-c color_type of output file [0, 2, 4, or 6]
-check (check CRC and ADLER32 checksums)
-d directory_name/ (where output files will go)
-e extension (used for creating output filename)
-f user_filter [0-5] for specified method
-fix (salvage PNG with otherwise fatal conditions)
-force (write output even if IDAT is larger)
-g gamma (float or fixed*100000, e.g., 0.45455 or 45455)
-huffman (use only zlib strategy 2, Huffman-only)
-iccp length "Profile Name" iccp_file
-itxt b[efore_IDAT]|a[fter_IDAT] "keyword"
-keep chunk_name
-l zlib_compression_level [0-9] for specified method
-loco ("loco crush" truecolor PNGs)
-m method [1 through 150]
-max maximum_IDAT_size [default 524288L]
-mng (write a new MNG, do not crush embedded PNGs)
-n (no save; doesn't do compression or write output PNG)
-new (Use new default settings (-reduce))
-newtimestamp (Reset file modification time [default])
-nobail (do not bail out early from trial -- see "-bail")
-nocheck (do not check CRC and ADLER32 checksums)
-nofilecheck (do not check for infile.png == outfile.png)
-noforce (default; do not write output when IDAT is larger)
-nolimits (turns off limits on width, height, cache, malloc)
-noreduce (turns off all "-reduce" operations)
-noreduce_palette (turns off "-reduce_palette" operation)
-old (Use old default settings (no -reduce))
-oldtimestamp (Do not reset file modification time)
-ow (Overwrite)
-q (quiet) suppresses console output except for warnings
-reduce (do lossless color-type or bit-depth reduction)
-rem chunkname (or "alla" or "allb")
-replace_gamma gamma (float or fixed*100000) even if it is present.
-res resolution in dpi
-rle (use only zlib strategy 3, RLE-only)
-s (silent) suppresses console output including warnings
-save (keep all copy-unsafe PNG chunks)
-speed Avoid the AVG and PAETH filters, for decoding speed
-srgb [0, 1, 2, or 3]
-ster [0 or 1]
-text b[efore_IDAT]|a[fter_IDAT] "keyword" "text"
-trns_array n trns[0] trns[1] .. trns[n-1]
-trns index red green blue gray
-v (display more detailed information)
-version (display the pngcrush version)
-warn (only show warnings)
-w compression_window_size [32, 16, 8, 4, 2, 1, 512]
-z zlib_strategy [0, 1, 2, or 3] for specified method
-zmem zlib_compression_mem_level [1-9, default 9]
-zitxt b|a "keyword" "lcode" "tkey" "text"
-ztxt b[efore_IDAT]|a[fter_IDAT] "keyword" "text"
-h (help and legal notices)
-p (pause)

These notes were also printed with the program’s info output.

| Copyright (C) 1998-2002, 2006-2016 Glenn Randers-Pehrson
| Portions Copyright (C) 2005 Greg Roelofs
| This is a free, open-source program. Permission is irrevocably
| granted to everyone to use this version of pngcrush without
| payment of any fee.
| Executable name is pngcrush
| It was built with bundled libpng-1.6.28
| and is running with bundled libpng-1.6.28
| Copyright (C) 1998-2004, 2006-2016 Glenn Randers-Pehrson,
| Copyright (C) 1996, 1997 Andreas Dilger,
| Copyright (C) 1995, Guy Eric Schalnat, Group 42 Inc.,
| and bundled zlib-1.2.11, Copyright (C) 1995-2017,
| Jean-loup Gailly and Mark Adler,
| and using “clock()”.
| It was compiled with gcc version 4.2.1 Compatible Apple LLVM 8.0.0 (clang-800.0.42.1).

If you have modified this source, you may insert additional notices
immediately after this sentence.
Copyright (C) 1998-2002, 2006-2016 Glenn Randers-Pehrson
Portions Copyright (C) 2005 Greg Roelofs

DISCLAIMER: The pngcrush computer program is supplied “AS IS”.
The Author disclaims all warranties, expressed or implied, including,
without limitation, the warranties of merchantability and of fitness
for any purpose. The Author assumes no liability for direct, indirect,
incidental, special, exemplary, or consequential damages, which may
result from the use of the computer program, even if advised of the
possibility of such damage. There is no warranty against interference
with your enjoyment of the computer program or against infringement.
There is no warranty that my efforts or the computer program will
fulfill any of your particular purposes or needs. This computer
program is provided with all faults, and the entire risk of satisfactory
quality, performance, accuracy, and effort is with the user.

LICENSE: Permission is hereby irrevocably granted to everyone to use,
copy, modify, and distribute this computer program, or portions hereof,
for any purpose, without payment of any fee, subject to the following
restrictions:

1. The origin of this binary or source code must not be misrepresented.

2. Altered versions must be plainly marked as such and must not be
misrepresented as being the original binary or source.

3. The Copyright notice, disclaimer, and license may not be removed
or altered from any source, binary, or altered source distribution.

 

Jan-22 2017: Search Console performed an infrastructure update that may cause a change in your data.

Google Webmaster Tools performed an update on January 22nd which may cause anomalies in search data.

For my web properties, this resulted in a 50% drop in reported “Structured Data” elements in webmaster tools.

According to Google’s Data Anomalies reporting page, an event for January 22nd has not yet been reported.

 

Google Analytics “Redundant Hostnames”

As of October 14th, 2014, Google Analytics now warns users when redundant hostnames are sending hits to their analytics property.
Many users are now seeing the error: ‘You have 1 unresolved issue: Redundant Hostnames.’

Redundant Hostnames - Property example.com is receiving data from redundant hostnames.

This means that there is more than one domain that can be used to access a particular page. For example, the domain example.com has redundant hostnames because it is accessible from both www.example.com and example.com. This issue can also occur if your site is accessible by its IP address. For optimal user experience and SEO, webmasters should 301 redirect traffic to one consolidated domain.
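
If your site runs on Apache with mod_rewrite enabled, a minimal .htaccess sketch for consolidating www onto the bare domain might look like this (the domain and protocol are placeholders; flip the condition if you prefer the www version):

# 301-redirect www.example.com to example.com
RewriteEngine On
RewriteCond %{HTTP_HOST} ^www\.example\.com$ [NC]
RewriteRule ^(.*)$ https://example.com/$1 [R=301,L]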

In addition to providing 301 redirects, there are some best practices you can put into place to ensure your content is not duplicated across hosts.
The first is to add the following line to your robots.txt file:
Host: example.com
Replace example.com with your preferred host, be it www.yourdomain.com or just yourdomain.com.

Google Webmaster Tools also allows you to set a preferred hostname under “Site Settings”. This will ensure that your host is consistent across all traffic from Google. You must have both the www and non-www versions of your site verified in WMT in order to set this feature.