Monday 24 April 2017

Google Update - AdWords Shake-up - February 23, 2016

Google AdWords Switching to 4 Ads on Top, None on Sidebar

Google made major changes to AdWords, removing right-column ads entirely and rolling out 4-ad top blocks on many commercial searches. While this was a paid search update, it had significant implications for CTR for both paid and organic results, especially on competitive keywords.
Google is rolling out a change to AdWords that places 4 ads at the top of the search results, none in the sidebar at all, and an additional 3 ads at the bottom of the page. This replaces the previous mix of top, bottom and sidebar placements, which varied by search result.
Many of the ads do have additional features like sitelinks, but it is hard to tell whether those have increased or not.
It was huge news in December when Google began testing 4 ads at the top of the search results, and quite a controversial move for many. While advertisers loved it, SEOs weren't so happy, since it pushed the organic results even further down the page.
Google hasn't confirmed the change publicly yet, but multiple advertisers report that this is what their AdWords reps are telling them.

Latest Google SEO Updates & Algorithm Changes in 2017

The digital world is more dynamic, influential and competitive than ever before. To achieve high search engine rankings and maintain them, you have to follow the latest SEO updates. That is the first step towards keeping up with the latest SEO trends and staying competitive.
SEO updates track the search algorithm updates that search engines roll out. Since Google leads the search market, changes in Google's search algorithms matter most for optimizing your website. Webmasters need a solid understanding of all the latest algorithm updates and related procedures, as only this tells them which SEO changes are essential to optimize their websites, improve domain authority (DA) and earn high rankings in SERPs.
Broadly, Google is focused on improving its search services for online users, and by keeping track of Google search algorithm updates, marketers can improve the rankings of their sites. Google has a long history of well-known algorithm updates that shape how SERPs are ranked.
To find the latest Google SEO updates, marketers need to check the latest updates to the following search algorithms:

11 Google SEO Updates & Algorithm Changes in 2017

1.) Google Hummingbird Update

Introduced around August 2013, the Google Hummingbird Update is Google's rewritten core search algorithm, and it plays a significant role in deciding the ranking of websites. It works with the 200+ factors that can affect search results and website ranking. One of the biggest shifts that came with Hummingbird was a sharper focus on mobile search, which is not surprising given the explosion of smartphones in recent years. The name 'Hummingbird' comes from the algorithm being "precise and fast", and it is mainly designed to focus on the meaning of an entire phrase rather than on individual keywords. Hummingbird looks at the whole query to decipher its meaning, and Google Hummingbird SEO updates help pages matching that meaning do better in search results.

SEO new updates related to Hummingbird

  • Application of meaning technology to billions of pages from across the web
  • Use of Knowledge Graph facts to ensure better search results
  • Easy recognition of Keyword stuffing
  • Effectiveness of Long-tail keywords

2.)  Google Penguin Update

Google launched the Penguin Update in April 2012 to catch websites spamming Google's search results. The update is mainly aimed at decreasing the rankings of websites that violate Google's Webmaster Guidelines and use black-hat SEO techniques, such as obtaining or buying links through link schemes, to artificially inflate their rankings. The primary reason behind the update was to penalize websites that use manipulative techniques to achieve high rankings. Per Google's estimates, Penguin affected approximately 3.1% of search queries in English, approximately 3% of queries in languages like German, Arabic and Chinese, and an even bigger percentage in "highly spammed" language categories. Pre-Penguin, sites routinely used manipulative external link building tactics to rank well in SERPs and boost their traffic. Once Penguin arrived, content became vital: sites with great content would be recognized, and those with thin or spammy content would be punished.

Some confirmed Google Penguin SEO updates are

  • Penguin 1 - April 24, 2012 (impacting around 3.1% of queries)
  • Penguin 2 - May 26, 2012 (impacting less than 0.1% of queries)
  • Penguin 3 - October 5, 2012 (impacting around 0.3% of queries)
  • Penguin 4 (a.k.a. Penguin 2.0) - May 22, 2013 (impacting 2.3% of queries)
  • Penguin 5 (a.k.a. Penguin 2.1) - October 4, 2013 (impacting around 1% of queries)
  • Penguin 6 (a.k.a. Penguin 3.0) - October 17, 2014 (impacting less than 1% of English queries). On December 1, 2014, Google confirmed that the update was still rolling out, with webmasters continuing to report significant fluctuations.
  • Penguin 7 (a.k.a. Penguin 4.0) - September 23, 2016

3.) Google Panda Update

Google's Panda Update was introduced in February 2011 as a powerful search filter meant to stop sites with low-quality content from making their way into Google's top search results. Panda is refreshed from time to time: when that happens, sites previously hit may escape the filter if they have made the right improvements, and Panda can also catch sites that slipped through before. Panda is notable for affecting the ranking of an entire site, or a specific section of it, rather than just individual pages.

Some important SEO pointers based on the Google Panda Update are

  • No Multiple Pages with the Same Keyword
  • Get Rid of Auto-generated Content and Roundup/Comparison Type of Pages
  • No Pages with 1-2 Paragraphs of Text Only
  • No Scraped Content
  • Panda Likes New Content
  • Be Careful with Affiliate Links and Ads
  • Too Many Outbound Links with Keywords Are Bad

4.) Google Pigeon Update

Launched on July 24, 2014 for U.S. English results, the Google Pigeon Update is a search algorithm update introduced to give more useful, relevant and accurate local search results that are tied more closely to traditional web ranking signals. Google said the new algorithm improves its distance and location ranking parameters. The changes also affect the results shown in Google Maps, as the update lets Google return results based on the user's location and the listings at hand in the local directory. The main purpose of Pigeon is to give preference to local results in SERPs, which is why it is extremely beneficial for local businesses.

The latest SEO takeaways based on the Google Pigeon Update are

  • Location Matters More Than Ever
  • Don’t Over-Optimize Your Website
  • Strong Domains Matter more

5.) Google Mobile-Friendly Update

On April 21, 2015, Google introduced its mobile-friendly search algorithm, designed to give a boost to mobile-friendly pages in Google's mobile search results. The change was significant enough that the date it rolled out is referred to by a variety of names, such as Mobilegeddon, Mobilepocalyse, Mopocalypse or Mobocalypse. One of the best ways to prepare is to verify that Google considers your pages mobile-friendly by using its Mobile-Friendly Test tool. The update has been very effective in pushing mobile-friendliness to the center of SEO campaigns.

Latest Google Mobile-Friendly SEO updates are

  • Google mobile-friendly testing tool now has API access (see the sketch after this list)
  • Google may pick desktop over AMP page for the mobile-first index
  • Google begins mobile-first indexing, using mobile content for all search rankings
  • Google will show AMP URLs before App deep link URLs in mobile results
  • Google says page speed ranking factor to use mobile page speed for mobile sites
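
As a rough sketch of that API access, the request below uses Python with the third-party requests library against Google's mobile-friendly test endpoint. The endpoint path, the API key placeholder and the target URL are assumptions to verify against Google's current documentation:

import requests

API_KEY = "YOUR_API_KEY"  # hypothetical placeholder -- use your own key
ENDPOINT = ("https://searchconsole.googleapis.com/v1/"
            "urlTestingTools/mobileFriendlyTest:run")

response = requests.post(
    ENDPOINT,
    params={"key": API_KEY},
    json={"url": "https://www.example.com/"},  # the page you want tested
)
response.raise_for_status()
# The JSON payload reports e.g. MOBILE_FRIENDLY or NOT_MOBILE_FRIENDLY.
print(response.json().get("mobileFriendliness"))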

6.) Google Payday Update

Launched on June 11, 2013, the Google Payday Update was a new search algorithm aimed at cleaning up results for "spammy queries" such as payday loans, pornography and other heavily spammed searches. It can be understood as a set of algorithm updates designed to identify and penalize sites that use search engine spam techniques (also known as black-hat SEO or spamdexing) to improve their rankings for queries that are notoriously spammy.

Recent Google Payday updates are

  • Google Payday Loan 1.0
  • Google Payday Loan 2.0
  • Google Payday Loan 3.0

7.) Google Pirate Update

Introduced in August 2012, Google's Pirate Update is a filter that demotes sites with many copyright infringement reports, as filed through Google's DMCA system. The filter is periodically refreshed: when updates happen, websites previously affected may escape if they have made the right changes, new websites that had slipped through may get caught, and earlier 'false positives' may be released.
Some of the latest Google Pirate SEO updates are

  • The Pirate Update Penalized Websites That Received A High Volume Of Copyright Violation Reports
  • The Pirate Update Is A Win For Media And Artists
  • Getting A Page Removed From The Index Requires Valid Documentation

8.) Google EMD Update

Launched in September 2012, the EMD (Exact Match Domain) Update is a filter Google uses to prevent low-quality sites from ranking well merely because the words in their domain names match search terms. When a fresh EMD Update happens, sites that have improved their content may regain good rankings, while new sites with poor content, or sites the earlier EMD updates missed, may get caught. Likewise, "false positives" may be released.
According to Matt Cutts, "EMD is set to reduce low-quality 'exact-match' domains in search results." [Image: Google's EMD Algo Update - source: Moz]

9.) Google Top Heavy Update

The Google Top Heavy Update launched in January 2012 as a way to prevent sites that are "top heavy" with advertisements from ranking well in Google's search listings. Top Heavy is refreshed repeatedly: when an update occurs, websites that have removed excessive advertisements may regain their lost rankings, while new sites deemed "top heavy" may get caught.

Some of the Google Top Heavy SEO Updates

  • Google Updates Its Page Layout Algorithm To Go After Sites “Top Heavy” With Ads
  • Have The Same Ad-To-Organic Ratio As Google Search? Then You Might Be Safe From The Top Heavy Penalty
  • The Top Heavy Update: Pages With Too Many Ads Above The Fold Now Penalized By Google’s “Page Layout” Algorithm

10.) Google PageRank Update

If you do SEO or are involved with search marketing, you will surely come across Google PageRank eventually. PageRank is Google's system for counting link votes and determining which pages are most important based on them. These scores are then used, alongside many other factors, to determine whether a page will rank well in a search. However, some experts consider PageRank an outdated, deprecated metric and suggest marketers not waste time on it. Google released its last Toolbar PageRank update on December 5-6, 2013, and thereafter declared: "PageRank is something that we haven't updated for over a year now, and we're probably not going to be updating it again going forward, at least the Toolbar version."
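
For illustration only, here is a toy version of the published PageRank calculation in Python. It is a minimal sketch of the formula from the original paper (with the usual damping factor of 0.85), not Google's production system, and the three-page link graph is made up for the example:

def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    ranks = {page: 1.0 / n for page in pages}
    for _ in range(iterations):
        new_ranks = {}
        for page in pages:
            # Every page linking to `page` passes on an equal share of its rank.
            inbound = sum(ranks[src] / len(outs)
                          for src, outs in links.items() if page in outs)
            new_ranks[page] = (1 - damping) / n + damping * inbound
        ranks = new_ranks
    return ranks

# B and C both "vote" for A, so A ends up with the highest score.
print(pagerank({"A": ["B"], "B": ["A"], "C": ["A"]}))

This simplified sketch ignores dangling pages with no outgoing links, which the full algorithm has to handle.
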
Some of the Toolbar PageRank updates that shaped SEO news are:
  • Toolbar PageRank update released on December 5-6, 2013 (the last PageRank update ever)
  • Toolbar PageRank update released on February 4, 2013
  • Toolbar PageRank update released on November 7, 2012
  • Toolbar PageRank update released on August 2, 2012
  • Toolbar PageRank update released on May 2, 2012
  • Toolbar PageRank update released on February 7, 2012
  • Toolbar PageRank update released on November 7, 2011
  • Toolbar PageRank update released in the first week of August 2011
  • Toolbar PageRank update released in July 2011

11.) Google Fred Update

While Google remains vague about what this update specifically targeted, numerous SEOs report that Fred, which rolled out around March 2017, affected low-quality, ad-heavy sites with thin content and bad backlinks.

Error Code List


  1. 1xx Informational Error Code List
  2. 2xx Success Error Code List
  3. 3xx Redirection Error Code List
  4. 4xx Client Error Code List
  5. 5xx Server Error Code List
  6. Unofficial Error Code List
  7. Internet Information Services Error Code List
  8. Nginx Error Code List
  9. CloudFlare Error Code List

1xx Informational Error Code List

This class of status code indicates a provisional response, consisting only of the Status-Line and optional headers, and terminated by an empty line.
Since HTTP/1.0 did not define any 1xx status codes, servers must not send a 1xx response to an HTTP/1.0 client except under experimental conditions.

2xx Success Error Code List

This class of status codes indicates that the action requested by the client was received, understood and accepted.
200 – OK
201 – Created
202 – Accepted
203 – Non-Authoritative Information (since HTTP/1.1)
204 – No Content
205 – Reset Content
206 – Partial Content (RFC 7233)
207 – Multi-Status (WebDAV; RFC 4918)
208 – Already Reported (WebDAV; RFC 5842)
226 – IM Used (RFC 3229)
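
To see the difference between these success codes in practice, here is a small Python sketch; it assumes the third-party requests library and the public httpbin.org testing service:

import requests

# A plain GET normally returns 200 OK with a response body.
r = requests.get("https://httpbin.org/get")
print(r.status_code, r.reason)        # 200 OK

# httpbin can echo back any status, e.g. 204 No Content:
# the request succeeded, but there is deliberately no body.
r = requests.get("https://httpbin.org/status/204")
print(r.status_code, len(r.content))  # 204 0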

3xx Redirection Error Code List

This class of status code indicates that the client must take additional action to complete the request. Many of these status codes are used in URL redirection. A user agent may carry out the additional action without user interaction only if the method used in the second request is GET or HEAD. A user agent may automatically redirect a request, but it should detect and intervene to prevent cyclical redirects.
300 – Multiple Choices
301 – Moved Permanently
302 – Found
303 – See Other (since HTTP/1.1)
304 – Not Modified (RFC 7232)
305 – Use Proxy (since HTTP/1.1)
306 – Switch Proxy
307 – Temporary Redirect (since HTTP/1.1)
308 – Permanent Redirect (RFC 7538)
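
The sketch below (same assumptions: Python with the requests library, plus httpbin.org) shows a client following a redirect and keeping the intermediate 3xx hops:

import requests

# httpbin.org/redirect/1 answers with a 302 pointing at /get.
r = requests.get("https://httpbin.org/redirect/1")

# Because the method is GET, requests follows the redirect automatically;
# the intermediate 3xx responses are preserved in r.history.
for hop in r.history:
    print(hop.status_code, hop.headers.get("Location"))
print("final:", r.status_code, r.url)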

4xx Client Error Code List

This class of status code is reserved for conditions in which the client seems to have erred.
400 – Bad Request
401 – Unauthorized (RFC 7235)
402 – Payment Required
403 – Forbidden
404 – Not Found
405 – Method Not Allowed
406 – Not Acceptable
407 – Proxy Authentication Required (RFC 7235)
408 – Request Timeout
409 – Conflict
410 – Gone
411 – Length Required
412 – Precondition Failed (RFC 7232)
413 – Payload Too Large (RFC 7231)
414 – URI Too Long (RFC 7231)
415 – Unsupported Media Type
416 – Range Not Satisfiable (RFC 7233)
417 – Expectation Failed
418 – I’m a teapot (RFC 2324)
421 – Misdirected Request (RFC 7540)
422 – Unprocessable Entity (WebDAV; RFC 4918)
423 – Locked (WebDAV; RFC 4918)
424 – Failed Dependency (WebDAV; RFC 4918)
426 – Upgrade Required
428 – Precondition Required (RFC 6585)
429 – Too Many Requests (RFC 6585)
431 – Request Header Fields Too Large (RFC 6585)
451 – Unavailable For Legal Reasons
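
Client errors usually mean the request itself must change, so blind retries rarely help. A short sketch under the same requests/httpbin.org assumptions:

import requests

r = requests.get("https://httpbin.org/status/404")
if 400 <= r.status_code < 500:
    # 4xx: the request was at fault -- fix it rather than retrying as-is.
    print("client error:", r.status_code, r.reason)

# 429 Too Many Requests often carries a Retry-After header saying when to
# try again (httpbin's bare 429 may omit it, in which case this prints None).
r = requests.get("https://httpbin.org/status/429")
print(r.status_code, r.headers.get("Retry-After"))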

5xx Server Error Code List

This class of status code indicates that the server failed to fulfil an apparently valid request. These response codes apply to any request method.
500 – Internal Server Error
501 – Not Implemented
502 – Bad Gateway
503 – Service Unavailable
504 – Gateway Timeout
505 – HTTP Version Not Supported
506 – Variant Also Negotiates (RFC 2295)
507 – Insufficient Storage (WebDAV; RFC 4918)
508 – Loop Detected (WebDAV; RFC 5842)
510 – Not Extended (RFC 2774)
511 – Network Authentication Required (RFC 6585)
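
Since 5xx codes signal a server-side failure, a client can reasonably retry with backoff; a minimal sketch under the same requests/httpbin.org assumptions:

import time

import requests

# Retry up to 3 times, backing off exponentially after each failed attempt.
r = None
for attempt in range(3):
    r = requests.get("https://httpbin.org/status/503")
    if r.status_code < 500:
        break
    time.sleep(2 ** attempt)  # 1s, 2s, 4s
print(r.status_code)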

Unofficial Error Code List

These codes are not defined in any RFC, but are used by various third-party services and software to signal their own conditions.
103 – Checkpoint
420 – Method Failure (Spring Framework)
420 – Enhance Your Calm (Twitter)
450 – Blocked by Windows Parental Controls (Microsoft)
498 – Invalid Token (Esri)
499 – Token Required (Esri)
499 – Request forbidden by antivirus
509 – Bandwidth Limit Exceeded (Apache Web Server/cPanel)
530 – Site is frozen

Internet Information Services Error Code List

Microsoft's Internet Information Services (IIS) web server expands the 4xx error space to signal errors with the client's request.
449 – Retry With
451 – Redirect

Nginx Error Code List

The nginx web server software extends the 4xx error space to indicate issues with the client's request. These codes are used only for logging purposes; no actual response is sent with them.
444 – No Response (nginx closes the connection without sending anything; the code appears only in the logs)
495 – SSL Certificate Error
496 – SSL Certificate Required
497 – HTTP Request Sent to HTTPS Port
499 – Client Closed Request

CloudFlare Error Code List

CloudFlare's reverse proxy service expands the 5xx error space to signal issues with the origin server.
520 – Unknown Error
521 – Web Server Is Down
522 – Connection Timed Out
523 – Origin Is Unreachable
524 – A Timeout Occurred
525 – SSL Handshake Failed
526 – Invalid SSL Certificate

Wednesday 19 April 2017

Which search engine supports which robots meta tag values?

This table shows which search engines support which values:
Robots value    Google    Yahoo!    MSN / Live    Ask
index           Yes       Yes       Yes           Yes
noindex         Yes       Yes       Yes           Yes
none            Yes       Doubt     Doubt         Yes
follow          Yes       Doubt     Doubt         Yes
nofollow        Yes       Yes       Yes           Yes
noarchive       Yes       Yes       Yes           Yes
nosnippet       Yes       No        No            No
noodp           Yes       Yes       Yes           No
noydir          No use    Yes       No use        No use

The different robots meta tag values

An explanation of all the different values you can use in the robots meta tags:
index
Allows search engine robots to index the page. You don't have to add this to your pages, as it's the default.
noindex
Disallow search engines from showing this page in their results.
noimageindex
Disallow search engines from spidering images on that page. Of course, if images are linked to directly from elsewhere, Google can still index them, so using an X-Robots-Tag HTTP header is a better idea.
none
This is a shortcut for noindex,nofollow; it basically tells search engines: don't do anything with this page at all.
follow
Tells the search engine robots to follow the links on the page, whether it can index it or not.
nofollow
Tells the search engine robots not to follow any links on the page at all.
noarchive
Prevents the search engines from showing a cached copy of this page.
nocache
Same as noarchive, but only used by MSN/Live.
nosnippet
Prevents the search engines from showing a snippet of this page in the search results and prevents them from caching the page.
noodp
Used to block search engines from using this page's description in DMOZ (a.k.a. ODP) as the snippet for your page in the search results. However, DMOZ doesn't exist anymore, so the tag is obsolete.
noydir
Blocks Yahoo! from using this page's description in the Yahoo! Directory as the snippet for your page in the search results. No other search engines use the Yahoo! Directory for this purpose, so they don't support the tag. Since Yahoo! closed its directory, this tag is deprecated, but you might still come across it once in a while.
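
To check which of these directives a live page actually carries, the sketch below uses Python with the third-party requests and beautifulsoup4 packages (tooling assumptions of ours, not something the search engines prescribe); the URL is hypothetical:

import requests
from bs4 import BeautifulSoup  # pip install beautifulsoup4

url = "https://www.example.com/"  # page to inspect
r = requests.get(url)

# Directives can arrive as an HTTP header...
print("X-Robots-Tag header:", r.headers.get("X-Robots-Tag"))

# ...or as a robots metatag inside the page's <head>.
soup = BeautifulSoup(r.text, "html.parser")
tag = soup.find("meta", attrs={"name": "robots"})
print("robots metatag:", tag.get("content") if tag else None)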

The resources from the search engines

The search engines themselves have documentation pages about this subject as well.
You can address all robots at once by using "robots" as the name attribute of the meta tag, or target just one engine's robot by using its specific robot name (listed below) instead:
Google – GOOGLEBOT
Yahoo! – SLURP
MSN / Live – MSNBOT
Ask – TEOMA


Monday 17 April 2017

Using Noindex, Nofollow HTML Metatags: How to Tell Google Not to Index a Page in Search

Indexing as many pages on your website as possible can be very tempting for marketers who are trying to boost their search engine authority.
But, while it’s true that publishing more pages that are relevant for a particular keyword (assuming they’re also high quality) will improve your ranking for that keyword, sometimes there’s actually more value in keeping certain pages on your website out of a search engine’s index.



... Say what?! 
Stay with us, folks. This post will walk you through why you might want to remove certain webpages from the SERPS (search engine results pages), and exactly how to go about doing it. 

Why You'd Want to Exclude Certain Web Pages From Search Results

There are a number of occasions where you may want to exclude a webpage -- or a portion of a webpage -- from search engine crawling and indexing.
For marketers, one common reason is to prevent duplicate content (when there is more than one version of a page indexed by the search engines, as in a printer-friendly version of your content) from being indexed.
Another good example? A thank-you page (i.e., the page a visitor lands on after converting on one of your landing pages). This is usually where the visitor gets access to whatever offer that landing page promised, such as a link to an ebook PDF.
Here's what the thank-you page for our SEO tips ebook looks like, for example:
[Image: the thank-you page for HubSpot's SEO tips ebook]
You want anyone who lands on your thank-you pages to get there because they've already filled out a form on a landing page -- not because they found your thank-you page in search.
Why not? Because anyone who finds your thank-you page in search can access your lead-generating offers directly -- without having to provide you with their information to pass through your lead-capture form. Any marketer who understands the value of landing pages understands how important it is to capture those visitors as leads first, before they can access your offers.
Bottom line: If your thank-you pages are easily discoverable through a simple Google search, you may be leaving valuable leads on the table.
What's worse, you may even find that some of your highest ranking pages for some of your long-tail keywords might be your thank-you pages -- which means you could be inviting hundreds of potential leads to bypass your lead-capture forms. That's a pretty compelling reason why you'd want to remove some of your web pages from SERPs.
So, how do you go about "de-indexing" certain pages from search engines? Here are two ways to do it.

2 Ways to De-Index a Webpage From Search Engines

Option #1: Add a Robots.txt file to your site.

Use if: You want more control over what you de-index, and you have the necessary technical resources.

One way to remove a page from search engine results is by adding a robots.txt file to your site. The advantage of using this method is that you can get more control over what you are allowing bots to index. The result? You can proactively keep unwanted content out of the search results.
Within a robots.txt file, you can specify whether you’d like to block bots from a single page, a whole directory, or even just a single image or file. There’s also an option to prevent your site from being crawled while still enabling Google AdSense ads to work, if you have them.
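As a rough illustration of that flexibility, the sketch below uses Python's built-in urllib.robotparser to test rules blocking a whole directory and a single page; the file contents and URLs are made up for the example:

from urllib import robotparser

# A hypothetical robots.txt: block one directory and one specific page.
sample = """\
User-agent: *
Disallow: /thank-you/
Disallow: /print/page.html
"""

rp = robotparser.RobotFileParser()
rp.parse(sample.splitlines())

print(rp.can_fetch("*", "https://www.example.com/thank-you/ebook"))  # False
print(rp.can_fetch("*", "https://www.example.com/blog/post"))        # True
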
That being said, of the two options available to you, this one requires the most technical kung fu. To learn about how to create a robots.txt file, you'll want to read through this article from Google Webmaster Tools.
HubSpot customers: You can learn how to install a robots.txt file on your website, and how to customize its contents, in HubSpot's help documentation.
If you don’t need all the control of a robots.txt file and are looking for an easier, less technical solution, then this second option is for you.

Option #2: Add a "noindex" metatag and/or a "nofollow" metatag.

Use if: You want an easier solution to de-indexing an entire webpage, and/or de-indexing the links on an entire webpage.

Using a metatag to prevent a page from appearing in SERPs -- and/or the links on a page -- is both easy and effective. It requires only a tiny bit of technical know-how -- in fact, it's really just a copy/paste job if you’re using the right content management system.
The tags that let you do these things are called "noindex" and "nofollow." Before I get into how to add these tags, let's take a moment to define and distinguish between the two. They are, after all, two completely different directives -- and they can be used either on their own, or alongside one another.

What's a "noindex" tag?

When you add a "noindex" metatag to a webpage, it tells a search engine that even though it can crawl the page, it cannot add the page into its search index.
So any page with the "noindex" directive on it will not go into the search engine's search index, and can therefore not be shown in search engine results pages.

What's a "nofollow" tag?

When you add a "nofollow" metatag to a webpage, it disallows search engines from crawling the links on that page. This also means that any ranking authority the page has on SERPs will not be passed on to pages it links to.
So any page with a "nofollow" directive on it will have all its links ignored by Google and other search engines.

When would you use "noindex" and "nofollow" separately vs. together?

Like I said before, you can add a "noindex" directive either on its own or together with a "nofollow" directive. A "nofollow" directive can also be added on its own.
Add only a "noindex" tag: when you don't want a search engine to index your web page in search, but you do want it to follow the links on that page -- thereby giving ranking authority to the other pages your page links to.
Paid landing pages are a great example of this. You don't want search engines to index landing pages that people are supposed to pay to see, but you might want the pages they link to to benefit from their authority.
Add only a "nofollow" tag: when you do want a search engine to index your web page in search, but you don't want it to follow the links on that page.
There aren't too many examples of when you'd add a "nofollow" tag to a whole page without also adding a "noindex" tag. When you're figuring out what to do on a given page, it's more a question of whether to add your "noindex" tag with or without a "nofollow" tag.
Add both a "noindex" and "nofollow" tag: when you don't want search engines to index a webpage in search, and you don't want it to follow the links on that page.
Thank-you pages are a great example of this type of situation. You don't want search engines to index your thank-you page, nor do you want them to follow the link to your offer and start indexing the content of that offer, either. 

How to Add a "noindex" and/or a "nofollow" metatag

Step 1: Copy one of the following tags.
For "noindex":
<META NAME="robots" CONTENT="noindex">
For "nofollow":
<META NAME="robots" CONTENT="nofollow">
For both "noindex" and "nofollow":
<META NAME="robots" CONTENT="noindex,nofollow">
Step 2: Add the tag to the <head> section of your page's HTML, a.k.a. the page's header.
If you're a HubSpot customer, this is super easy -- scroll down for the instructions specific to HubSpot users.
If you're not a HubSpot customer, then you'll have to paste this tag into the code on your webpage manually. Don't worry -- it's pretty simple. Here's how you do it.
First, open the source code of the web page you're trying to de-index. Then, paste the full tag into a new line within the <head> section of your page’s HTML, known as the page’s header. The screenshots below will walk you through it.
The <head> tag signifies the beginning of your header: 
[Image: the opening <head> tag in the page's source code]
Here's the metatag for both "noindex" and "nofollow" pasted within the header:
[Image: the "noindex,nofollow" metatag pasted within the header]
And the </head> tag signifies the end of the header:
[Image: the closing </head> tag]
Boom! That’s it. This tag tells a search engine to turn around and go away, leaving the page out of any search results.
HubSpot customers: Adding the "noindex" and "nofollow" metatags is even easier. All you have to do is open the HubSpot tool to the page you want to add these tags to, and choose the "Settings" tab.
[Image: the Settings tab in the HubSpot page editor]
Next, scroll down to Advanced Options and click "Edit Head HTML." In the window that appears, paste the appropriate code snippet. In the example below, I've added both a "noindex" and a "nofollow" tag since it's a thank-you page.
[Image: the "noindex" and "nofollow" metatags pasted into the Edit Head HTML window]

Press "Save," and you're golden.