Monday, 24 April 2017

Error Code List

  1. 1xx Informational Error Code List
  2. 2xx Success Error Code List
  3. 3xx Redirection Error Code List
  4. 4xx Client Error Code List
  5. 5xx Server Error Code List
  6. Unofficial Error Code List

1xx Informational Error Code List

This class of status code indicates a provisional response, consisting only of the Status-Line and optional headers, and terminated by an empty line.
Since HTTP/1.0 did not define any 1xx status codes, servers must not send a 1xx response to an HTTP/1.0 client except under experimental conditions.
100 – Continue
101 – Switching Protocols
102 – Processing (WebDAV; RFC 2518)
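To illustrate, the most common 1xx exchange is a client asking permission before sending a large request body; a minimal sketch of the Expect/100-continue handshake (the host and content length are hypothetical):
POST /upload HTTP/1.1
Host: example.com
Content-Length: 348942
Expect: 100-continue

HTTP/1.1 100 Continue
(the client now sends the request body)
HTTP/1.1 200 OK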

2xx Success Error Code List

This class of status codes means the action requested by the client was received, understood and processed successfully.
200 – OK
201 – Created
202 – Accepted
203 – Non-Authoritative Information (since HTTP/1.1)
204 – No Content
205 – Reset Content
206 – Partial Content (RFC 7233)
207 – Multi-Status (WebDAV; RFC 4918)
208 – Already Reported (WebDAV; RFC 5842)
226 – IM Used (RFC 3229)
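As an example of how these codes appear on the wire, 206 is the response to a byte-range request; a minimal sketch (the URL and byte counts are hypothetical):
GET /video.mp4 HTTP/1.1
Host: example.com
Range: bytes=0-1023

HTTP/1.1 206 Partial Content
Content-Range: bytes 0-1023/146515
Content-Length: 1024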

3xx Redirection Error Code List

This class of status code means the client must take additional action to complete the request. Several of these status codes are used in URL redirection. A user agent may carry out the further action with no user interaction only if the method utilised in the second request is GET or HEAD. A user agent may automatically redirect a request, but it should detect and intervene to prevent cyclical redirects.
300 – Multiple Choices
301 – Moved Permanently
302 – Found
303 – See Other (since HTTP/1.1)
304 – Not Modified (RFC 7232)
305 – Use Proxy (since HTTP/1.1)
306 – Switch Proxy
307 – Temporary Redirect (since HTTP/1.1)
308 – Permanent Redirect (RFC 7538)
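For example, a redirect response carries the new URL in a Location header, which browsers follow automatically and search engines use to update their references; a sketch with hypothetical URLs:
GET /old-page HTTP/1.1
Host: example.com

HTTP/1.1 301 Moved Permanently
Location: https://example.com/new-page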

4xx Client Error Code List

This class of status code is reserved for conditions in which the client seems to have erred.
400 – Bad Request
401 – Unauthorized (RFC 7235)
402 – Payment Required
403 – Forbidden
404 – Not Found
405 – Method Not Allowed
406 – Not Acceptable
407 – Proxy Authentication Required (RFC 7235)
408 – Request Timeout
409 – Conflict
410 – Gone
411 – Length Required
412 – Precondition Failed (RFC 7232)
413 – Payload Too Large (RFC 7231)
414 – URI Too Long (RFC 7231)
415 – Unsupported Media Type
416 – Range Not Satisfiable (RFC 7233)
417 – Expectation Failed
418 – I’m a teapot (RFC 2324)
421 – Misdirected Request (RFC 7540)
422 – Unprocessable Entity (WebDAV; RFC 4918)
423 – Locked (WebDAV; RFC 4918)
424 – Failed Dependency (WebDAV; RFC 4918)
426 – Upgrade Required
428 – Precondition Required (RFC 6585)
429 – Too Many Requests (RFC 6585)
431 – Request Header Fields Too Large (RFC 6585)
451 – Unavailable For Legal Reasons
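To show what a 4xx response looks like in practice, a 401 must include a WWW-Authenticate header telling the client how to authenticate; a minimal sketch (the realm name is hypothetical):
GET /private HTTP/1.1
Host: example.com

HTTP/1.1 401 Unauthorized
WWW-Authenticate: Basic realm="Restricted Area"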

5xx Server Error Code List

The server failed to fulfil an apparently valid request. These response codes apply to any request method.
500 – Internal Server Error
501 – Not Implemented
502 – Bad Gateway
503 – Service Unavailable
504 – Gateway Timeout
505 – HTTP Version Not Supported
506 – Variant Also Negotiates (RFC 2295)
507 – Insufficient Storage (WebDAV; RFC 4918)
508 – Loop Detected (WebDAV; RFC 5842)
510 – Not Extended (RFC 2774)
511 – Network Authentication Required (RFC 6585)
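As a brief example, a 503 response can include a Retry-After header telling clients how long to wait before retrying; a minimal sketch (the delay value is hypothetical):
HTTP/1.1 503 Service Unavailable
Retry-After: 120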

Unofficial Error Code List

These codes are not defined in any RFC but are used by third-party services to provide RESTful error responses.
103 – Checkpoint
420 – Method Failure (Spring Framework)
420 – Enhance Your Calm (Twitter)
450 – Blocked by Windows Parental Controls (Microsoft)
498 – Invalid Token (Esri)
499 – Token Required (Esri)
499 – Request forbidden by antivirus
509 – Bandwidth Limit Exceeded (Apache Web Server/cPanel)
530 – Site is frozen

Internet Information Services Error Code List

Microsoft's Internet Information Services (IIS) expands the 4xx error space to signal errors with the client's request.
449 – Retry With
451 – Redirect

Nginx Error Code List

The nginx web server software extends the 4xx error space to indicate issues with the client's request. These codes are used for logging purposes only; no actual response is sent with these codes.
444 – No Response
495 – SSL Certificate Error
496 – SSL Certificate Required
497 – HTTP Request Sent to HTTPS Port
499 – Client Closed Request
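For instance, 444 is commonly set explicitly in an nginx configuration to drop unwanted requests without answering them; a minimal sketch (the server_name and location path are hypothetical):
server {
    listen 80;
    server_name example.com;

    # Requests to this path are dropped: nginx closes the
    # connection without sending any response (code 444).
    location /blocked/ {
        return 444;
    }
}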

CloudFlare Error Code List

CloudFlare’s reverse proxy service expands the 5xx series of errors space to signal issues with the origin server.
520 – Unknown Error
521 – Web Server Is Down
522 – Connection Timed Out
523 – Origin Is Unreachable
524 – A Timeout Occurred
525 – SSL Handshake Failed
526 – Invalid SSL Certificate

Wednesday, 19 April 2017

Which search engine supports which robots meta tag values?

This table shows which search engines support which values:
Robots value   Google   Yahoo!   MSN / Live   Ask
index          Yes      Yes      Yes          Yes
noindex        Yes      Yes      Yes          Yes
none           Yes      Doubt    Doubt        Yes
follow         Yes      Doubt    Doubt        Yes
nofollow       Yes      Yes      Yes          Yes
noarchive      Yes      Yes      Yes          Yes
nosnippet      Yes      No       No           No
noodp          Yes      Yes      Yes          No
noydir         No use   Yes      No use       No use

The different robots meta tag values

An explanation of all the different values you can use in the robots meta tags:
index
Allows search engine robots to index the page. You don't have to add this to your pages, as it's the default.
noindex
Disallow search engines from showing this page in their results.
noimageindex
Disallow search engines from spidering images on that page. Of course, if images are linked to directly from elsewhere, Google can still index them, so using an X-Robots-Tag HTTP header is a better idea (see the sketch after this list).
none

This is a shortcut for noindex,nofollow, or basically saying to search engines: don’t do anything with this page at all.
follow
Tells search engine robots to follow the links on the page, whether or not they can index it.
nofollow
Tells search engine robots not to follow any links on the page at all.
noarchive
Prevents the search engines from showing a cached copy of this page.
nocache
Same as noarchive, but only used by MSN/Live.
nosnippet
Prevents the search engines from showing a snippet of this page in the search results and prevents them from caching the page.
noodp
Used to block search engines from using the description for this page in DMOZ (aka ODP) as the snippet for your page in the search results. However, DMOZ doesn’t exist anymore.
noydir
Blocks Yahoo! from using the description for this page in the Yahoo! directory as the snippet for your page in the search results. No other search engines use the Yahoo! directory for this purpose, so they don't support the tag. Since Yahoo! closed its directory, this tag is deprecated, but you might come across it once in a while.
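Most of these values can be combined in a single tag, and the same directives can also be sent as an HTTP response header for non-HTML files such as images and PDFs; a minimal sketch (the particular values shown are just examples):
In the page's HTML:
<meta name="robots" content="noindex, noarchive">
Or as an HTTP response header:
X-Robots-Tag: noindex, noimageindex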

The resources from the search engines

The search engines themselves have pages about this subject as well:
You can block all robots at once with the generic robots meta tag, or block just one robot by using its specific name in the tag, as shown in the sketch below:
Google – GOOGLEBOT
Yahoo! – SLURP
MSN / Live – MSNBOT
Ask – TEOMA
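A minimal sketch of both forms, here targeting Google's robot (the noindex directive is just an example):
<meta name="robots" content="noindex">
<meta name="googlebot" content="noindex">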


Monday, 17 April 2017

Using Noindex, Nofollow HTML Metatags: How to Tell Google Not to Index a Page in Search

Indexing as many pages on your website as possible can be very tempting for marketers who are trying to boost their search engine authority.
But, while it’s true that publishing more pages that are relevant for a particular keyword (assuming they’re also high quality) will improve your ranking for that keyword, sometimes there’s actually more value in keeping certain pages on your website out of a search engine’s index.

... Say what?! 
Stay with us, folks. This post will walk you through why you might want to remove certain webpages from the SERPS (search engine results pages), and exactly how to go about doing it. 

Why You'd Want to Exclude Certain Web Pages From Search Results

There are a number of occasions where you may want to exclude a webpage -- or a portion of a webpage -- from search engine crawling and indexing.
For marketers, one common reason is to prevent duplicate content (when there is more than one version of a page indexed by the search engines, as in a printer-friendly version of your content) from being indexed.
Another good example? A thank-you page (i.e., the page a visitor lands on after converting on one of your landing pages). This is usually where the visitor gets access to whatever offer that landing page promised, such as a link to an ebook PDF.
Here's what the thank-you page for our SEO tips ebook looks like, for example:
[Screenshot: the thank-you page for the SEO tips ebook]
You want anyone who lands on your thank-you pages to get there because they've already filled out a form on a landing page -- not because they found your thank-you page in search.
Why not? Because anyone who finds your thank-you page in search can access your lead-generating offers directly -- without having to provide you with their information to pass through your lead-capture form. Any marketer who understands the value of landing pages understands how important it is to capture those visitors as leads first, before they can access your offers.
Bottom line: If your thank-you pages are easily discoverable through a simple Google search, you may be leaving valuable leads on the table.
What's worse, you may even find that some of your highest ranking pages for some of your long-tail keywords might be your thank-you pages -- which means you could be inviting hundreds of potential leads to bypass your lead-capture forms. That's a pretty compelling reason why you'd want to remove some of your web pages from SERPs.
So, how do you go about "de-indexing" certain pages from search engines? Here are two ways to do it.

2 Ways to De-Index a Webpage From Search Engines

Option #1: Add a Robots.txt file to your site.

Use if: You want more control over what you de-index, and you have the necessary technical resources.

One way to remove a page from search engine results is by adding a robots.txt file to your site. The advantage of using this method is that you can get more control over what you are allowing bots to index. The result? You can proactively keep unwanted content out of the search results.
Within a robots.txt file, you can specify whether you’d like to block bots from a single page, a whole directory, or even just a single image or file. There’s also an option to prevent your site from being crawled while still enabling Google AdSense ads to work, if you have them.
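For instance, here is a robots.txt sketch that blocks a whole directory, a single page, and one image for all crawlers, while still letting Google's AdSense crawler (Mediapartners-Google) through; all paths are hypothetical:
User-agent: *
Disallow: /thank-you/
Disallow: /print/landing-page.html
Disallow: /images/ebook-cover.png

User-agent: Mediapartners-Google
Allow: /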
That being said, of the two options available to you, this one requires the most technical kung fu. To learn about how to create a robots.txt file, you'll want to read through this article from Google Webmaster Tools.
HubSpot customers: You can learn how to install a robots.txt file on your website here and learn how to customize the contents of the robots.txt file here.
If you don’t need all the control of a robots.txt file and are looking for an easier, less technical solution, then this second option is for you.

Option #2: Add a "noindex" metatag and/or a "nofollow" metatag.

Use if: You want an easier solution to de-indexing an entire webpage, and/or de-indexing the links on an entire webpage.

Using a metatag to prevent a page from appearing in SERPs -- and/or the links on a page -- is both easy and effective. It requires only a tiny bit of technical know-how -- in fact, it's really just a copy/paste job if you’re using the right content management system.
The tags that let you do these things are called "noindex" and "nofollow." Before I get into how to add these tags, let's take a moment to define and distinguish between the two. They are, after all, two completely different directives -- and they can be used either on their own, or alongside one another.

What's a "noindex" tag?

When you add a "noindex" metatag to a webpage, it tells a search engine that even though it can crawl the page, it cannot add the page into its search index.
So any page with the "noindex" directive on it will not go into the search engine's search index, and can therefore not be shown in search engine results pages.

What's a "nofollow" tag?

When you add a "nofollow" metatag to a webpage, it disallows search engines from crawling the links on that page. This also means that any ranking authority the page has on SERPs will not be passed on to pages it links to.
So any page with a "nofollow" directive on it will have all its links ignored by Google and other search engines.

When would you use "noindex" and "nofollow" separately vs. together?

Like I said before, you can add a "noindex" directive either on its own, or together with a "nofollow" directive. You can add a "nofollow" directive on its own, too.
Add only a "noindex" tag: when you don't want a search engine to index your web page in search, but you do want it to follow the links on that page -- thereby giving ranking authority to the other pages your page links to.
Paid landing pages are a great example of this. You don't want search engines to index landing pages that people are supposed to pay to see, but you might still want their ranking authority to flow to the pages they link to.
Add only a "nofollow" tag: when you do want a search engine to index your web page in search, but you don't want it to follow the links on that page.
There aren't too many examples of when you'd add a "nofollow" tag to a whole page without also adding a "noindex" tag. When you're figuring out what to do on a given page, it's more a question of whether to add your "noindex" tag with or without a "nofollow" tag.
Add both a "noindex" and "nofollow" tag: when you don't want search engines to index a webpage in search, and you don't want it to follow the links on that page.
Thank-you pages are a great example of this type of situation. You don't want search engines to index your thank-you page, nor do you want them to follow the link to your offer and start indexing the content of that offer, either. 

How to Add a "noindex" and/or a "nofollow" metatag

Step 1: Copy one of the following tags.
For "noindex":
<META NAME="robots" CONTENT="noindex">
For "nofollow":
<META NAME="robots" CONTENT="nofollow">
For both "noindex" and "nofollow":
<META NAME="robots" CONTENT="noindex,nofollow">
Step 2: Add the tag to the <head> section of your page's HTML, a.k.a. the page's header.
If you're a HubSpot customer, this is super easy -- click here or scroll down for those instructions specific to HubSpot users.
If you're not a HubSpot customer, then you'll have to paste this tag into the code on your webpage manually. Don't worry -- it's pretty simple. Here's how you do it.
First, open the source code of the web page you're trying to de-index. Then, paste the full tag into a new line within the <head> section of your page's HTML, known as the page's header. The <head> tag signifies the beginning of your header and the </head> tag signifies its end; the sketch below shows the metatag in place between them.
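A minimal sketch of a page header with both directives in place (the title and page content are hypothetical):
<!DOCTYPE html>
<html>
<head>
  <title>Thank You | Example Site</title>
  <!-- Tells search engines: don't index this page, don't follow its links -->
  <meta name="robots" content="noindex,nofollow">
</head>
<body>
  <!-- page content -->
</body>
</html>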
Boom! That’s it. This tag tells a search engine to turn around and go away, leaving the page out of any search results.
HubSpot customers: Adding the "noindex" and "nofollow" metatags is even easier. All you have to do is open the HubSpot tool to the page you want to add these tags to, and choose the "Settings" tab.
Next, scroll down to Advanced Options and click "Edit Head HTML." In the window that appears, paste the appropriate code snippet. In the example below, I've added both a "noindex" and a "nofollow" tag since it's a thank-you page.

Press "Save," and you're golden.

Friday, 24 March 2017

Google Maps latest update encourages the use of Local SEO strategies

Google Maps is automatically recommending the best local businesses to consumers. Without a local SEO strategy, consumers will be choosing your competitors every day!
Google Maps has over 1 billion users, and Apple Maps receives over 5 billion map-related requests every week. Could you even imagine a time when people were forced to use paper maps? The amount of trust consumers place in mapping applications to help them find new places (unbranded searches like “restaurants near me”) and take them where they need to go is amazing. Today’s customer surrenders the decision-making process to the opinion of a faceless, impersonal application. What does all this mean? Leveraging today’s search engines and creating effective local SEO strategies will allow businesses to organically own their market.
86% of consumers use Google Maps to look up local businesses. The popularity of digital mapping applications has turned maps that simply get us where we need to be into a thing of the past. Consumers are using maps as a single source for discovering, researching and comparing new businesses. Features such as nearby “Place Labels”, “Areas of Interest” and the ability to compare ratings and reviews for businesses directly within these apps have turned digital mapping applications into a one-stop shop for choosing local businesses.
Consumers face so many options these days that they are overloaded with choices. They can rely on these tools to quickly narrow down their choices and give them only the most trusted and reliable options, allowing them to make quick and confident decisions about where to spend their money. This especially holds true for someone from out of town or who is otherwise unfamiliar with the area. Tools like Google and Apple maps can be a safe haven for finding where to grab a bite to eat, the best places to go shopping, or where to order a quick cup of coffee.
“Areas of Interest”: How are map updates changing the way consumers make decisions, and can your business’s reputation withstand this new trend?
In mid-2016, Google (and, it seems, Apple as well) added a few new updates to their mapping applications, one of which is now helping consumers decide “what to do” and “where to go”. In addition to being able to run specific searches for businesses nearby, Google’s new update allows users to explore the map and find nearby “Areas of Interest” (highlighted in orange), or places where there are a lot of activities and things to do. In general, these areas will include downtown and tourist areas with a lot of foot traffic.
[Screenshots: an “Area of Interest” in Google Maps (San Diego) and in Apple Maps (New York)]
These new “Areas of Interest” are a great way to help consumers explore the area and find local businesses nearby. Google wrote:
“Whether you’re looking for a hotel in a hot spot or just trying to determine which way to go after exiting the subway in a new place, 'areas of interest' will help you find what you’re looking for with just a couple swipes and a zoom.”
This new feature is another tool consumers have at their disposal for sorting through the masses of information and choices available to them. This update is especially valuable for travelers who don’t know the city; it allows them to quickly swipe through their trusted maps and find the most popular bars, restaurants, shops, and things to do in the area. Customers and tourists can spend less time thinking about exactly what they want to do or where exactly they want to go, and instead let their web mapping applications give them the most popular and trusted options around.
As more users become aware of this feature and begin to incorporate it into their daily lives, it will raise a serious question: “Why does one business deserve to be an ‘Area of Interest’, while another, just a block or two away, does not?”
Google has stated it will use an algorithmic process to determine the areas with the highest concentration of restaurants, bars and shops. In high-density areas like NYC or San Francisco, it uses a combination of algorithms and a human touch to ensure it is highlighting the most active areas.
91% of consumers regularly or occasionally read online reviews to determine whether a local business is good or bad.

Online visibility will immediately increase for those businesses highlighted as an “Area of Interest” which could either be a blessing or a curse. Either way, having a proper reputation management strategy will be crucial for winning over customers once they have found your business listing. For businesses with a great online reputation, the fact that “Areas of Interest” provide even easier access to your online profile and reviews will help your business thrive. In contrast, a business listing littered with poor ratings and reviews will be more easily dismissed by potential customers.
Not an “Area of Interest”, but still want to take advantage of popular web mapping services? Use Local SEO and Reputation Management to earn a “Place Label”!
Google’s and Apple’s “Place Labels” work very similarly to “Areas of Interest”. These are places Google and Apple recommend based on popularity, reliability and previous customer opinions. They allow consumers to open their mapping application and quickly swipe around to find nearby places that stand out.
These icons highlight and draw attention to some of the most reputable and trustworthy businesses in the area, and sit atop other great content such as landmarks and tourist attractions. All of the icons are clickable, giving users instant access to information such as business hours and address, ratings and reviews, and customer images.

[Screenshots: Place Labels in Google Maps and in Apple Maps]
The “Place Labels” are determined algorithmically through a large number of factors. However, Google has said that one of the major factors used to determine “Place Labels” is the accuracy of the business information and the richness of the content associated with the business. This means in order to earn a “Place Label”, businesses must have some of the most accurate and consistent listings all across the web (specifically business name, address and phone number). In addition to accurate and consistent listing information, to earn a place label, businesses need rich and engaging content tied to their listings; this includes a high quantity and quality of reviews (with responses), photos and videos. Put it all together and earning a place label can come down to having a great Local SEO and Reputation Management strategy.

As consumers become more and more comfortable with local search and the power of web mapping services, businesses will compete for new customers based more on what mapping applications decide to make available to users. Businesses with great online reputations and accurate listings will win more customers by encouraging mapping applications to recommend their business to users. Schedule a FREE brand audit with Chatmeter to ensure your listing accuracy and learn more about the most effective and efficient Reputation Management software available.