Tag Archives: X-Robots-Tag

Bing – features and SEO recommendations, one month on

At the end of May, Microsoft announced its new search engine, Bing. Citing studies by Jakob Nielsen, Enquiro and its own internal testing, Microsoft justified many of Bing’s new features by noting that 50% of search queries are either abandoned or refined – users aren’t getting the right answer on the first try. Microsoft also said that searchers are becoming more focused on tasks and decisions; consequently, search engine sessions are getting longer as users work their way through their decision-making process.

As data from Bing’s first full month becomes available, I thought it would be interesting to take a quick look at what the Bing rollout means for search marketers and, in a separate article, current search engine market shares.


7 sources of link intelligence data and key link analysis considerations

It may seem like a cliché, but on the web no website is an island. Any site worth its salt will have accumulated inbound links and will almost certainly contain outbound links to other resources on the web. Indeed, one could easily say that without links interconnecting websites, there wouldn’t be a World Wide Web.

For search engines such as Google, incoming links provide a strong signal of a website’s authority. If multiple websites link to a particular site on a given topic, there is a good chance the cited site is relevant to that topic for good reason. Google and other search engines identify the theme of a page by analyzing both the page’s content and the text of its incoming links – the underlined text you click on to arrive at the page. Links, especially inbound links, are thus among the most significant of the more than 200 factors Google considers in its ranking algorithms. Inbound links from related sites in a business’ sector are also an excellent source of highly qualified direct traffic.
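To make the anchor-text idea concrete, here is a minimal Python sketch – hypothetical markup and my own helper class, not anything Google actually runs – that pulls link targets and their anchor text out of a page using the standard library’s HTMLParser. Link analysis tools gather exactly this kind of raw material, just at a vastly larger scale.

```python
from html.parser import HTMLParser


class AnchorTextParser(HTMLParser):
    """Collect (href, anchor text) pairs from <a> elements."""

    def __init__(self):
        super().__init__()
        self._current_href = None
        self._current_text = []
        self.links = []  # list of (href, anchor_text) tuples

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._current_href = dict(attrs).get("href")
            self._current_text = []

    def handle_data(self, data):
        if self._current_href is not None:
            self._current_text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._current_href is not None:
            anchor_text = "".join(self._current_text).strip()
            self.links.append((self._current_href, anchor_text))
            self._current_href = None


# Hypothetical snippet of a page linking out to two resources.
sample_html = """
<p>See <a href="https://example.com/seo-guide">this guide to SEO basics</a>
and <a href="https://example.org/tools">a list of link analysis tools</a>.</p>
"""

parser = AnchorTextParser()
parser.feed(sample_html)
for href, text in parser.links:
    print(href, "->", text)
```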


Simon Says… or is it Google Says?

The rel=”canonical” link duplicate content panacea

As many readers probably know, Google and other search engines recently announced support for a rel="canonical" link attribute value. The new attribute value, canonical (not a tag, mind you – link is the HTML tag), lets website developers specify which of several essentially similar web pages is the definitive version.

An SEO problem known as duplicate content arises when websites use different URLs, generally through parameters, to serve slightly different versions of a page – a printer-friendly version, say – or to support web analytics campaign tracking. To give search users unique choices, search engines pick what they consider the “best” URL for a page and filter out the similar versions.
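For those who have not yet seen it in markup, the canonical hint is simply a <link> element in the page’s <head>. The sketch below is a small illustration using Python’s standard library: the sample printer-friendly page and its URLs are hypothetical, and real crawlers obviously do far more, but it shows both the element itself and how a tool might read it back.

```python
from html.parser import HTMLParser


class CanonicalLinkParser(HTMLParser):
    """Find the href of a <link rel="canonical"> element, if present."""

    def __init__(self):
        super().__init__()
        self.canonical = None

    def handle_starttag(self, tag, attrs):
        if tag == "link":
            attrs = dict(attrs)
            if (attrs.get("rel") or "").lower() == "canonical":
                self.canonical = attrs.get("href")


# Hypothetical printer-friendly duplicate pointing at its definitive URL.
duplicate_page = """
<html>
  <head>
    <title>Widget specs (printer friendly)</title>
    <link rel="canonical" href="https://www.example.com/widgets/specs" />
  </head>
  <body>...</body>
</html>
"""

parser = CanonicalLinkParser()
parser.feed(duplicate_page)
print(parser.canonical)  # https://www.example.com/widgets/specs
```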


Now there are 6 ways to keep website content out of search engines

Several months ago a client inspired me to write a comprehensive guide to keeping website content out of search engines. Usually website owners focus on the opposite side of search engine optimization: ensuring web content is well indexed. Yet, as many can attest, search engines can be all too efficient at finding documents they shouldn’t. Hence the need to understand which options exist, how they work and which search engines support them.

One problem with the techniques available until now is that the options for digital media have been limited. The official way to keep video, audio and PDF files out of search engines was the robots.txt protocol, which is not a very efficient tool for setting indexing options at the file level.
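The X-Robots-Tag HTTP header – the tag this archive is filed under – addresses exactly that gap: because the directive travels in the response headers rather than in the document body, it can be applied to PDFs, video and other non-HTML files. As a rough illustration only, here is a sketch using Python’s built-in http.server that attaches the header to PDF responses; in practice you would set it in your web server’s configuration rather than in application code.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer


class NoIndexPDFHandler(BaseHTTPRequestHandler):
    """Serve responses, adding an X-Robots-Tag header to PDF requests."""

    def do_GET(self):
        if self.path.endswith(".pdf"):
            self.send_response(200)
            self.send_header("Content-Type", "application/pdf")
            # Tell crawlers that support the header not to index or cache this file.
            self.send_header("X-Robots-Tag", "noindex, noarchive")
            self.end_headers()
            self.wfile.write(b"%PDF-1.4 ...")  # placeholder body
        else:
            self.send_response(404)
            self.end_headers()


if __name__ == "__main__":
    # Hypothetical local test server; not production configuration.
    HTTPServer(("localhost", 8000), NoIndexPDFHandler).serve_forever()
```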


6 methods to control what and how your content appears in search engines

While it may seem paradoxical, there are many occasions when you may want to exclude a website, or a portion of a site, from search engine crawling and indexing. One typical need is to keep duplicate content, such as printer-friendly versions, out of a search engine’s index. The same is true for pages available both in HTML and in PDF or word processor formats. Other examples include site “service pages” such as user-friendly error messages and activity confirmation pages. Special considerations apply to ad campaign landing pages.

There are several ways to prevent Google, Yahoo!, Bing or Ask from indexing a site’s pages. In this article, we look at the different search engine blocking methods, considering each method’s pros and cons.
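As a quick practical aside: since robots.txt is the oldest and broadest of these methods, it is worth verifying what your file actually blocks. The short Python sketch below uses the standard library’s urllib.robotparser to ask whether a given crawler may fetch a given URL; the domain, path and user-agent names are placeholders.

```python
from urllib import robotparser

# Hypothetical site; swap in your own domain to test a live robots.txt.
rp = robotparser.RobotFileParser()
rp.set_url("https://www.example.com/robots.txt")
rp.read()  # fetch and parse the file

for user_agent in ("Googlebot", "Bingbot"):
    allowed = rp.can_fetch(user_agent, "https://www.example.com/print/page.html")
    print(user_agent, "may fetch the printer-friendly page:", allowed)
```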

