6 methods to control what and how your content appears in search engines


While it may seem paradoxical, there are many occasions where you may want to exclude a website, or a portion of a site, from search engine crawling and indexing. One typical need is to keep duplicate content, such as printer-friendly versions of pages, out of a search engine’s index. The same is true for pages available both in HTML and in PDF or word processor formats. Other examples include site “service pages” such as user-friendly error messages and activity confirmation pages. Special considerations apply to ad campaign landing pages.

There are several ways to prevent Google, Yahoo!, Bing or Ask from indexing a site’s pages. In this article, we look at the different search engine blocking methods, considering each method’s pros and cons.

Just need to review REP directive support? Jump to the summary tables below.

1. Use a robots exclusion file

Way back in 1994, members of a robots discussion list voluntarily agreed on a method to tell well-behaved web robots, such as search engine spiders and crawlers, that certain site content is off-limits.

The robots exclusion standard, as articulated in the robots.txt protocol, says that spiders should look for a plain text file called robots.txt in a site’s top (root) directory. To exclude all robots from crawling directories called sales and images, the following syntax is used:

User-agent: *
Disallow: /sales/
Disallow: /images/

A common error is to forget the trailing slash – we even spotted this error in a recent Google blog post.

User-agent: googlebot
Disallow: /sales

will stop any URL whose path begins with /sales, such as /sales-report.html, from being crawled – not usually what you want. In this case, we have limited the exclusion to googlebot. See our article on search engine spiders for a list of the spider bots associated with each of the major search engines.
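
The difference is easy to verify with Python’s standard-library robots.txt parser (a quick sketch; the file paths are made up for illustration):

```python
from urllib.robotparser import RobotFileParser

# Parse the rule without a trailing slash, as in the example above.
rp = RobotFileParser()
rp.parse([
    "User-agent: googlebot",
    "Disallow: /sales",
])

# Any path that merely *begins* with /sales is blocked,
# not just the contents of the /sales/ directory.
print(rp.can_fetch("googlebot", "/sales-report.html"))  # False
print(rp.can_fetch("googlebot", "/sales/index.html"))   # False
print(rp.can_fetch("googlebot", "/images/logo.png"))    # True
```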

Tip. We recommend using at least a default robots.txt file to avoid logging a “404 file not found” error in your server web logs every time a well-behaved bot looks for a nonexistent robots.txt file. The default file would contain the following lines:

User-Agent: * 
Allow: / 

Note that allow is the default; the only reason to set up such a file is to avoid triggering file not found errors.

Pattern matching

Some search engines support extensions to the original robots.txt specification which allow for URL pattern matching.

  • * – matches a sequence of characters. Supported by Google, Yahoo!, Bing. Example:

User-Agent: *
Disallow: /print*/

  • $ – matches the end of a URL. Supported by Google, Yahoo!, Bing. Example:

User-Agent: *
Disallow: /*.pdf$

References: Google, Yahoo!, Bing (Sad note: Microsoft Bing’s help system is awful – they don’t allow direct linking to a topic section). At the time of this writing, Ask does not officially support these extensions.
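
Python’s standard robots.txt parser predates these extensions, so as an illustration, here is a small sketch of how the * and $ patterns can be translated into regular expressions (the function name and example paths are ours):

```python
import re

def rule_matches(rule: str, path: str) -> bool:
    """Check a URL path against a robots.txt rule using the
    * (any characters) and $ (end of URL) pattern extensions."""
    pattern = re.escape(rule).replace(r"\*", ".*")
    if pattern.endswith(r"\$"):
        pattern = pattern[:-2] + "$"  # anchor the match at the URL's end
    return re.match(pattern, path) is not None

print(rule_matches("/print*/", "/print-friendly/index.html"))  # True
print(rule_matches("/*.pdf$", "/docs/report.pdf"))             # True
print(rule_matches("/*.pdf$", "/docs/report.pdf?page=2"))      # False
```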

Site directory organization considerations

When designing a new site, or revising an existing site, we recommend organizing content to be excluded from search engines in dedicated directories; otherwise a robots.txt file becomes unwieldy. At the time of this writing, the whitehouse.gov robots.txt file contains almost 2000 lines.

Using AdWords? Special considerations for ad campaign landing pages

Many sites create dedicated web pages as starting pages for traffic from specific on-line or off-line advertising campaigns. These pages, called landing pages, provide a means to offer a targeted message to visitors who have responded to a specific promotion. Landing pages also allow marketers to measure the response rate to a particular campaign.

Tip. When measuring traffic to a landing page, you should measure unique visitors by excluding internal site referrals, i.e. users who return to the page by using the back button.

Generally, a site would want to block robots from indexing landing pages – the pages should only be accessible to visitors who follow a link from a promotion.

User-agent: *
Disallow: /promo07/

There are however cases where this is not a good idea. Some search engine advertising programs, such as Google’s AdWords, consider the quality of your landing page in their ad placement algorithm. By default, Google’s AdWords quality checking robot, AdsBot-Google, WILL crawl pages unless specifically excluded by name in your robots.txt file:

User-agent: AdsBot-Google
Disallow: /promo07/

In most cases, you do not want to block quality scoring robots such as AdsBot-Google.

Pro

  • A single, central file controls crawler access for an entire site.
  • Excluded pages are not repeatedly spidered, unlike pages excluded with a noindex meta tag.

Con

  • robots.txt is ignored by misbehaved bots.
  • Anyone can read your robots.txt file – indeed, there is even a robots.txt blog. Thus, this is not the place to list “secret” directories and files.

Search Engine robots.txt References

Summary of Search Engine REP robots.txt support

The following table was compiled from search engine help files and blog posts. Unfortunately, the information supplied is not always complete, although Google and Bing have improved significantly over time.

Directives:

  • Allow – allow crawling of a particular path.
  • Disallow – disallow crawling of a particular path.
  • Crawl-delay – controls the time between successive requests to a site. Generally in seconds.
  • Pattern match * – * is used to represent multiple characters.
  • Pattern match $ – $ is used to terminate the match string.
  • Sitemap – used to specify a sitemap location. Not a good idea – you’re telling the entire world where a list of your files is located. A better approach is to ping each search engine when this file changes.
  • searchpreview – don’t allow the search engine to present a preview image of the site in search results. Was used by Windows Live Search to disable a page preview thumbnail.
  • Clean-param – specify one or more parameters to be removed from a path URL.
  • Host – used to identify mirror sites which should be excluded from indexing.

Known robots (some engines use separate robots to crawl images, feeds and other media types; Google uses specific bots for its AdWords and AdSense advertising programs; some bots are region specific, such as Yahoo! Slurp China):

  • Google: adsbot-google, feedburner, feedfetcher-google, google wireless transcoder, google-site-verification, google-sitemaps, googlebot, googlebot-image, googlebot-mobile, googlebot-news, gsa-crawler, mediapartners-google
  • Bing: bingbot, bingbot-media, msnbot, msnbot-academic, msnbot-media, msnbot-newsblogs, msnbot-products, msnbot-udiscovery
  • Yahoo!: slurp
  • Teoma (Ask): teoma
  • Blekko: scoutjet
  • Naver: naverbot, yeti
  • Yandex: Yandex
  • Rambler: StackRambler
  • Baidu: baiduspider, baiduspider-cpro, baiduspider-favo, baiduspider-image, baiduspider-mobile, baiduspider-news, baiduspider-video
  • Sogou: Sogou web spider

Geography:

  • Google: International
  • Bing: International
  • Yahoo!: International, except US/Canada
  • Teoma (Ask): US, UK, Germany, France, Italy, Japan, Netherlands, Spain
  • Blekko: US
  • Naver: Korea
  • Yandex: Russia
  • Rambler: Russia
  • Baidu: China
  • Sogou: China

2. Use “noindex” page meta tags

Pages can be tagged using “meta data” to indicate they should not be indexed by search engines. Simply add the following code to any page you do not want a search engine to index:

<meta name="robots" content="noindex" />

Keep in mind that search engines will still spider these pages on a regular basis. They continue to crawl “noindex” pages in order to check the current status of a page’s robots meta tag.

Tip. There is no need to use an index tag; index is the default option. Using a default tag just adds bloat to your web pages. The only time you might use one is to override a global setting:

<meta name="robots" content="noindex" />
<meta name="googlebot" content="index" />
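
To audit which directives a page carries, the robots meta tags are easy to extract with Python’s standard html.parser (a quick sketch; the class name is ours):

```python
from html.parser import HTMLParser

class RobotsMetaParser(HTMLParser):
    """Collect the directives found in robots / googlebot meta tags."""
    def __init__(self):
        super().__init__()
        self.directives = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "meta" and a.get("name", "").lower() in ("robots", "googlebot"):
            content = a.get("content", "") or ""
            # A tag may hold several comma-separated directives.
            self.directives += [d.strip().lower() for d in content.split(",")]

p = RobotsMetaParser()
p.feed('<head><meta name="robots" content="noindex" /></head>')
print(p.directives)  # ['noindex']
```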

Pro

  • Allows page level granularity of robots commands.

Con

  • The use of a noindex meta tag is only possible with html pages (which includes dynamic pages such as php, jsp, asp). It is not possible to exclude other file types such as PDF, DOC, ODT which don’t support html meta tags.
  • Pages will still be spidered by search engines to check the current robots meta tag settings. This additional traffic is avoided when using robots.txt file settings.

3. Password protect sensitive content

Sensitive content is usually protected by requiring visitors to enter a username and password. Such secure content won’t be crawled by search engines. Passwords can be set at the web server level or at the application level. For server level logon setup, consult the Apache Authentication Documentation or the Microsoft IIS documentation.

Pro

  • An effective way to keep search engines, other robots, and the general public away from content destined for a limited audience.

Con

  • Visitors will only make an effort to access protected website areas if they have a strong motivation to view that content.

4. Nofollow: tell search engines not to spider some or all links on a page

As a response to blog comment “spam”, search engines introduced a way for websites to tell a search engine spider to ignore one or more links on a page. In theory, the search engine won’t “follow”, or crawl, a link which has been “protected”. To keep all links on a page off-limits, use a nofollow meta tag:

<meta name="robots" content="nofollow" />

To specify nofollow at the link level, add the attribute rel with the value nofollow to the link:

<a href="mypage.html" rel="nofollow">my page</a>

Con

  • Our tests show that some search engines do crawl and index nofollow links. The nofollow tag will probably diminish the ranking value a link will provide but it cannot be reliably used to stop search engines from following a link.

5. Don’t link to pages you want to keep out of search engines

Search engines won’t index content unless they know about it. Thus, if no one links to a page or submits it to a search engine, the search engine won’t find it. At least, this is the theory. In reality, the web is so large that one can assume a search engine will find a page sooner or later – someone will link to it.

Con

  • Anyone can link to your pages at any time.
  • Some search engines can monitor pages visitors view through installed toolbars. They may use this information in the future as a means to discover and index new content.

6. Use X-Robots-Tag in your headers

In solution 1 above, we noted that use of robots.txt explicitly exposes some of your site’s structure, something you may want to avoid. Unfortunately, solution 2, use of meta tags, only works for html documents – there’s no way to specify indexing instructions for PDF, ODT, DOC and other non-html files.

In July 2007, Google officially introduced a solution to this problem: the ability to deliver indexing instructions in the http header information which is sent by the web server along with an object. Yahoo! joined Google by supporting this tag in December 2007. Microsoft first mentioned x-robots-tag in a June 2008 blog post, although their webmaster documentation does not appear to have been updated. They do make one mention of X-Robots-Tag in their Bing guide for webmasters.

The web server simply needs to add X-Robots-Tag and any of the Google or Yahoo! supported meta tag values to the http header for an object:

X-Robots-Tag: noindex
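
Any server-side environment can add the header. As a minimal illustration, here is a WSGI application sketch (the response body is a placeholder) which marks a PDF response as noindex:

```python
def app(environ, start_response):
    # Attach X-Robots-Tag so search engines will not index this object,
    # even though a PDF cannot carry an html meta tag.
    headers = [
        ("Content-Type", "application/pdf"),
        ("X-Robots-Tag", "noindex"),
    ]
    start_response("200 OK", headers)
    return [b"%PDF-1.4 placeholder"]
```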

Pro

  • An elegant way to specify search engine crawling instructions for non-html files without having to use robots.txt.
  • Easy to configure using the Apache Header append syntax.
  • X-Robots-Tag is officially supported by Google, Yahoo and Bing. Ask has not yet mentioned it.

Con

  • Most webmasters are probably not comfortable setting http headers.
  • Microsoft IIS support for adding http headers has traditionally been very limited.

Added 2007-07-27. Updated 2009-06-17.

Partially Stop Page Content from appearing in Search Engines

There are times where only a section of a page should be kept out of a search engine. Yahoo! supports a class="robots-nocontent" html tag attribute for this purpose. See our discussion of class="robots-nocontent" for more details.

Removing pages which have already been indexed.

The best approach is to use one of the above methods. Over time search engines will update their indexes with regular crawling. If you want to remove content immediately, Google offers a tool specifically for this purpose. Pages will be removed for at least six months. This process is not without risk: improperly specify your URL and you may find your entire site removed. Bing has a request form you can use. Yahoo! will consider removal requests for copyright infringement and violation of their search quality guidelines. They don’t currently provide a way to expedite removal of a site owner’s pages.

Removing your content which appears on third party sites

There are occasions when you may find a site ranking in a search engine with content they have “repurposed” without permission from your site. In this case, copyright infringement procedures apply. The best approach to this problem is to directly ask the offending party to remove the copyrighted material. Should this path not prove effective, you should notify each search engine of the copyright infringement. Most of the US based search engines model their copyright violation procedures on the requirements set forth in the American “Digital Millennium Copyright Act” (pdf).

Copyright Infringement procedure

Each search engine provides for notification of copyright violations: a procedure to follow in the event the copyright violator proves non-responsive.

Automated Content Access Protocol

Several commercial publishing associations have united behind a project to allow for the specification of more granular restrictions on content use by search engines. The project, Automated Content Access Protocol, appears to be driven as much by a desire to share in the profits that search engines accrue when presenting abstracts of a publisher’s content as by a response to limitations in the current robots.txt and meta tag solutions.

At the time of this writing (February 2007), no search engines have yet announced support for this project.

Additional Search Engine Content Display Control

Several search engines also support ways for webmasters to further control the use of their content by search engines.

No archive

Most search engines allow a user to view a copy of the web page as it was actually indexed by the search engine. This snapshot of a page in time is called the cache copy. Internet visitors can find this functionality really useful if the linked page is no longer available or the site is down.

There are several reasons to consider disabling the cache view feature for a page or an entire website.

  • Web site owners may not want visitors viewing data, such as price lists, which are not necessarily up to date.
  • Web pages viewed in a search engine cache may not display properly if embedded images are unavailable and/or browser code such as CSS and JS does not properly execute.
  • Cached page views will not show up in web log based web analytics systems. Reporting in tag-based solutions may be incorrect as well, since the cached view is served from a third party domain, not yours.

If you want a search engine to index your page without allowing a user to view a cached copy, use the noarchive attribute which is officially supported by Google, Yahoo!, Bing and Ask:

<meta name="robots" content="noarchive" />

Microsoft documents the nocache attribute, which is equivalent to noarchive; since Microsoft also supports noarchive, there is no reason to use nocache.

No abstract option: nosnippet

Google offers an option to suppress the generation of page abstracts, called snippets, in the search results. Use the following meta tag in your pages:

<meta name="googlebot" content="nosnippet" />

They note that this also sets the noarchive option. We would suggest you set it explicitly if that is what you want. In a 2008 article, Bing said it supports nosnippet too. In November 2010, Google introduced website previews, similar to Ask’s Binoculars and Bing’s site preview. Google says the nosnippet tag will also suppress instant previews.

Page title option: noodp

Search engines generally use a page’s html title when creating a search result title, the link a user clicks on to arrive at a website. In some cases, search engines may use an alternative title taken from a directory such as dmoz, the open directory, or the Yahoo! directory. Historically, many sites have had poor titles – i.e. just the company name, or worse, “default page title”. Use of a human-edited title from a well known directory was often a good solution. As webmasters improve the usability of their sites, page titles have become much more meaningful – and often better choices than the open directory title. The noodp meta tag, supported by Microsoft, Google and Yahoo!, allows a webmaster to indicate that a page’s own title should be used rather than the dmoz title.

<meta name="robots" content="noodp" />

Similarly, Yahoo! offers a “noydir” option to keep Yahoo! from using Yahoo! Directory titles in search results for a site’s pages:

<meta name="slurp" content="noydir">

Bing Site Preview

Microsoft’s Bing offers a thumbnail preview of most search results, which Bing calls Document Preview. This isn’t new: Live Search offered a preview of the first six search results in some geographies, and Ask.com offers a similar feature called Binoculars. Bing’s preview can be disabled by specifying nopreview as a meta robots value for a page. Microsoft also notes support for x-robots-tag: nopreview in http headers, the first time I’ve noted Microsoft mentioning support for the x-robots-tag.

Microsoft previously supported different methods to disable thumbnail previews: the searchpreview robot in the robots.txt file,

User-agent: searchpreview
Disallow: /

or by using a meta tag containing “noimageindex,nomediaindex”:

<meta name="robots" content="noimageindex,nomediaindex" />

This meta tag was used by AltaVista at one point; it is not known to be used by any of the other major search engines.

Expires After with unavailable_after

One problem with search engines is the delay between when content is removed from a website and when that content actually disappears from search engine results. Typical time-dependent content includes event information and marketing campaigns.

Pages removed from a website which still appear in search engine results generally result in a frustrating user experience – the Internet user clicks through to the website only to find themselves landing on a “Page not found” error page.

In July 2007, Google introduced the “unavailable_after” tag, which allows a website to specify in advance when a page should be removed from search engine results, i.e. when it will expire. This tag can be specified as an html meta tag attribute value:

<meta name="robots" content="unavailable_after: 21-Jul-2037 14:30:00 CET" />

or in an X-robots http header:

X-Robots-Tag: unavailable_after: 7 Jul 2037 16:30:00 GMT

Google says the date format should be one of those specified by the ambiguous and obsolete RFC 850. We hope Google clarifies what date formats their parser can read by referring to a current date standard, such as IETF Internet standard RFC 3339. We’d also like to see detailed page crawl information in Google’s Webmaster Tools. Not only could Google show when a page was last crawled, they could add expiration information, confirming proper use of the unavailable_after tag. At one point, Google did show an approximation of the number of pages crawled relative to the number specified in a sitemap, but that feature was removed. This is one case where Google should follow Yahoo’s example.
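Given that ambiguity, it is worth checking in advance that a date value will at least parse consistently. A quick Python sketch validates the date portion of a tag value with strptime (we simply drop the trailing time zone name rather than attempt to map RFC 850 zone abbreviations):

```python
from datetime import datetime

# Hypothetical tag value, following Google's meta tag example above.
value = "unavailable_after: 21-Jul-2037 14:30:00 CET"

date_text = value.split(":", 1)[1].strip()   # "21-Jul-2037 14:30:00 CET"
date_text = date_text.rsplit(" ", 1)[0]      # drop the zone name
expires = datetime.strptime(date_text, "%d-%b-%Y %H:%M:%S")
print(expires.year)  # 2037
```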

Pro

  • A nice way to ensure search engine results are synchronized with current website content.

Con

  • The old RFC 850 date specification is ambiguous, thus subject to error.
  • unavailable_after support is currently limited to Google. We do hope the other major search engines embrace this approach as well.

Added 2007-07-27.

Crawl Delay

While not directly related to content control, I was asked in a class about regulating crawling speed, so here’s the formal answer. Both Yahoo! and Microsoft’s Bing support the robots exclusion protocol value crawl-delay. Yahoo! cites a delay value in the form x.x, where 5 or 10 is “high”. While Yahoo! doesn’t specify the delay units, Microsoft uses seconds.


User-agent: Slurp
Crawl-delay: 0.5

User-agent: msnbot
Crawl-delay: 4

Google does not support Crawl-delay, and imposter bots will ignore it. For Google, the crawl rate can be changed in Google’s Webmaster Tools for a site. Now that you know you can set a crawl delay, you probably shouldn’t. Search engine crawlers need to access your site’s contents to find any changes – new pages, deleted pages, changed pages. It is in your interest that they do this frequently. Except in rare occurrences, the major search engines won’t be hammering your site. Imposters might, but they won’t respect the robots.txt content anyway.
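
Python’s standard robots.txt parser also exposes crawl-delay values, which a polite crawler can use to pause between requests (a quick sketch, using Microsoft’s example above):

```python
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.parse([
    "User-agent: msnbot",
    "Crawl-delay: 4",
])

# The delay for a given user agent; a polite crawler would call
# time.sleep(delay) between successive requests.
delay = rp.crawl_delay("msnbot")
print(delay)  # 4
```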

Meta Tag Summary

The following table summarizes the page level meta tags which can be used to specify how a search engine crawls a page. Positive tags, such as follow, are not listed as they are the default. Tags are case insensitive and can usually be combined.

Meta tags:

  • noindex – don’t index a page (implies noarchive / nocache).
  • nofollow – don’t follow, i.e. crawl, the links on the page.
  • noarchive – don’t present a cached copy of the indexed page.
  • nocache – same as noarchive.
  • none – equivalent to noindex, nofollow.
  • nosnippet – don’t display an abstract for this page. For Google, also implies noarchive and, as of November 2010, disables site instant preview.
  • noodp – don’t use an Open Directory title for the page.
  • noydir – don’t use a Yahoo! Directory title for the page.
  • nopreview – don’t display a site preview in search results.
  • noimageindex – don’t crawl images specified in this page. Was used by Windows Live Search to disable a page preview thumbnail.
  • nomediaindex – don’t crawl other media specified in the page. Was used by Windows Live Search to disable a page preview thumbnail.
  • unavailable_after: <date in one of the RFC 850 formats> – don’t offer the page in search results after this date and time. In reality, Google says: “This information is treated as a removal request: it will take about a day after the removal date passes for the page to disappear from the search results. We currently only support unavailable_after for Google web search results.”
  • syndication-source – indicates the URL which is the definitive source of a syndicated article. Only used by Google News.
  • original-source – indicates the URL which first reported on a news story. Only used by Google News; not yet active (11/2010).
  • notranslate – don’t allow automatic translation of a page. Notranslate was introduced by Google with apparently little thought: the syntax takes the name “google” instead of “robots”, e.g. <meta name="google" content="notranslate" />, or the googlebot name.
  • msvalidate.01 – verify ownership for Bing Webmaster Tools.
  • google-site-verification (was verify-v1) – verify ownership for Google Webmaster Tools.

Known robots (some engines use separate robots to crawl images, feeds and other media types; Google uses specific bots for its AdWords and AdSense advertising programs; some bots are region specific, such as Yahoo! Slurp China; not all meta tags are applicable to all of the specialized crawlers):

  • Google: adsbot-google, feedburner, feedfetcher-google, google wireless transcoder, google-site-verification, google-sitemaps, googlebot, googlebot-image, googlebot-mobile, googlebot-news, gsa-crawler, mediapartners-google
  • Bing: bingbot, bingbot-media, msnbot, msnbot-academic, msnbot-media, msnbot-newsblogs, msnbot-products, msnbot-udiscovery
  • Yahoo!: slurp
  • Teoma (Ask): teoma
  • Blekko: scoutjet
  • Naver: naverbot, yeti
  • Yandex: Yandex
  • Rambler: StackRambler
  • Baidu: baiduspider, baiduspider-cpro, baiduspider-favo, baiduspider-image, baiduspider-mobile, baiduspider-news, baiduspider-video
  • Sogou: Sogou web spider

Geography:

  • Google: International
  • Bing: International
  • Yahoo!: International, except US/Canada
  • Teoma (Ask): US, UK, Germany, France, Italy, Japan, Netherlands, Spain
  • Blekko: US
  • Naver: Korea
  • Yandex: Russia
  • Rambler: Russia
  • Baidu: China
  • Sogou: China

Source: search engine help files and blog posts. Last update: November 2010.



About Sean Carlos

Sean Carlos is a digital marketing consultant & teacher, assisting companies with their Search (SEO + SEA = SEM), Social Media & Digital Media Measurement strategies. Sean first worked with text indexing in 1990 in a project for the Los Angeles County Museum of Art. Since then he worked for Hewlett-Packard Consulting and later as IT Manager of a real estate website before founding Antezeta in 2006. Sean is an official instructor of the Digital Analytics Association and collaborates with the Bocconi University. He is Chairman of the SMX Search and Social Media Conference, 13 & 14 November in Milan. He is also a co-author of the Treccani encyclopedic dictionary of computer science, ICT & digital media. Born in Providence, RI, USA, Sean received Honors in Physics from Bates College, Maine. He speaks English, Italian and German.
