Tag Archives: nofollow

7 sources of link intelligence data and key link analysis considerations

It may seem like a cliché, but on the web no website is an island. Any site worth its salt will have accumulated inbound links and will almost certainly contain outbound links to other resources on the web. Indeed, one could argue that without links to interconnect websites, there would be no World Wide Web at all.

For search engines such as Google, incoming links provide a strong signal of a website's authority. If multiple websites link to a specific site for a given topic, there is a good chance the site cited by others is highly relevant for a good reason. Google and other search engines identify the theme of a page by analyzing both the page's content and the anchor text of its incoming links – the underlined text you click on to arrive at a page. Links, especially inbound links, are thus among the most significant of the more than 200 factors Google considers in its ranking algorithms. Inbound links from related sites in a business' sector are also an excellent source of highly qualified direct traffic.

10 Comments

Keep sections of web pages out of Yahoo! with class=”robots-nocontent”

There are occasions when some content on a web page just shouldn't appear in search engines. The most frequent example is repeated header and footer details, such as site copyright information. This site uses the hCard format to provide contact information that visitors can save as a vCard for use with a PIM such as Thunderbird or Outlook. Yet some of the information required for a detailed vCard is not really appropriate for a search engine's index. Historically, the best workaround was to insert such content into the page with JavaScript, since search engines have generally avoided indexing JavaScript-generated content (though they likely do analyze the scripts themselves). The JavaScript approach isn't perfect, however: visitors with JavaScript disabled won't see the content at all.

The same community behind the hCard format proposed placing robots instructions in the HTML class attribute, giving search engine crawlers detailed handling instructions for tagged sections of page content.
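As a sketch of how this works in practice, the markup below tags a footer block with Yahoo!'s `robots-nocontent` class value. The page structure and copyright text are hypothetical; the class value itself is the one Yahoo! documented for its Slurp crawler.

```html
<!-- Main content: indexed normally -->
<div>
  <h1>Contact us</h1>
  <p>How to reach our team...</p>
</div>

<!-- Yahoo!'s Slurp crawler skips content marked with this class value
     when indexing and when building result snippets; other search
     engines simply ignore the unrecognized class. -->
<div class="robots-nocontent">
  <p>Copyright 2008 Example Corp. All rights reserved.</p>
</div>
```

Because `class` accepts multiple space-separated values, `robots-nocontent` can be added alongside a site's existing presentation classes without disturbing its CSS.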

Leave a comment

6 methods to control what and how your content appears in search engines

While it may seem paradoxical, there are many occasions when you may want to exclude a website, or a portion of one, from search engine crawling and indexing. One typical need is to keep duplicate content, such as printer-friendly versions, out of a search engine's index. The same is true for pages available both in HTML and in PDF or word processor formats. Other examples include site "service pages" such as user-friendly error-message and activity-confirmation pages. Special considerations apply for ad campaign landing pages.

There are several ways to prevent Google, Yahoo!, Bing or Ask from indexing a site’s pages. In this article, we look at the different search engine blocking methods, considering each method’s pros and cons.
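Two of the most common blocking methods can be illustrated briefly. The first is a robots meta tag placed in a page's head, which all the major engines honor; the second is a robots.txt rule, shown here as a comment, which blocks crawling of a whole directory. The `/print/` path is purely illustrative.

```html
<!-- In the page <head>: keep this page out of the index,
     but still allow the crawler to follow its links -->
<meta name="robots" content="noindex, follow">

<!-- Alternatively, in robots.txt at the site root, block crawling
     of an entire directory (example path):
       User-agent: *
       Disallow: /print/
-->
```

Note the difference in scope: robots.txt prevents crawling but a blocked URL can still appear in results if it is linked from elsewhere, whereas the noindex meta tag requires the page to be crawled but reliably keeps it out of the index.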


34 Comments

Howto – AWStats Enhancements and Extensions

This area focuses on resources that enhance the functionality of the web analytics tool AWStats.

These resources have been developed to meet our clients' needs; as a contribution, we offer them here. Some may even make it into a future version of AWStats!

Warning: The information here is provided on a "worked for us," as-is basis for your testing, verification, and potential adoption.

ExtraSection Samples

AWStats has an excellent custom-report syntax called ExtraSection, which lets an organization both extend the standard AWStats reports and add organization-specific ones. Below we offer ExtraSection samples useful for sites involved in search engine optimization (SEO), web marketing, and/or monitoring of traffic from external sites.
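As a minimal sketch of the syntax, the fragment below defines an ExtraSection that reports the most-requested pages under a hypothetical `/landing/` directory. The directive names follow AWStats' numbered `ExtraSectionN` convention; the report title, URL pattern, and row limits are illustrative assumptions for this example.

```
# Hypothetical ExtraSection for an awstats.*.conf file:
# report hits on ad-campaign landing pages under /landing/
ExtraSectionName1="Ad campaign landing pages"
ExtraSectionCodeFilter1="200 304"
ExtraSectionCondition1="URL,\/landing\/"
ExtraSectionFirstColumnTitle1="Landing page URL"
ExtraSectionFirstColumnValues1="URL,(\/landing\/[^\?]+)"
ExtraSectionFirstColumnFormat1="%s"
ExtraSectionStatTypes1=PHB
ExtraSectionAddSumRow1=1
MaxNbOfExtra1=20
MinHitExtra1=1
```

The condition and first-column regular expressions both match against the request URL here; pages, hits, and bandwidth (`PHB`) are tallied for each captured URL, with a sum row added at the bottom of the report.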

Warning: Web server log analysis can be memory- and CPU-intensive. The AWStats documentation notes that each ExtraSection slows AWStats by roughly 8%. Proceed with caution.

10 Comments