Tag Archives: Inbound Links

Yahoo Search Marketing Tools: What’s at Risk & How to Avoid Surprises

When Yahoo and Microsoft announced their Search Alliance in July 2009, only high-level details of the agreement were available:

  • Microsoft will provide the development and management of the search results technology (Bing)
  • Microsoft will provide the search and content network ad platform (adCenter)
  • Microsoft will manage the relationship with self-service advertisers
  • Yahoo will manage the relationship with large accounts
  • Yahoo will provide their own user interface on top of the Bing results which will appear on Yahoo properties

Now that US and EU regulators have approved the deal, search marketers need to assess which Yahoo tools they rely on – and be prepared with alternatives should those tools be discontinued.

During the SMX West 2010 session Microsoft + Yahoo: What’s It All Mean?, I looked at the agreement’s implications for three Yahoo tools search marketing professionals have come to know and love:

2 Comments

7 sources of link intelligence data and key link analysis considerations

It may seem like a cliché, but on the web no website is an island. Any site worth its salt will have accumulated inbound links and will most certainly contain outbound links to other resources on the web. Indeed, one can easily say that without links to interconnect websites, there wouldn't be a World Wide Web.

For search engines such as Google, incoming links provide a strong signal of a website's authority. If multiple websites link to a specific site on a given topic, there is a good chance the site cited by the others is highly relevant for good reason. Google and other search engines identify the theme of a page by analyzing the page's content and the text of its incoming links – the anchor text, i.e. the clickable text you follow to arrive at the page. Links, especially inbound links, are thus among the most significant of the more than 200 factors Google considers in its ranking algorithms. Inbound links from related sites in a business's sector are also an excellent source of highly qualified direct traffic.
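To make the anchor-text idea concrete, here is a minimal sketch that lists the outbound links on a page together with their anchor text. It is only an illustration of the kind of raw link data link-analysis tools start from, not how any search engine actually processes links; the URL is a placeholder, and the script uses only the Python standard library.

```python
# Illustrative sketch: collect (href, anchor text) pairs from a page,
# since anchor text is one of the topical signals described above.
from html.parser import HTMLParser
from urllib.request import urlopen

class AnchorCollector(HTMLParser):
    """Collects (href, anchor text) pairs from <a> tags."""
    def __init__(self):
        super().__init__()
        self.links = []            # list of (href, anchor_text) tuples
        self._current_href = None
        self._current_text = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self._current_href = href
                self._current_text = []

    def handle_data(self, data):
        if self._current_href is not None:
            self._current_text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._current_href is not None:
            anchor = " ".join("".join(self._current_text).split())
            self.links.append((self._current_href, anchor))
            self._current_href = None

if __name__ == "__main__":
    # Placeholder URL; swap in the page whose outbound links you want to inspect.
    html = urlopen("https://example.com/").read().decode("utf-8", "replace")
    parser = AnchorCollector()
    parser.feed(html)
    for href, anchor in parser.links:
        print(f"{anchor!r} -> {href}")
```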

11 Comments

So many aspiring SEOs! – the SEO Quiz results are in

15 questions, 5 weeks and 5 books: almost 700 people took the 2008 SEO quiz challenge.

Note to the reader: this article was originally posted on our Italian blog on December 2nd. The quiz targeted an Italian audience; we’ve published this translation in order to allow a wider audience to follow search marketing developments in Italy.

Why an SEO quiz

The idea for the quiz came from reflections on the state of SEO knowledge and usage in Italy, observed from the perspective of an SEO practitioner.

Search engines, and Google in particular (question 1), are the gatekeepers between us and the net. We use search engines not only to search for information that we imagine is out there somewhere, but also to navigate to a specific site, such as Fiat, or to perform a task, such as buying a ticket for a Tiziano Ferro concert (question 15).

1 Comment

Links and Algorithms behind Blog Statistics: BlogBabel reopens.

I couldn’t help but notice the reopening of Italy’s primary blog classification service, BlogBabel. Just over a year ago I wrote about BlogBabel:

“While it is worth keeping in mind that BlogBabel’s ranking is just one measure of the importance of a particular blog, Ludo deserves kudos for the transparency in which BlogBabel’s rankings are calculated.”

Since then, the ranking factors have changed a bit. BlogBabel currently says the following parameters and weights are considered:

  • Google PageRank (weight: 1) – the "official" global weight Google assigns to a site. (It's worth noting that this is updated only once every 3-4 months and is not what Google uses internally.)
  • FeedBurner (weight: 0, thus not considered) – the number of feed subscribers for blogs.
  • Link/6 (weight: 1) – inbound links from posts on other sites, added within the last 6 months.
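BlogBabel publishes the factors and their weights but not the full formula, so the snippet below is only an illustration of how a weighted combination of factors like those above could be collapsed into a single score. The normalization caps and the example numbers are assumptions of mine, not BlogBabel's algorithm.

```python
# Illustrative only: a simple weighted-sum score over ranking factors like
# the ones BlogBabel lists. Weights mirror the list above; everything else
# (normalization, caps, example values) is assumed.

WEIGHTS = {
    "pagerank": 1.0,    # Google PageRank, on its 0-10 scale
    "feedburner": 0.0,  # feed subscribers, currently not considered
    "link6": 1.0,       # inbound links from posts in the last 6 months
}

def blog_score(factors):
    """Combine factor values (each already scaled to 0-1) into one score."""
    return sum(WEIGHTS[name] * value
               for name, value in factors.items() if name in WEIGHTS)

# Example: a blog with PageRank 5/10, 300 feed subscribers (capped at 1000),
# and 40 recent inbound links (capped at 100).
example = {
    "pagerank": 5 / 10,
    "feedburner": min(300 / 1000, 1.0),
    "link6": min(40 / 100, 1.0),
}
print(round(blog_score(example), 3))  # 0.9 with these assumed caps
```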
1 Comment

The Google Webmaster Dashboard, a.k.a. Google Sitemaps

In order to index and display web content in their search results, search engines need to be able to find the content. The first generation of Internet search engines relied on webmasters to submit a site’s primary URL, the site’s “home page”, to the search engine’s crawler database. The crawler would then follow each link it found on the home page. Problems soon emerged – much site content can be inadvertently hidden from crawlers, such as that behind drop-down lists and forms.

Update: Google Sitemaps was renamed Google Webmaster Tools on 5-Aug-2006 to better reflect its more expansive role.

Fast forward to 2005. Search engine crawlers have improved their ability to find sites through links from other sites – site submission is no longer relevant. Yet many websites are still coded in ways that impede automatic search engine discovery of the rich content often found on larger, complex sites.
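As an illustration of what the tool consumes, here is a minimal sketch of generating a sitemap file. The page URLs and dates are placeholders, and it uses the current sitemaps.org 0.9 namespace rather than the namespace Google Sitemaps used at launch in 2005.

```python
# Minimal sketch of building a sitemap.xml file for submission to a
# webmaster console. Only <loc> is required per URL; <lastmod> is an
# optional hint. URLs and dates below are placeholders.
import xml.etree.ElementTree as ET

SITEMAP_NS = "http://www.sitemaps.org/schemas/sitemap/0.9"

def build_sitemap(urls):
    """Return a sitemap XML string for an iterable of (loc, lastmod) pairs."""
    urlset = ET.Element("urlset", xmlns=SITEMAP_NS)
    for loc, lastmod in urls:
        url = ET.SubElement(urlset, "url")
        ET.SubElement(url, "loc").text = loc
        ET.SubElement(url, "lastmod").text = lastmod
    return ('<?xml version="1.0" encoding="UTF-8"?>\n'
            + ET.tostring(urlset, encoding="unicode"))

# Example usage: write the file to the site root as sitemap.xml.
pages = [
    ("https://www.example.com/", "2005-08-01"),
    ("https://www.example.com/products/", "2005-07-15"),
]
with open("sitemap.xml", "w", encoding="utf-8") as f:
    f.write(build_sitemap(pages))
```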

Comments Off