Tag Archives: URL
There is a wonderful saying one hears in big companies, particularly when discussing Knowledge Management (KM) initiatives: “if only we knew what we know”. Google is unlikely to be exempt from this problem, but its data-driven culture has produced several information dashboards that aim to overcome it by facilitating internal and external communication of data, from Google service statuses to internet statistics.
Search engines are great at helping us find something when we suspect that, to borrow a phrase from The X-Files’ Fox Mulder, the answer is out there somewhere. Yet search engines aren’t much help when you don’t even know or imagine that a resource exists. This article aims to ensure these mostly lesser-known Google tools and resources get the visibility they deserve.
So now that most of the uninformed hype surrounding Google Instant has been written, let’s take a hard look at what Google Instant really means for most companies and organizations.
Google Instant is an interface change
First of all, it is important to understand what Google Instant is and what it is not. Google Instant is a user interface change: it changes the way Google presents search results to its users.
How Google Instant works
As the user types a query, Google refreshes the displayed search results which, according to Google, best respond to the query typed so far or what Google predicts the query will be based on past queries.
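The prediction step can be illustrated with a toy sketch: given the characters typed so far and a log of past queries, return the most frequent past query that begins with that prefix. This is a deliberate simplification for illustration, not Google’s actual algorithm; the function name and the sample query log are invented.

```python
from collections import Counter

def predict_query(prefix, past_queries):
    """Toy prediction: the most frequent past query starting with the
    typed prefix, or the prefix itself if nothing matches."""
    counts = Counter(q for q in past_queries if q.startswith(prefix))
    if not counts:
        return prefix
    return counts.most_common(1)[0][0]

# Hypothetical query log for the example
log = ["weather rome", "weather rome", "web analytics", "weather milan"]
print(predict_query("we", log))  # → weather rome
```

Each keystroke would re-run a prediction like this and refresh the results for the predicted query, which is why the results page changes as you type.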
I’m currently using an Italian hosting service, Webperte, for my Italian blog. The service is fine, except for one minor frustration: they don’t support FTP retrieval of the web server access logs used by log-based web analytics systems such as Google’s Urchin.
The official solution is to log in to a hosting control panel, navigate a few screens and click on the log download links… a rather tedious process. Fortunately, the Perl scripting language offers a relatively easy way to automate this. I’ve hacked together a script, retrieve-hosting-logs.gz, which is designed to log in to a Parallels Business Automation control panel and download the two most recent access, error or ftp log files as desired. Feel free to use it at your own risk, and don’t expect support you haven’t paid for.
- Hard-code the username and password or specify them on the command line
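The core of the “two most recent logs” step can be sketched independently of the control-panel login. The script itself is Perl; the sketch below is a Python illustration of the same idea, and the log file naming scheme (`access_log.YYYY-MM-DD.gz`) is an assumption for the example, not necessarily what Parallels actually uses.

```python
import re
from datetime import date

def newest_logs(filenames, kind="access", n=2):
    """From a control-panel file listing, pick the n most recent logs of
    the requested kind (access, error or ftp). Assumes names like
    'access_log.2010-09-15.gz' -- an invented scheme for illustration."""
    pattern = re.compile(rf"{kind}_log\.(\d{{4}})-(\d{{2}})-(\d{{2}})\.gz$")
    dated = []
    for name in filenames:
        m = pattern.search(name)
        if m:
            dated.append((date(*map(int, m.groups())), name))
    # Sort newest first and keep the top n
    return [name for _, name in sorted(dated, reverse=True)[:n]]

listing = ["access_log.2010-09-13.gz", "error_log.2010-09-14.gz",
           "access_log.2010-09-15.gz", "access_log.2010-09-14.gz"]
print(newest_logs(listing))
# → ['access_log.2010-09-15.gz', 'access_log.2010-09-14.gz']
```

In the real script this selection would run against the file listing scraped from the control panel after authenticating.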
Say It Isn’t So: Marketing Resource Site MarketingProfs Seems To Be Cloaking Search Engines – Inadvertently?
Years ago savvy webmasters realized they could achieve better search engine visibility by creating two copies of a web page. One, text rich and graphics poor, would be seen by search engine robots, such as Googlebot, Yahoo Slurp and Microsoft Bing’s msnbot/bingbot. Everyday web users, surfing with Internet Explorer, Firefox, Chrome or Safari, would see a different version, often graphics rich and text poor.
The process of providing different web content to search engines and site visitors is often called cloaking, although some prefer terms such as conditional content delivery. Cloaking is expressly prohibited by Google, Yahoo and Microsoft’s Bing.
The real-world problem is that cloaking works, and if you’re important enough, you can get away with it until you get caught. At that point you’ll probably get a slap on the wrist, but little more. As an SEO consultant, this leads to many frustrating discussions with clients (and their webmasters) who can’t understand why they shouldn’t cloak too. My official answer is that if your site isn’t a throwaway site, you shouldn’t take the risk. Yet this discussion happens too often and, needless to say, clients don’t really like my answer.
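To make the mechanism concrete, the crudest form of cloaking is a server-side check of the User-Agent header. A minimal sketch follows; the bot token list and function names are invented for illustration, and, per the above, this is shown to explain the technique, not to recommend it.

```python
# Crawler tokens mentioned above; real User-Agent strings vary.
BOT_SIGNATURES = ("googlebot", "slurp", "msnbot", "bingbot")

def is_search_bot(user_agent):
    """Naive User-Agent sniff: true if the string contains a known
    crawler token. Trivially spoofed, which is one reason serious
    cloakers verify crawlers by reverse DNS instead."""
    ua = user_agent.lower()
    return any(token in ua for token in BOT_SIGNATURES)

def select_page(user_agent):
    # The two page variants are placeholders for the example.
    return "text-rich.html" if is_search_bot(user_agent) else "graphics-rich.html"

print(select_page("Mozilla/5.0 (compatible; Googlebot/2.1)"))  # → text-rich.html
print(select_page("Mozilla/5.0 (Windows NT 6.1) Firefox/3.6"))  # → graphics-rich.html
```

This is also why cloaking can happen inadvertently: any server logic that branches on the User-Agent (for mobile detection, say) can end up showing crawlers a different page than visitors see.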
With the approval of the Microsoft-Yahoo search deal by EU regulators, search engine marketers will soon be working in a new landscape. In western Europe where Google dominates with about 90% of the market, it’s tempting to react to the deal with a big yawn.
Yet Yahoo! often has a bigger impact on our search marketing than we might like to acknowledge. For many, Yahoo, through its Site Explorer and the Yahoo Search BOSS / Site Explorer APIs, is a primary source of competitive backlink data. And who among us doesn’t perform a few searches in Yahoo to benchmark the quality of Google’s results?
For paid search practitioners, the consolidation of three PPC / keyword advertising platforms down to two (Google & Bing) will certainly reduce operational and training costs. In some countries it will make the choice to expand beyond just Google much easier to justify. Yet Bing’s adCenter PPC service is currently limited to the US, Canada, the UK and France. What will happen to the other 30 countries served by Yahoo today?
At the end of May Microsoft announced its new search engine, Bing. As data from Bing’s first full month becomes available, I thought it would be interesting to take a quick look at the current market share enjoyed by the major search engines in the US and a “typical” European market, Italy. The real test of Bing’s success will be to check back in a few months to see whether Bing has gained traction with users. As the folks from Cuil can attest, a burst of publicity doesn’t necessarily translate into loyal search users.
Search Engine statistics, USA vs. Italy
Most web intelligence services are currently US-centric with very little worldwide reach. Unless stated otherwise, the data which follows is for the US market. Where available, I’ve also provided data for the Italian market, which in terms of search engine usage is rather typical of most western European markets.