Exploiting Intranet Search Engines for Data Discovery

No doubt about it: A good intranet search engine is a, maybe the, critical service to offer users. But have you thought about its value to the Webmaster? No, not for searching per se, but as a monitoring tool. Search query log reports shed light on how employees are using your intranet. You can discover what's in demand today and what topics are waning or gaining in popularity; it's your individual intranet zeitgeist. Search log analysis can offer insights into employees' expectations about intranet content and services. Another nugget of gold to mine from search logs is employees' own words for key concepts.

Analysis of search engine data is instrumental in helping you improve your intranet. There are four key ways that you can make use of the data gathered from your search engine:

* Content synopsis

* Site search performance

* Intranet usability

* "Best bets"

Search engines typically generate two types of data files that can be analyzed. As the search engine robot crawls the intranet, it records the URLs and files that it indexes, the number of terms, and so on. The second type of file is similar to a Web server access log. The search query log file typically stores the date, time, and search strings entered.
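If your engine's built-in reports don't slice the query log the way you want, a few lines of scripting will tally the most popular searches. The sketch below assumes a hypothetical tab-delimited log with date, time, and query columns; adjust the parsing to match whatever format your engine actually writes.

```python
# Tally the most frequent searches from a query log.
# Assumes a hypothetical tab-delimited layout: date<TAB>time<TAB>query
# (change the split and column index to match your engine's log format).
from collections import Counter

def top_queries(log_path, n=20):
    counts = Counter()
    with open(log_path, encoding="utf-8", errors="replace") as log:
        for line in log:
            parts = line.rstrip("\n").split("\t")
            if len(parts) < 3:
                continue  # skip malformed or truncated lines
            query = parts[2].strip().lower()
            if query:
                counts[query] += 1
    return counts.most_common(n)

if __name__ == "__main__":
    for query, hits in top_queries("query.log"):
        print(f"{hits:6d}  {query}")
```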

In the event that your search engine doesn't produce a query log data file, you may still be able to glean some information from the referrer portion of your Web server logs. If searchers follow a link from the search results screen, their query terms often show up in the referrer in the Web server log. With just a few minutes of setup, savvy Web log reporting tools can create a special report showing term frequencies for a single search engine. The caveat of using referrer data is that unsuccessful searches, where the user didn't click on any of the results, are not recorded.
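Extracting those terms yourself is also an option if your reporting tool can't be coaxed into it. The sketch below assumes the common Apache/NGINX "combined" access log format and that the search engine passes its terms in a "q" query-string parameter; the search host name and the parameter name are placeholders to swap for your own setup.

```python
# Pull search terms out of the referrer field of a Web server access log.
# Assumes the "combined" log format and a "q" query-string parameter;
# both the host name and parameter name below are illustrative.
import re
from collections import Counter
from urllib.parse import urlparse, parse_qs

# request, status, bytes, then the quoted referrer field
REFERRER_RE = re.compile(r'"[^"]*" \d{3} \S+ "(?P<referrer>[^"]*)"')

def terms_from_referrers(log_path, search_host="search.intranet.example.com"):
    counts = Counter()
    with open(log_path, encoding="utf-8", errors="replace") as log:
        for line in log:
            match = REFERRER_RE.search(line)
            if not match:
                continue
            ref = urlparse(match.group("referrer"))
            if ref.hostname != search_host:
                continue  # only count clicks arriving from the search engine
            for query in parse_qs(ref.query).get("q", []):
                counts[query.strip().lower()] += 1
    return counts

if __name__ == "__main__":
    for term, hits in terms_from_referrers("access.log").most_common(20):
        print(f"{hits:6d}  {term}")
```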

CONTENT SYNOPSIS

For many distributed intranets containing documents on dozens or hundreds of servers, your search engine is the best tool for a bird's-eye view of the breadth and depth of the content. Some spiders will generate a brief report indicating the types and number of documents indexed, the number of words identified and added to the index, bad links or URLs, and secure or password-controlled areas. In some cases, the report will also indicate whether content has changed since the previous visit, showing how many pages are revised from month to month. Swish-e (Simple Web Indexing System for Humans-Enhanced), a free search engine [www.swish-e.org] released under the GNU General Public License, will generate reports showing a list of the directories visited during its crawl. It will also provide a brief summary of the number of files and terms indexed, including a note if no stop words have been designated.

Other indexes will dynamically create a list of stop words based on frequency of occurrence. While many stop words, such as "a," "an," and "the," are harmless to exclude, others can be detrimental to your users.
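If you want to see which terms a frequency-based rule would discard before the engine quietly does it for you, you can compute document frequencies yourself. In the sketch below, the tokenization and the 50 percent cutoff are illustrative assumptions, not anyone's recommended defaults.

```python
# Flag candidate stop words by document frequency: any term appearing in
# more than `threshold` of the documents is reported for human review.
import re
from collections import Counter

def stop_word_candidates(documents, threshold=0.5):
    doc_freq = Counter()
    for text in documents:
        # count each term once per document
        doc_freq.update(set(re.findall(r"[a-z0-9']+", text.lower())))
    cutoff = threshold * len(documents)
    return sorted(term for term, df in doc_freq.items() if df > cutoff)

docs = [
    "ITS help desk hours for the library",
    "Library catalog and ITS account setup",
    "Campus map, parking, and ITS services",
]
print(stop_word_candidates(docs))  # ['and', 'its', 'library'] for these toy documents
```

Note that with even this tiny toy collection, terms people actually search for ("its," "library") land on the candidate list right alongside genuinely disposable words, which is exactly why the list deserves review rather than automatic exclusion.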

At one university, a top search on the campus Web site was "library." Its popularity was probably due to the fact that there was no library link on the home page, but that's a story for another day. The term was so common that the search engine marked it as a stop word, so a search intended to find the library home page inevitably failed. Fortunately, the Webmaster was easily persuaded not only to allow the word "library" as a valid search term but also to add some custom programming to ensure that the main library home page link appeared first in the result set. What helped make the case for adding a "best bet" for the term was that the acronym for the Webmaster's own department, ITS (Information Technology Services), was also a stop word. It's definitely worth looking at indexing reports from time to time to fine-tune which content is included in, and excluded from, the index. …
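A "best bet" of this kind is often implemented as a simple lookup table consulted before the engine's own results are returned, with the curated link pinned to the top. The sketch below is a minimal illustration with placeholder URLs, not the university's actual code.

```python
# Minimal "best bets" layer: pin a hand-curated link to the top of the
# result list when the query matches a maintained term.
# The URLs and the shape of engine_results are hypothetical placeholders.
BEST_BETS = {
    "library": ("University Library", "https://library.example.edu/"),
    "its": ("Information Technology Services", "https://its.example.edu/"),
}

def search_with_best_bets(query, engine_results):
    """Prepend a curated best-bet hit, if one exists, to the engine's results."""
    best = BEST_BETS.get(query.strip().lower())
    hits = list(engine_results)
    if best:
        title, url = best
        # avoid listing the same page twice if the engine already found it
        hits = [(title, url)] + [hit for hit in hits if hit[1] != url]
    return hits

print(search_with_best_bets(
    "Library",
    [("Branch hours", "https://library.example.edu/hours")],
))
```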