A Methodology for Extracting Quasi-Random Samples from World Wide Web Domains

ABSTRACT

The purpose of this paper is to describe a process for sampling specific domain name zones on the World Wide Web. Because of the size of the Web, sampling strategies must be employed to model and study the Web business environment effectively.

This paper discusses various efforts employed to sample the Web, ranging from random generation of Internet Protocol (IP) addresses and domain names to the process finally employed to create descriptive models of the dot-com domain name zone. The paper suggests that sampling the Web's top-level domains offers a reasonable alternative for business researchers because it requires only familiarity with simple Web utilities, such as the File Transfer Protocol (FTP), to obtain initial domain name listings.

INTRODUCTION

The Web is characterized by relentless growth. VeriSign estimates that over 1 million domain names are acquired in the dot-com domain each month. But how much of the growth of the Web may be attributed to business? What types and proportions of businesses populate the Web? Is the Web more amenable to large business or to small business? Does the Web consist mostly of entrepreneurial start-ups or of companies that have adapted their pre-existing business models to this new environment? How 'entrepreneurial' is the Web? Throughout (or because of) the frenzy of the dot-com craze and the uproar over the bursting of the dot-com bubble in 2001, many of these fundamental questions about business on the Web have remained unanswered.

Barabási (2002) states, 'Our life is increasingly dominated by the Web. Yet we devote remarkably little attention and resources to understanding it'. Relative to the extensive literature produced on the importance and potential of the Internet as a tool (Porter, 2001) or as an element of the physical world's business environment, empirical research regarding the demographics of the vast majority of Web business entities, or their marketing and revenue strategies, is limited and sketchy (Colecchia, 2000; Constantinides, 2004). Compounding the problem, Drew (2002) notes that 'Many academic empirical investigations and surveys in e-business suffer from small sample sizes, with consequent questions as to the meaning, validity and reliability of findings'.

Because of the extraordinary growth and sheer size of the Web, sampling methodologies are essential for making valid inferences regarding the nature of Web businesses. This paper discusses probability sampling methodologies that give researchers tools to assist in answering some of the fundamental 'how much', 'how many' and 'what type' questions regarding the conduct of business on the Web. The paper describes the procedures employed, as well as the mistakes we made that finally pointed to a more productive process. The methodology requires neither mastery of esoteric Web software packages nor familiarity with Web crawlers or the algorithms they employ to sample pages on the Web.
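
To suggest one simple form such an approach might take, the Python sketch below draws a simple random sample from a plain-text list of domain names (one per line, as might be obtained from a zone listing) and attaches a normal-approximation confidence interval to an estimated proportion. The file name, sample size, and classification counts are illustrative assumptions, not values from this study.

    # A minimal sketch of probability sampling from a domain listing.
    # All file names and counts below are placeholders.
    import math
    import random

    def sample_domains(path: str, n: int) -> list[str]:
        """Draw a simple random sample of n domain names from a file."""
        with open(path) as f:
            domains = [line.strip() for line in f if line.strip()]
        return random.sample(domains, n)

    def proportion_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
        """Normal-approximation 95% confidence interval for a proportion."""
        p = successes / n
        half_width = z * math.sqrt(p * (1 - p) / n)
        return (p - half_width, p + half_width)

    # Example: estimate the proportion of sampled sites classified as
    # 'business' after manual inspection (counts are placeholders).
    low, high = proportion_ci(successes=412, n=1000)
    print(f"Estimated proportion: between {low:.3f} and {high:.3f}")

The point of the sketch is simply that, once an initial domain name listing is in hand, standard survey-sampling machinery applies; no crawler or specialized software is required.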

WEB SAMPLING ISSUES

The original objective of the present research project required that we draw a representative sample of Web sites across multiple top-level domains. The first attempt adapted O'Neill, McClain and Lavoie's (1998) methodology for sampling the World Wide Web using Internet Protocol (IP) addresses. The first step was to develop a program that would generate random IP addresses, test each address for validity, and store the resulting valid addresses in a file. This would enable us to resolve each address to a domain name and then manually enter the valid domain names into a Web browser for further evaluation and classification.
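
To make the procedure concrete, the sketch below (in Python, which is not necessarily the language of the original program) generates random IPv4 addresses, treats an address as valid if a reverse DNS lookup returns a host name, and writes the resolved pairs to a file. The sample size, file name, and the use of reverse resolution as the validity test are illustrative assumptions; the original program's details are not specified in the excerpt.

    # Sketch of the random-IP sampling step: generate addresses at
    # random, keep those that reverse-resolve to a host name.
    import random
    import socket

    def random_ip() -> str:
        """Return a random dotted-quad IPv4 address."""
        return ".".join(str(random.randint(0, 255)) for _ in range(4))

    def reverse_resolve(ip: str):
        """Attempt a reverse DNS lookup; return the host name or None."""
        try:
            hostname, _aliases, _addresses = socket.gethostbyaddr(ip)
            return hostname
        except (socket.herror, socket.gaierror):
            return None

    if __name__ == "__main__":
        with open("valid_ips.txt", "w") as out:
            for _ in range(1000):  # illustrative sample size
                ip = random_ip()
                host = reverse_resolve(ip)
                if host is not None:
                    out.write(f"{ip}\t{host}\n")

Note that a very large share of randomly generated addresses will not resolve, so a loop of this kind typically yields far fewer usable entries than attempts.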

In the mid-1990s, nearly all Web domain names were assigned a unique, unambiguous IP address, referred to as a 'static' IP address. Around 1999, the practice of assigning 'dynamic' IP addresses became more common, due in part to the perceived diminishing supply of static or unique IP addresses. …
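
One practical consequence for IP-based sampling is that the mapping between domain names and IP addresses is no longer one-to-one. As a rough illustration (the domain names below are placeholders, not sites from the study), the following sketch groups forward-resolved names by address; any address carrying more than one name can be recovered only partially by a reverse lookup.

    # Illustration of a many-to-one mapping between domain names and
    # IP addresses. Domain names are placeholders; in practice they
    # would come from a zone-file listing.
    import socket
    from collections import defaultdict

    domains = ["example.com", "example.net", "example.org"]

    names_by_ip = defaultdict(list)
    for name in domains:
        try:
            names_by_ip[socket.gethostbyname(name)].append(name)
        except socket.gaierror:
            pass  # name did not resolve

    # An address listed with several names is shared or reassigned;
    # a reverse lookup on it returns at most one host name.
    for ip, names in names_by_ip.items():
        print(ip, names)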