This is the first book of a two-volume set on technology assessment (see also O'Neil & Baker, 1994). Our intention is to capture the best evidence to date on useful strategies for judging the future utility of technology. As the reader will see, some chapters straddle the boundary between evaluating a particular case of technology and broader conceptions of assessment. In chapters that report the evaluation of particular developments, the view to the future is often vague, and the reader is left to infer it in greater detail. Other chapters, however, make a clear effort to present and review particular methods by which technologies in general may be evaluated. The authors take three different approaches: analytic models, empirical summaries, and reports of development and evaluation.
It becomes obvious that it is almost impossible to meet the day-to-day demands of deadlines, software glitches, resource development, and empirical research while also keeping one's eye on the future of the field's progress. Conceptual bifocals are needed. Nonetheless, the authors in this volume represent a selection of the very best thinkers and actors in the practice of technology research, development, and evaluation. Trained as psychologists, their particular areas of expertise differ substantially and include cognitive, developmental, instructional, psychometric, and educational specialties. Most have chosen to work in a particular environment--postgraduate, military, elementary, or secondary school. Together they present a wide array of approaches, from deep qualitative analysis to classroom observation, performance assessment, and controlled experimentation. Evidence of validity is presented by some within tight frameworks and by others through freewheeling empiricism.
From their efforts we can learn our status in a representative set of important