Meta-Analytic Studies of Findings on Computer-Based Instruction
James A. Kulik
University of Michigan
What do evaluation studies say about computer-based instruction? It is not easy to give a simple answer to the question. The term computer-based instruction has been applied to too many different programs, and the term evaluation has been used in too many different ways. Nonetheless, the question of what the research says cannot be ignored. Researchers want to know the answer, school administrators need to know, and the public deserves to know. How well has computer-based instruction worked?
Reviewers handle such questions in two different ways. Some reviewers are selective in their approach to evidence. They hold that evaluation questions are best answered by key experiments, and so they sift through piles of reports to find the studies with the most convincing results. These studies become the focus of their reviews. Other reviewers feel that evaluation results are inherently variable and that evaluation questions are seldom decided by the results of an experiment or two. Such reviewers put together a composite picture of all the findings on a topic, and they use statistical methods to identify representative results. Both approaches are valuable. The first shows what researchers and developers can accomplish in extraordinary circumstances; the second shows what is likely to be accomplished under typical conditions. We need both types of reviews in the area of computer-based instruction.
All of my reviews on the topic of computer-based instruction, however, have been of the second type. For more than 10 years, my colleagues and I have been organizing and summarizing the evaluation literature on computer-based instruction and trying to identify representative results. I believe that comprehensive reviews like ours provide a good context for discussing the more exceptional results in the area. Our reviews provide a background. They make