A perplexing problem arises when multiple conditions are studied in an experiment. Multiple tests are then usually required; each test could err by producing a miss or a false alarm. The data analysis may thus suffer from multiple misses, multiple false alarms, or both. Statisticians have produced a variety of procedures to handle such situations, but all require hazardous compromise between the threat of misses and the threat of false alarms.
This chapter presents a commonsense approach that incorporates two empirical guidelines—planned comparisons and replication—as part of a per comparison philosophy.
When an investigator makes more than one test in a given experiment, the effective α escalates. This problem of α escalation is serious. With only two conditions in your experiment, you have only one test, so no problem arises. But now suppose you add a third condition and test between the largest and smallest of the three means. Since the .05 level holds with two conditions, it might seem that adding a third would not have much effect, at worst perhaps an effective α of .075. In fact, this test between the largest and smallest means has an effective α of almost .13, more than 2½ times larger. Even with small experiments, the effective α for a family of tests taken together can be markedly greater than the α used for each separate test.
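The .13 figure can be checked with a small Monte Carlo sketch. Under the null hypothesis, the three standardized group means are independent standard normal variates, and a two-sample z test between any pair rejects at the two-sided .05 level when |z| = |difference|/√2 exceeds 1.96. The simulation below (illustrative code, not from the text; the sample size of 200,000 experiments is an arbitrary choice) applies that criterion to the largest and smallest of the three means in each simulated experiment:

```python
import numpy as np

rng = np.random.default_rng(0)
n_sims = 200_000

# Under the null, the 3 standardized group means are iid N(0, 1).
means = rng.standard_normal((n_sims, 3))

# Two-sample z test between the largest and smallest mean:
# z = (max - min) / sqrt(2), since each standardized mean has variance 1.
z = (means.max(axis=1) - means.min(axis=1)) / np.sqrt(2)

# Fraction of null experiments "significant" at the two-sided .05 level.
effective_alpha = (z > 1.96).mean()
print(round(effective_alpha, 3))
```

The printed value falls near .13, confirming that cherry-picking the extreme pair among three conditions more than doubles the nominal .05 error rate.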
Publication information: Book title: Empirical Direction in Design and Analysis. Contributors: Norman H. Anderson - Author. Publisher: Lawrence Erlbaum Associates. Place of publication: Mahwah, NJ. Publication year: 2001. Page number: 525.