Recognizing problems, of course, is not enough. Organizations have to do something about them. Although high-performance teams often operate error-tolerant systems, the teams themselves are not tolerant of error: they do not accept error as "the cost of doing business," and they constantly try to eliminate it. High-performance teams spend a lot of time going over past successes and failures, trying to understand the reasons for each. Then they fix the problems.
But many organizations fail to follow through once problems are recognized. Politically influenced systems may respond with glacial slowness while key problems remain, as with the systems used to carry out air traffic control in the United States (Wald, 1996). Many of the computers used to direct traffic at U.S. airports can otherwise be found only in computer museums. At other times aviation organizations are caught up in political pressures that push them to act prematurely. New equipment may be installed (as in the case of the new Denver Airport) before it has been thoroughly tested or put through an intelligent development process (Paul, 1979).
Sometimes aviation organizations seem to need disaster as a spur to action. Old habits provide a climate for complacency while problems go untreated (Janis, 1972). In other cases, the political community simply will not provide the resources or the mandate for change unless the electorate demands it and is willing to pay the price. Often it takes a horrendous event to unleash the will to act. For instance, the collision of two planes over the Grand Canyon in 1956 was a major stimulus to providing more en route traffic control in the United States (Adamski & Doyle, 1994, pp. 4-6; Nance, 1986, pp. 89-107). When FAA chief scientist Robert Machol warned of the danger that Boeing 757-generated vortices posed to small following aircraft, the FAA did not budge until two accidents involving small planes killed 13 people (Anonymous, 1994). Only after these accidents was the following distance increased from 3 to 4 miles. It is possible to trace the progress of the aviation system in the United States through the accidents that brought specific problems to public attention. Learning from mistakes is a costly strategy, no matter how efficient the action that follows the catastrophe. The organization that waits for a disaster to act is inviting one to happen.
"Human factors" has moved beyond the individual and even the group. Human factors is now seen to include the nature of the organizations that design, manufacture, operate, and evaluate aviation systems. Yet although recent accident reports acknowledge the key roles that organizations play in shaping human factors, this area is usually brought in only as an afterthought. It needs to be placed on an equal footing with other human factors concerns. We recognize that "organizational factors" is a field just opening up. Nonetheless, we hope to have raised some questions that further investigations can now proceed to answer.
On one point we are sure. High integrity is difficult to attain, as its rarity in the literature attests. Nonetheless, it is important to study those instances where it exists and understand what makes it operate successfully there. We have attempted here to show that "high-integrity" attitudes and behaviors form a coherent pattern. Those airlines, airports, corporate and commuter operations, government agencies, and manufacturers that have open communication systems, high standards, and climates sup