Alan F. Stokes and Kirsten Kite
Human factors has been defined, perhaps just a little tongue-in-cheek, as the consideration of "fallible humans in interaction with systems designed by fallible humans" (Stokes & Kite, 1994). However, it would be misleading to mistake human evolution's cognitive design specifications for human fallibilities. After all, we do not blame the 747 (or its pilots, or designers) for its relatively poor hovering performance. But human factors and engineering psychology have taken their cues from cognitive psychology, and this, in turn, has shown surprisingly little interest in the systems engineering of humans; that is, what humans were designed to do, and for what ecological contexts (rather than what they can do -- like learning lists of nonsense words). The fact that the system engineer in this case is a self-organizing Darwinian process, still acting over time, should not blind us to the fact that, at any point in that development, Homo sapiens does have an engineering specification, a 'factory fit', as it were. Engineering psychologists, we argue, are therefore the very folk who should be in the vanguard of the new developments in evolutionary psychology and related sciences, and should not merely be pulled along by conventional cognitive psychology in general, and an unreconstructed information-processing (IP) framework in particular.
For most of its history cognitive psychology has, in essence, wholly disregarded our evolutionary past. The mind has been treated idealistically and prescriptively, as a unitary general-purpose learning mechanism, and, in general, likened to a computer in its information-processing functions. For example, there has been a marked focus on isolated "knowledge in the head," rather than on functionally and ecologically relevant "worlded knowledge" (in Merleau-Ponty's phrase). Reasoning processes that diverge from the traditional precepts of mathematical logic and Bayesian statistics have been labeled as inefficient or biased. Emotion, when it has been considered at all, has tended to be regarded as a "spoiler" that has been tacked on as an afterthought to standard, "cold" models of intellectual functioning (and generally to the detriment of that functioning). Consciousness, the jewel in the cognitive crown, has, until recently, been hidden in the discipline's basement less like a jewel than some mad aunt too embarrassing to deal with. Moreover, traditional cognitive psychology has failed to apply the Darwinian test of evolutionary plausibility to its accounts of human mental processes. Given this, the question of human interaction with automation can be recast. What happens when a conscious and self-monitoring, ecologically-embedded, socially-oriented, biologically-motivated and emotionally 'hot' cognition (as described by evolutionary psychology), interacts with the unconscious, context-insensitive (unworlded), 'cold' processing of machine cognition? The latter, of course, is much closer to architectures described by cognitive psychology in standard IP models, and realized as automation hardware by engineers.
Cognitively, we are hunter-gatherers, the descendants of Pleistocene hunter-gatherers, and the products of a lifestyle and environment that have characterized virtually the entire history of the hominid lineage -- several million years. Even in the past 100,000 years of fully modern humans, only the most recent 10,000 have seen any agriculture, and then only in some societies. Some remain hunter-gatherers today. The Pleistocene environment lacked not only most of the technologies that we now take for granted, but also a reliable food supply, writing, permanent settlements, and formalized class divisions. Communication was