Robert S. Goldfarb
There is a substantial case, made by serious scholars, that despite a methodological rhetoric that often seems to imply and require sustained attempts to falsify economic theories, economists as a group do not take this “responsibility” seriously. Some scholars, such as Mark Blaug (1980), subscribe to this description of the behavior of economists, and bemoan it. Others, such as Donald McCloskey (1983), appear to accept the description but reject the appropriateness of a falsificationist attitude. Still others, such as E. Roy Weintraub (1988), seem to take issue with the description itself.1 This chapter attempts to shed light on this debate by focusing on the actual practice of empiricism in economics. Is it reasonable to characterize the typical objective of empirical work in economics as severe testing of theories (“falsification”), as mere “verification”, or as something quite different?2
One well-known participant in the methodological debate, Mark Blaug (1980:254) asserts that “the central weakness of modern economics is, indeed, the reluctance to produce the theories that yield unambiguously refutable predictions, followed by a general unwillingness to confront these implications with the facts”. This proposition is put forward after quoting like-minded criticisms by Leontief (1971), Ward (1972) and others. Blaug elaborates on the lack-of-empirical-testing part of this proposition by observing that economists do “engage massively in empirical research”, but “unfortunately, much of it is like playing tennis with the net down: instead of attempting to refute testable predictions, modern economists all too frequently are satisfied to demonstrate that the real world conforms to their predictions, thus replacing falsification, which is difficult, with verification, which is easy” (pp. 256-7).
An empirically oriented economist, reacting to my verbal description of Blaug’s complaint, indicated that the complaint had some appeal to him based on his own experience. In particular, he reported that when he had strong a priori sign expectations based on theory, he would tend to go on doing additional statistical estimation so long as his results displayed “incorrect” signs. That is, his “stopping rule” was based on finding the “right” signs. His “stopping rule” procedure is surely not designed to severely test the theory in
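The biasing effect of such a stopping rule can be illustrated with a small simulation (a hypothetical sketch; the function names and parameters below are illustrative and do not come from the chapter). Suppose the true coefficient is exactly zero, so a single estimation yields the “right” (say, positive) sign only about half the time. A researcher who re-estimates until the right sign appears will nonetheless report that sign far more often than half the time:

```python
import random

def run_study(max_tries=5, seed=None):
    """Simulate one researcher following the sign-based stopping rule.

    Each 'estimation' draws a coefficient whose true value is zero,
    so a positive sign appears by chance about half the time. The
    researcher re-estimates until the 'right' (positive) sign shows
    up, or gives up after max_tries attempts.
    """
    rng = random.Random(seed)
    for attempt in range(1, max_tries + 1):
        coef = rng.gauss(0.0, 1.0)   # true effect is zero
        if coef > 0:                 # "correct" sign: stop and report
            return True, attempt
    return False, max_tries          # stopped with the "wrong" sign

def reported_positive_share(n_studies=10_000, max_tries=5, seed=42):
    """Fraction of simulated studies that end up reporting the 'right' sign."""
    rng = random.Random(seed)
    hits = sum(run_study(max_tries, rng.random())[0] for _ in range(n_studies))
    return hits / n_studies
```

With up to five re-estimations, roughly 1 − 0.5⁵ ≈ 97 per cent of studies report the “correct” sign even though the true effect is zero, which is one way of making concrete the sense in which such a procedure cannot severely test a theory.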