Academic journal article Political Research Quarterly

How Good Is Good Enough? A Multidimensional, Best-Possible Standard for Research Design

Article excerpt

Abstract

Recent years have seen a shift in methodological emphasis from the observable properties of a sample to its unobservable properties, that is, judgments about the process by which the data were generated. Considerations of research design have moved front and center. This article attempts to bridge discussions of experimental and quasi-experimental data and of quantitative and qualitative approaches, so as to provide a unified framework for understanding research design in causal analysis. Specifically, the author argues that all research designs aim to satisfy certain fundamental criteria, applicable across methods and across fields. These criteria are understood as desirable, ceteris paribus, and as matters of degree. The implications of this framework for methodological standards in the social sciences are taken up in the final section of the article. There, the author argues for a best-possible standard of proof that judges overall methodological adequacy in light of other possible research designs that might be applied to a particular research question.

Keywords

research design, quasi-experiment, natural experiment, qualitative methods

If you insist on strict proof (or strict disproof) in the empirical sciences, you will never benefit from experience, and never learn from it how wrong you are.

-Karl Popper (1934/1968, 50)

Our problem as methodologists is to define our course between the extremes of inert skepticism and naïve credulity . . .

-Donald Campbell (1988, 361)

Methodological standards are central to the working of any scientific discipline. Yet the problem of establishing and maintaining standards seems to pose greater practical difficulties in the social sciences than in the natural sciences. Scholars in astronomy, biology, chemistry, computer science, earth sciences, engineering, and physics are less conflicted over what constitutes a finding than their counterparts in anthropology, economics, political science, sociology, and related fields.1

What is a reasonable methodological standard for assessing causal inferences in social science? When is a method of analysis good, or at least good enough? Various approaches to this question may be derived from work in the philosophy of science and the history of science. However, work emanating from these fields tends to float hazily above the surface of social science praxis. My interest here is in standards as they are applied, or might be applied, to workaday decisions such as those facing reviewers and editors. When should we accept or reject a manuscript, and what are plausible methodological grounds for doing so?

Recent years have seen a dramatic shift in methodological emphasis from the observable properties of a sample to its unobservable properties, in particular, judgments about the process by which the data were generated, which I shall refer to as issues of research design. However, we have not fully come to grips with the diversity of research design issues relevant to causal inference. In this article, I propose a unified and relatively comprehensive framework to summarize these research design considerations: a common set of criteria that should apply across experimental and nonexperimental settings and to research in quantitative and qualitative modes.

My second objective is to argue against the application of a binary threshold for scientific adequacy. I shall argue that methodological criteria are multiple and diverse, and that they are subject to trade-offs and practical constraints. Most importantly, different substantive research questions require different methods and result in different overall levels of uncertainty. Some things are easier to study than others. In the concluding section of the article, I lay out the argument for a "best-possible" approach to methodological adequacy.

Note that space constraints compel a terse presentation of complex issues in this article. …
