A total of 40 measures of effect size were collected from the literature by Kirk (1996) and are listed in his Table 1. Notably absent from this list are the most important indexes: the mean, the one-variable regression coefficient, and their confidence intervals. This state of affairs is a symptom of the pervasive orientation that fixates on statistics to the neglect of empirics.
Questions of size and importance have no easy answer because they are basically extrastatistical issues. They need to be addressed in empirical terms, within the framework of each experiment. A focus on statistical indexes obscures the real issue. See further Importance Indexes and Self-Estimation Methodology in Chapter 6 of Anderson (1982), as well as Anderson and Zalinski (1991). Also of interest are Kruskal and Majors (1989) and Wright (1988).
Instead, Cohen tabulated sample size for each report. From these sample sizes he estimated power for the three cited hypothetical effect sizes of .20, .50, and .80, using Equation 3 (or formulas for comparable measures of effect size, such as r). Cohen found that power averaged just under .50 for a “medium” effect size. He arbitrarily assumed that a “medium” effect size was the norm for empirical studies and so concluded that published experiments generally lack power.
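To make the arithmetic concrete, the following is a minimal sketch of this kind of power calculation, assuming a two-sided, two-sample t test at α = .05 with equal group sizes. Cohen's Equation 3 is not reproduced in this section, so a standard normal approximation to the noncentral t stands in for it, and the sample size of 32 per group is a hypothetical figure, not one of Cohen's tabulated values.

```python
# Sketch of approximate power for a two-sample t test at Cohen's
# "small," "medium," and "large" effect sizes (d = .20, .50, .80).
# Uses the normal approximation; n = 32 per group is hypothetical.
from math import sqrt
from scipy.stats import norm

def approx_power(d, n_per_group, alpha=0.05):
    """Approximate two-sided power for a two-sample t test.

    d: standardized mean difference (Cohen's d)
    n_per_group: sample size in each of the two groups
    """
    z_crit = norm.ppf(1 - alpha / 2)           # critical z, two-sided test
    noncentrality = d * sqrt(n_per_group / 2)  # expected shift of test statistic
    return norm.cdf(noncentrality - z_crit)    # P(reject | true effect = d)

for d in (0.20, 0.50, 0.80):
    print(f"d = {d:.2f}: power ~ {approx_power(d, 32):.2f}")
```

With these assumed inputs, the sketch reproduces the pattern Cohen reported: power near .50 for a “medium” effect of .50, but far lower (about .12) for a “small” effect of .20.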