The authors will introduce a new method of analysis that combines qualitative and quantitative methods to help researchers analyze data when they lack national random samples.
Glass notes in his "Meta-analysis at 25" that he could not believe the success his statistical method had achieved and the number of Internet entries that use Meta-analysis. (Glass.ed.asu.edu/gene/papers/meta25.html) His original idea was to challenge Eysenck's literature review on psychotherapy. Glass had found inner peace through therapy, while Eysenck's review suggested that the whole of talk therapy was a fraud or a placebo. Glass reviewed the same studies and others and aggregated the numbers into those pointing toward successful outcomes and those that found no difference. To control for bias due to larger samples in some studies than in others, he homogenized the data by using measures of central tendency over variance: the difference between the two groups' means was divided by the standard deviation, and the resulting effect sizes were compared with a "t" or "F" test (depending on the number of studies). In academic jargon, he standardized the data.
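The standardization step described above can be sketched in a few lines. This is a minimal illustration, not Glass's actual code; the outcome scores below are hypothetical, and the formula shown is the standardized mean difference attributed to Glass (treatment mean minus control mean, divided by the control group's standard deviation).

```python
from statistics import mean, stdev

def glass_delta(treatment, control):
    # Glass's effect size: (mean_t - mean_c) / sd_c.
    # Dividing by the control SD puts all studies on one scale.
    return (mean(treatment) - mean(control)) / stdev(control)

# Hypothetical outcome scores from one small study
therapy  = [12.1, 14.3, 11.8, 15.0, 13.2]
waitlist = [10.2, 11.0, 9.8, 10.5, 11.3]

d = glass_delta(therapy, waitlist)
```

Because each study's raw units cancel out, effect sizes from studies with different measures can be aggregated and tested together.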
The whole procedure was an incredible success. As most researchers know, purposive samples are often drawn because the researcher cannot afford to sample the entire nation.
Corporations and political parties can do so, but individual researchers do not have that kind of money. Thus, samples are drawn from available populations (purposive sampling) and are not random. Non-random samples are used for both experimental and control groups, matched on demographics, and a goodness-of-fit test is used to ascertain whether there is a difference at the .05 level of significance. Another strategy uses a large purposive sample and a cross-sectional design, analyzing the within-group difference between two demographics or psychographics. Both proceed "as if" there were a large randomized national population. A third strategy is to draw a random sample from a school, city, or target area and treat it "as if" it were a large randomized national sample. All the examples listed above are flawed, but very useful.
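The matching check mentioned above can be sketched as a chi-square goodness-of-fit test: the sample's demographic counts are compared against the counts the matched design calls for. The counts here are hypothetical; the critical value 7.815 is the standard chi-square cutoff for 3 degrees of freedom at the .05 level.

```python
def chi_square(observed, expected):
    # Pearson's goodness-of-fit statistic: sum of (O - E)^2 / E
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

observed = [48, 102, 65, 35]   # hypothetical counts in each demographic cell
expected = [50, 100, 70, 30]   # counts the matched design calls for

stat = chi_square(observed, expected)
CRITICAL_05_DF3 = 7.815        # chi-square critical value, df = 3, alpha = .05
groups_match = stat < CRITICAL_05_DF3
```

If the statistic stays below the critical value, the researcher treats the non-random sample as adequately matched and proceeds "as if" it were random.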
Glass takes this a step further by aggregating ALL studies and using significance testing for differences or the lack thereof. In other words, he quantifies literature reviews. A researcher with little money can, over time, obtain a number of studies on a particular topic from the reference librarian, quantify them, run a significance test, and publish the findings. Twenty-five years on, Glass notes how widely the strategy has been used.
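The aggregation step can be illustrated with a toy example. This is a simplified sketch, not Glass's procedure verbatim: each hypothetical study contributes an effect size and a sample size, and the studies are pooled into one overall estimate by weighting each effect size by its n, which is how small studies are combined into a single quantified literature review.

```python
# (effect size, sample size) pairs -- all values hypothetical
studies = [
    (0.40, 20),
    (0.65, 35),
    (-0.10, 15),   # a study that found no benefit
    (0.55, 50),
]

total_n = sum(n for _, n in studies)
# Sample-size-weighted mean effect: larger studies count for more,
# which addresses the bias from unequal sample sizes.
pooled = sum(d * n for d, n in studies) / total_n
```

Four double-digit samples here become a pooled analysis of 120 individuals, and the pooled effect can then be submitted to a significance test.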
Later incarnations by others have used statistical manipulations to further randomize the data, and some have stratified it by using only the best studies, those with the most transparent findings. (Ibid.) Thus, where original studies had double-digit samples, Meta-analysis could pool thousands of individuals. Further, various controls, different stimuli, and various outcome measures were leveled into a single set of numbers to analyze with a "t" test. Last, studies whose measures may have been nominal or ordinal were treated as interval or ratio data, and hard-number assumptions were made. Meta-analysis gave individual researchers with little or no grant money a chance to compete with large research institutions.
Glass defended his method with exuberance, but admitted that Meta-analysis was not as robust as a large national random sample. He indicated, "Moreover, the typical meta-analysis virtually never meets the condition of probabilistic sampling of a population." (Ibid.) To make this clearer, in a national presidential election Meta-analysis would take all the candidates' primary wins and losses, aggregate and randomize them, and predict the winner. The two major political parties, on the other hand, would draw a large random sample and keep interviewing, continuously sampling up to Election Day. …