Once you have carried out a triangulated data collection plan and have all the data you hoped for, it is time to start making meaning of it all. It is likely that some of your data are quantitative (test scores, attendance records, and so on) and some are qualitative (journals, interview transcripts, student portfolios, and so on). On the surface it may look like quite a hodgepodge. In this chapter you will learn how to apply a generic process for making sense of these disparate data.
Some may argue that in order to truly understand a phenomenon, there is no acceptable substitute for personally reviewing all the relevant data. This assertion holds more than a little truth. Returning to the earlier jury analogy, if we were asked to arrive at a verdict in a criminal trial, the parties to the dispute would expect us to consider every available piece of evidence as we deliberated on the guilt or innocence of the defendant. In the business of real life, however, few of us have the time to review every piece of pertinent material related to the myriad tasks before us. This usually isn't a problem, because most of us have, consciously or subconsciously, come to rely on dependable sources to synthesize this information for us. To illustrate this point, let's consider two synthesizers many of us rely on: the news reporter and the history teacher.
If we, as citizens, wanted to fully understand what is happening with a piece of legislation before Congress, we would need to review the statements, predilections, and biases of each of the 535 senators and representatives. Furthermore, we would find it helpful to review the history of