Magazine article: Computers in Libraries

Making a Difference: The Great Reading Adventure Revisited: Since It Was First Announced in CIL (May 2014), the Software Has Seen a Full Cycle of Code, Sweat, and Tears

In 2013, the idea of public libraries having a fresh platform to gamify summer reading programs became a reality with the Great Reading Adventure (GRA). The software was a game changer, introducing digital badges, robust reporting, and embedded literacy content into an open source framework, which remains free to use, modify, and share. It addressed a need to bring library services into the modern era of computing and user experience. It kick-started new conversations within communities of educators, developers, and librarians. It even won some awards.

But above all, it worked.

The GRA drew 64,987 participants in its pilot year at the Maricopa County Library District, a user base that grew to 77,880 last summer. Since it was first announced in CIL (May 2014), the software has seen a full cycle of code, sweat, and tears.

The first year of the GRA was all about getting the platform off the ground. It was a new application that had to be put through its paces to refine the user experience and prove itself as a truly viable platform for summer reading programs. In the second year, our aim was to make sure it worked: the GRA introduced a number of new concepts to summer reading programs, and we wanted to confirm they did what we intended.

Method

One of the GRA's most compelling features is its built-in assessment tool. This new functionality gives us the long-sought-after ability to objectively measure our impact on the infamous "summer slide," the learning loss experienced by a large portion of our nation's students as they transition between school years. With this feature, we were able to introduce literacy assessments into summer reading programs.

Through an innovative and award-winning partnership with the Maricopa County Education Service Agency, we developed a standardized literacy assessment to gauge the change in reading comprehension scores of students exiting first and second grade. A full technical brief explaining the assessment blueprints, test maps, and standards is available on the GRA development site.

Reading skills were measured with multiple-choice questions, based on reading levels from the Developmental Reading Assessment, over a two-month window during the summer. Two pre-test and post-test pairs were constructed in parallel to offer the best possible comparison of raw scores. The tests were administered in the first and last weeks of the program and were taken by 286 students: 39 age 6, 147 age 7, and 100 age 8.
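
To make the raw-score comparison concrete, here is a minimal sketch of how pre-test and post-test results might be summarized by age group. The record layout and field names are hypothetical for illustration, not the GRA's actual data model.

    # Summarize mean raw-score change by age group.
    # Hypothetical records; the GRA's real schema may differ.
    from statistics import mean

    scores = [
        {"age": 6, "pre": 11, "post": 13},
        {"age": 7, "pre": 14, "post": 14},
        {"age": 7, "pre": 9,  "post": 12},
        {"age": 8, "pre": 16, "post": 17},
    ]

    for age in sorted({s["age"] for s in scores}):
        changes = [s["post"] - s["pre"] for s in scores if s["age"] == age]
        print(f"age {age}: n={len(changes)}, mean change={mean(changes):.2f}")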

Limitations

Before launching into the results of our work, we'd like to note our study's limitations. First and foremost, it looked at very few demographic factors outside of age and geography; we made no effort to collect socioeconomic information beyond readily available Census data. For this study, we looked exclusively at children's pre-test and post-test scores, correlating their performance with the amount of activity that was logged.
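
As a rough illustration of that correlation step, the sketch below computes a Pearson coefficient between each child's score change and the reading activity logged in the program. The sample values and variable names are invented for the example; a real analysis would draw on program records.

    # Correlate score change with logged reading activity (Pearson r).
    import math

    score_change = [2, 0, 3, 1, 4, 1]              # post-test minus pre-test, per child
    minutes_logged = [300, 120, 450, 200, 600, 180]  # reading minutes logged, per child

    n = len(score_change)
    mean_x = sum(score_change) / n
    mean_y = sum(minutes_logged) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(score_change, minutes_logged))
    sd_x = math.sqrt(sum((x - mean_x) ** 2 for x in score_change))
    sd_y = math.sqrt(sum((y - mean_y) ** 2 for y in minutes_logged))
    print(f"Pearson r = {cov / (sd_x * sd_y):.2f}")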

The literacy measure we used is still in its infancy. The tool was created by assessment development experts, but it has yet to go through the rigorous testing and analysis necessary to prove its efficacy. We were not looking for a definitive instrument for measuring children's literacy levels, merely a quick-and-dirty assessment to tell us whether or not our program was on the right track. With time and additional study, it will become more effective at measuring changes in reading comprehension scores.

There are also some technical limitations that will need to be addressed both by software updates and an assessment that's wider in scope. The measure we used doesn't target an individual at his or her own reading level. Instead, we took a more standardized middle path that had a number of questions designed for different reading levels, all delivered in the same assessment for participants of a specific age. …
