Academic journal article: T.H.E. Journal (Technological Horizons in Education)

Getting the Story Straight: Recent Articles in the Nation's Two Most Influential Newspapers Draw an Inaccurate Picture of Education Technology. A Three-Pronged Counteroffensive Is in Order

I REMEMBER TAKING a Red Cross class a number of years ago, and one thing has stuck in my head: to stop the bleeding, apply more pressure.

That phrase has come to mind several times over the past few weeks as the popular press has featured educational technology in a less than favorable light. The first instance was an article in The Washington Post, reported widely in newspapers across the country, about the Department of Education's study on the impact of educational technology, titled "Effectiveness of Reading and Mathematics Software Products." The second was a piece in The New York Times that appeared on the front page on May 4, titled "Seeing No Progress, Some Schools Drop Laptops."

Punctured by the two most revered newspapers in the country, the ed tech industry needs to stop the bleeding. How do we do that? Apply pressure--by debunking the source, asking the right questions, and being our own advocates. All of these approaches are interconnected, but taking a look at each may help you and others in your district and your state counter the negative coverage educational technology is getting.

Debunk the source. In this case, I don't mean the Post or the Times; I mean the arguments the articles make, and the "facts" presented in support of those arguments. There are a number of reasons for the findings in the DoE study, and they are readily discovered by reading the full report, or even simply scanning the executive summary. It's unfortunate--and unfair--that the Post writer did not bring those reasons to light, which would have tempered the conclusion that the study is "a rebuke of educational technology."

For example, one factor the Post left unexamined is the way the technology products were administered. The report says that students used the software for an average of about 10 percent of instructional time. For fourth-grade reading, one product was used for seven hours over the year, and another for 20 hours. Can we really expect a tool used for seven hours in a single year to change learning outcomes?

Another problem with the DoE study is the averaging of different types of software products into a final result. In sixth-grade math, the study notes, "two products were supplements to the math curriculum, and one was intended as a core curriculum." In fourth-grade reading, "three of the four products provided tutorials, practice, and assessment geared to specific reading skills, one as a core reading curriculum and two as supplements to the core curriculum. The fourth product offered teachers access to hundreds of digital resources such as text passages, video clips, images, internet sites, and software modules, from which teachers could choose to supplement their reading curriculum."

Cheryl Lemke of the Metiri Group (www.metiri.com), a consulting firm dedicated to advancing the use of technology in schools, points out that summaries across studies conducted on different technology-based learning approaches tend to average out the negative and positive gains of any individual approach. She cites a recent report by Yigal Rosen and Gavriel Salomon of Israel's University of Haifa that demonstrates this effect.

The study can be faulted further for the quality of its teacher training. "At the end of the training," the executive summary explains, "most teachers reported that they were confident that they were prepared to use the products with their classes. Generally, teachers reported a lower degree of confidence in what they had learned after they began using products in the classroom." However, reading the full report reveals more specific information about apparent inadequacies in the training, such as this from the first-grade reading component: "The need for such additional support is suggested by the finding that by the time of the first classroom observation (generally about mid-fall), when most teachers had begun to use products, the proportion of teachers indicating that the initial training had adequately prepared them had declined from 95 percent at the end of the initial training to 60 percent. …
