Moving toward Cognitive Alignment: Effective Data Provides Feedback Teachers Can Use to Make Adjustments in Learning Activities That Result in Standards Alignment with Content and Cognitive Rigor


Check out this list of effective teaching practices:

* Identifying similarities and differences

* Summarizing and note-taking

* Reinforcing effort/providing recognition

* Homework and practice

* Nonlinguistic representations

* Cooperative learning groups

* Setting objectives/providing feedback

* Generating and testing hypotheses

* Cues, questions, and advance organizers (Marzano, 2001)

Or this list of effective strategies for English learners:

* Introduce new material in a "whole-part-whole" framework

* Provide for active student involvement

* Maintain a print-rich environment

* Access prior learning

* Provide for peer interaction

* Ensure that "meaning" precedes "form"

* Provide multiple opportunities to verbalize thoughts

* Use formative assessments

* Use models, demonstrations, realia, visuals

* Prompt and correct

(These are among the practices identified by the New Teacher Center at the University of California, Santa Cruz as essential for assisting English language learners.)

There is something these lists have in common: overwhelmingly, the strategies require higher-order thinking. However, in walkthroughs of now more than 50 schools, most of which are feeling some kind of pressure related to state and federal accountability requirements, I've noticed that one of these practices makes up more than 50 percent of the strategies being used during these short visits. And it's not a strategy requiring higher-order thinking; it's the strategy of practice.

This got me wondering. If the most effective teaching strategies require higher-order thinking, but the most-used strategies seem to involve lower-order thinking, then what kind of information would be helpful in motivating folks to do things differently? Aha! If a comparison could be made between the cognitive rigor of the content standards students are supposed to be learning and the cognitive rigor of the work students are actually doing, then that kind of data should be helpful in analyzing whether the work matches the demands of the standards. But how do you do that?

There are a number of ways to look at cognition, the most popular being "Bloom's Taxonomy," which first appeared in 1956. In my own work with the Walk'bout, I used a simpler method of looking at cognition (Depth of Knowledge) designed by Norman Webb in his work evaluating state standards tests. Familiarity with a system of cognitive demand allows one to evaluate the cognitive level of student work, but how could a busy school leader ever find the time to analyze all of California's content standards so that a comparison could be made?

Having created a database of the content standards (ACSA's Standard Finder), I realized that this database could also include a rating of each standard. Before embarking on the task of analyzing all 4,638 standards on a cognitive scale, I wanted to be sure I used a scale that would provide the most useful information. In the process of examining different cognitive scales, I realized that "A Taxonomy for Learning, Teaching, and Assessing" (Anderson et al., 2001), the new "Bloom's," if you will, had a number of advantages over other systems. The revised taxonomy was developed by David Krathwohl, Lorin Anderson and others as a salute to the original "Bloom's Taxonomy" after 35 years of use. In the revised taxonomy, the names of the original six levels of cognition have been changed from nouns to verbs:

1. Knowledge became Remember

2. Comprehension became Understand

3. Application became Apply

4. Analysis became Analyze

5. Synthesis became Create

6. Evaluation became Evaluate

(The revision also reorders the top two levels, placing Evaluate fifth and Create at the top.)

Adding a sense of action to these words made it easier, for me at least, to look at student work and analyze the level of work being done. …
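For readers who want to picture the mechanics of the comparison described above, here is a minimal sketch in Python. Everything in it is illustrative: the standard identifiers, the ratings, and the walkthrough observations are hypothetical and are not drawn from ACSA's Standard Finder or from the author's data. It simply shows how a standard's rated cognitive demand might be set against the level of work observed in a classroom.

```python
# A minimal sketch, assuming hypothetical data.
# Levels follow the revised taxonomy: 1 Remember, 2 Understand, 3 Apply,
# 4 Analyze, 5 Evaluate, 6 Create.
LEVELS = {1: "Remember", 2: "Understand", 3: "Apply",
          4: "Analyze", 5: "Evaluate", 6: "Create"}

# Hypothetical ratings of a few content standards on the cognitive scale.
standard_ratings = {
    "ELA.4.RC.2.2": 4,   # e.g., compare and contrast information (Analyze)
    "MATH.5.NS.1.1": 3,  # e.g., apply place-value concepts (Apply)
    "SCI.6.ES.2.3": 6,   # e.g., design an investigation (Create)
}

# Hypothetical walkthrough observations: the standard being taught and the
# cognitive level of the work students were actually doing.
observations = [
    {"standard": "ELA.4.RC.2.2", "observed_level": 1},
    {"standard": "MATH.5.NS.1.1", "observed_level": 3},
    {"standard": "SCI.6.ES.2.3", "observed_level": 2},
]

aligned = 0
for obs in observations:
    demanded = standard_ratings[obs["standard"]]
    observed = obs["observed_level"]
    gap = demanded - observed
    status = "aligned" if gap <= 0 else f"below the standard by {gap} level(s)"
    print(f"{obs['standard']}: standard demands {LEVELS[demanded]}, "
          f"students were {LEVELS[observed]} -> {status}")
    if gap <= 0:
        aligned += 1

print(f"{aligned} of {len(observations)} observed activities met the "
      f"cognitive demand of their standard.")
```

Run as written, the sketch reports, for each observed lesson, whether the student work reached the cognitive level the standard demands and then totals how many activities were aligned; that is the kind of school-level summary a leader could bring back to teachers after a set of walkthroughs.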