Since schools and districts transitioned to the Common Core standards, I’ve been asked a number of times to show teachers how to write questions similar to those on the new high-stakes tests. For example, PARCC has three types of ELA items: evidence-based selected-response questions, technology-enhanced constructed-response items, and prose constructed-response items.
PARCC also uses three types of math items: Type I items assess concepts, skills, and procedures; Type II items assess mathematical reasoning; and Type III items assess modeling and application in real-world situations. Most schools that use the SBAC assessment have become more familiar with the performance tasks students are asked to complete on those assessments and often want to add similar items to their own. If students are never asked to complete such items before the high-stakes tests, will they be prepared to show what they are really able to do? And when does practice become “teaching to the test”?
The research is clear that when high school students simply practice items similar to ACT items, little learning occurs (Allensworth, Correa, & Panisciak, 2008). So if spending valuable instructional time practicing these items is not an effective instructional strategy, is there any reason to use items similar to our high-stakes tests in our own work? I think we can easily make the case that there are several.
I’ve observed several benefits to student learning as I’ve worked with teachers who use items similar to PARCC and/or SBAC items on their own formative and summative assessments. The first is that teachers and their students are better able to understand the rigor of the new standards. While not necessarily harder, these standards expect students to master concepts with more cognitive demand. In the past, students might have seen only multiple-choice questions on a high-stakes test, and those items may have asked for recall of information or some analysis. The reality is that students could still guess at the answer and sometimes be lucky enough to choose correctly. Now they are asked to “make a conjecture,” “critique the reasoning of others,” “defend an argument,” and “support your thinking with evidence from the text.” Items that require students to expose their thinking don’t allow guessing, and when teachers can see student thinking, responding with additional support becomes much easier.
The second change I’m seeing is that teachers are using instructional strategies in their classrooms that foster more critical thinking and reasoning. Once they have aligned their lessons to the more demanding learning targets, the instructional strategies and resources they’ve previously used may no longer fit. Consider the math practice “critique the reasoning of others”: during math class, the teacher might present several student solutions to a problem and have students work in groups to determine whether the thinking is accurate and whether it leads to a correct solution. In ELA classrooms, teachers are using more complex text with their students—text that is difficult even for students reading at “grade level.” They are using close reading to give students experience with complex text and trying out scaffolding strategies such as “chunk the text,” “use context to understand difficult vocabulary,” and “annotate the text.” When students encounter more difficult text on high-stakes tests, they have strategies for moving forward.
A third change I’ve noticed is that teachers are able to use the results of their formative assessments more effectively during the instructional cycle. Instead of formative assessments simply identifying whether students are correct or incorrect, these types of items allow teachers to be more diagnostic in their work. Student misconceptions are more apparent when students have had to show their thinking while solving a problem or answering a question, so teachers can design corrective instruction and interventions more precisely. This also enables the fourth change I’ve noticed. When responding to items that reveal student thinking, teachers can provide more accurate and informative feedback than simply a grade or percent correct. Giving students more responsibility for their own learning increases student achievement (Hattie, 2012).
Given these benefits for student learning, teachers should feel justified and empowered to align their own assessment items to those used on the newer high-stakes tests. They aren’t “teaching to the test”; they are teaching concepts at the rigor the standards expect.
Allensworth, E., Correa, M., & Panisciak, S. (2008). From high school to the future: Why ACT scores are low in Chicago and what it means for schools. Consortium on Chicago School Research, University of Chicago. Retrieved July 10, 2016, from https://consortium.uchicago.edu/sites/default/files/publications/ACTReport08.pdf
Hattie, J. (2012). Visible learning for teachers: Maximizing impact on learning. London, England: Routledge.