Over the last few months, I’ve had the opportunity to work with collaborative teams as they discuss the results of their common formative assessments and plan how they will respond to the information those assessments provide. In some of these situations, I’ve been asked to work with teams that are frustrated by a process they see as overly cumbersome and complex, and not very helpful in their work. So I’ve been sharing my mantra with them: “This has to work with real teachers in real schools, working with real kids.” What I’ve noticed is that teams that feel the process is convoluted and overwhelming can usually simplify their work with a few tweaks to either the process or the assessment design. These tweaks include things like:
- Keep the CFA short and focused on a few learning targets at a time. When you assess more than 2-3 targets, you will likely have students who miss many (if not all) of those targets, and your team must then be prepared to respond to all of the assessed targets. Instead, focus on the essentials: those concepts that your team agrees ALL students must know and be able to do to be successful. Your CFA might end up with only one or two constructed response items, but if they are the right items, you’ll be able to identify which students need help, and what that help needs to be. With 2-3 targets, your corrective instruction can easily occur the next day in classrooms. Having shorter assessments likely means you’ll need to assess more frequently; however, the research supports the idea that the more frequently you assess, the more students will learn (Bangert-Drowns et al., 1991).
- When you plan the assessment, decide on the expectations for proficiency. I’ve seen many teams debate the issue of whether 80% or 85% is the right “cut score.” The reality is that if you’re assessing essential learning targets, students must reach proficiency on ALL of them. The team, however, needs to decide what that proficiency must look like. For example, if you are using multiple-choice items, decide how many students need to get correct to be considered proficient. Let’s say that you have three multiple-choice items to assess an essential learning target. Your team may decide that the student must get two of them correct. This allows for the possibility that a student might misread or misunderstand a question. For another learning target, you might be using one constructed response question. Your team might develop a rubric that has three levels of proficiency: proficient, partially proficient, and not proficient. By writing the rubric, all team members, as well as all students, know what the expectation for their response is. Recognize that some constructed response questions are either correct or incorrect and won’t need a rubric. Make sure your team has developed an answer key with the correct responses prior to giving the assessment.
- Be careful about which questions you’re using to assess each learning target on the assessment. To be valid, the items in the assessment must match not only the content you’ve taught, but also the level of rigor at which you taught it. Consider the learning target, “Analyze multiple accounts of the same event, noting similarities and differences in the point of view they represent.” If the student is given a piece of text and asked to explain the point of view it’s written from, the content is the same as the target, but the rigor is different. When using pre-written items from a textbook or test bank, teams must be extremely careful to make sure both rigor and content are a match.
- Plan the response based on the information your assessment provides. With well-designed assessment items, you should be able to uncover not only which students need help, but also what that help needs to look like. When students expose their thinking in their responses to a constructed response item, the team can put the “not yet proficient” students into smaller response groups based on the misconceptions or misunderstandings the team has identified. Consider, for example, an eighth grade science team who asked students to respond to the following: “Samantha said that aluminum in a block and aluminum in a soda can have the same density. Is she correct? Justify your answer.”
By bringing the student work to the table and categorizing the responses to this question, the team sees that there are several different misconceptions among the incorrect responses. Some students believe that if the volume of aluminum is different in the two objects, they don’t have the same density. Other students see the thin aluminum in the can as affecting the density. Based on these misconceptions, the team then designs the response specifically around the student need. For example, those who see the thin can as having a different density are supported as they work together in the lab the next day to calculate the density of the aluminum in the can. The students who thought volume would affect the density are supplied with several blocks that have the same volume but are made of different materials, and are asked to observe what happens when the volume is the same.
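The science behind this item can be made concrete with a quick calculation, since density is simply mass divided by volume. Here is a minimal sketch; the mass and volume figures are illustrative round numbers, not measurements from the original lesson:

```python
# Density is an intensive property: mass / volume comes out the same
# for any amount of aluminum, whatever the object's shape or size.
# The figures below are illustrative round numbers, not measured data.

def density(mass_g: float, volume_cm3: float) -> float:
    """Return density in g/cm^3."""
    return mass_g / volume_cm3

# A large solid aluminum block and the thin aluminum in a soda can:
block_density = density(mass_g=270.0, volume_cm3=100.0)
can_density = density(mass_g=13.5, volume_cm3=5.0)

print(block_density)  # 2.7 g/cm^3
print(can_density)    # 2.7 g/cm^3 -- Samantha is correct
```

The same reasoning addresses the volume misconception: doubling both mass and volume leaves the ratio, and therefore the density, unchanged.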
Assessment construction and use doesn’t have to be complex and confusing. Remember that the purpose of these assessments is to guide instruction, and who understands instruction better than the collaborative teams that work with students every day in their classrooms?
Bangert-Drowns, R. L., Kulik, J. A., & Kulik, C.-L. C. (1991). Effects of frequent classroom testing. Journal of Educational Research, 85(2), 89-99.