I love using Twitter to communicate thinking in a markedly different way than blogs, articles, and books allow. By limiting the number of characters, Twitter forces us to be succinct in our thinking. I’ve discovered that followers often reply or ask a question about a topic I’ve thrown out for discussion because a tweet only lets me share a small part of my thinking.
Recently, a follower asked me an interesting question related to something I had tweeted: “How many questions should you have on a summative assessment?” Now, I often discuss the number of questions necessary for reliable data depending on the type of item (multiple choice, constructed response, etc.). And my co-author, Kim Bailey, and I regularly share that formative assessments should be written around a small number of learning targets. However, the only time I can remember thinking about the requisite number of questions for a summative assessment was many years ago when, as a teacher, I wanted to make sure the total points on the test added up to 100. That meant 25 questions, 33 1/3 questions, or 50 questions (I wish I were kidding!). Attempting to put my thoughts into a response forced me to think more deeply about designing summative assessments.
The first thing I thought about was the fact that, for many teachers, summative assessment equals a test. They don’t necessarily see that a project or performance might be a preferable way for students to demonstrate mastery in some of their units of study. Music, art, and P.E. teachers are accustomed to using products or performances to determine mastery. I wonder, however, whether science teachers who are using the Next Generation Science Standards (NGSS) and asking students to develop and use a model to explain a concept or phenomenon would move as easily from traditional test questions to letting students create a product instead.
I encourage the teams I work with to talk about the summative assessment while they are unwrapping their essential standards. If they don’t know what the final learning will look like, and if they don’t communicate that expectation to their students, it will be difficult for students to get there. There will certainly be times when a paper-and-pencil assessment is the most effective way to determine whether students have reached proficiency, but teams should always keep in mind that a product might represent that learning better.
I suspect that teachers sometimes feel limited to questions that look like the ones students will see on the high-stakes, end-of-year test. While I agree that teams should examine released items from these tests to see the language used, the expectations for rigor, and the types of items students will be expected to answer, I believe that teachers shouldn’t limit their assessments to these item types. Certainly, constructed-response questions that require students to expose their thinking give teachers a far more nuanced picture of student learning than multiple-choice questions do.
In addition, many current standards are best assessed through an argument, an essay, or another piece of writing. In these cases, the team should use a rubric that aligns with their state test and with their writing expectations.
So, I guess the answer to this follower’s question is that there isn’t one right number of questions to use on a summative assessment. I would also add that it’s important to consider ways to assess student learning beyond the typical multiple-choice and constructed-response questions.