Jim Smith is data and assessment coordinator at Twin Cities International Elementary School in Minnesota. Previously, he served as a middle school history teacher for 15 years.

Diving Into Common Assessment—Does it Work?

Assessment is a messy process. Developing, using, and responding to assessment that supports learning requires reflection, trusting relationships and, at times, re-learning and re-assessing. When assessment is seen simply as a test, resulting in points scored and grades assigned, the fundamental learning opportunities of effective assessment practice are lost; and what a tragedy this is.

When a balanced assessment system rests on a foundation of collaboratively built common assessments used formatively, the gains in student achievement are impressive. In fact, this foundation is essential in order to reliably collect evidence of learning, communicate essential feedback to the students and prompt actions enhancing further learning. Without clear evidence, individual learning and school-wide improvement efforts flounder.

The image of floundering calls us back to the metaphor of “diving” into common assessment. Are the rewards worth the risks? The answer to this question will come in two parts.

    1. An examination of what the researchers and education leaders say about common formative assessment practice
    2. A first-hand account of common assessment in action

Richard DuFour, author of In Praise of American Educators (2015), makes a strong case for the use of team-developed common formative assessment tied to school improvement and the PLC process. The evidence is so overwhelming that he simply states the practice is “well established in the research” (p. 171). DuFour cites the work of Ron Gallimore and colleagues (2009), Michael Fullan (2011), and others. The prevailing attitude is best summarized by Fullan (2011), who reports that in every case of significant school improvement, “there are common assessment frameworks linked to individualized instructional practice . . . progress and problems were also transparent . . . with corresponding discussions of how to improve results.” The results overwhelmingly support the application of common formative assessment practice for all students. Successful schools serving high percentages of students struggling with poverty typically embrace this practice (Chenoweth, 2009).

Robert Marzano (2006) examined the reliability of collaborative formative use of assessment in the typical standards-based 1–4 scoring system. He found that when one teacher scored assessments, the reliability came in at an impressive .719. With additional team members scoring assessments, the reliability increased even further; when four teachers completed the scoring, it rose to an astounding .901. Marzano’s eye-popping conclusion: “that a school or district could use teacher designed assessments to obtain scores for students that rival standardized and state tests in their accuracy” (p. 118).
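Marzano's figures are, incidentally, close to what classical test theory predicts when independent raters' scores are combined. As an illustration only (this calculation is not from the book), the Spearman-Brown prophecy formula estimates the combined reliability of k raters from a single rater's reliability:

```python
def spearman_brown(single_rater_reliability: float, k: int) -> float:
    """Estimate the reliability of k combined parallel raters
    from one rater's reliability (Spearman-Brown prophecy formula)."""
    r = single_rater_reliability
    return k * r / (1 + (k - 1) * r)

# Starting from the single-teacher reliability of .719:
print(round(spearman_brown(0.719, 1), 3))  # 0.719
print(round(spearman_brown(0.719, 4), 3))  # 0.911, close to the reported .901
```

The formula's prediction for four scorers (.911) lands near Marzano's observed .901, which fits the intuition that pooling more trained scorers steadily washes out individual scoring error.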

A first-hand account:
I have been fortunate enough to experience these results directly while working in a school featuring collaborative, standards-based common assessments. Typically, there was pre-testing at the start of instruction, a variety of progress monitoring along the way, and scaffolded common assessments near the end. The final assessment was summative for those who met the standard (levels 3 and 4) but formative for students below standard (levels 1 and 2). The scaffolded nature of the assessment pointed to the next steps in learning for all students and made visible the gaps of students scoring below the proficiency bar. Additionally, these assessments were the foundation for discussions centered on the effectiveness of instruction.

We knew the assessment practices were good for student learning, but how would their progress reflect on the state accountability tests? We reasoned that if our classroom assessments were aligned with the standards, and the state tests were also aligned, there should be a correlation between the two. We were also keenly aware that many other factors could be at play.

What if:

  • Our interpretation of the state standards was incorrect
  • Instruction was not aligned with the standards
  • Cognitive levels of the learning targets were not sufficient to meet the standards
  • Other variables we could not control became issues (lack of sleep or nutrition, for example)

Student records for all the classroom assessment results were collected for the school year. Students were scored (1–4) by learning target on the scaffolded assessments. What did we find? First, a high correlation between our classroom assessments and the state tests. This translated to a predictive average cut score of 2.8 on our classroom math tests and 2.65 on the reading tests; students scoring at or above these levels were very likely to show proficiency on the state tests. There are numerous standardized tests used to predict student proficiency on the state test, and our classroom assessments were at least as good a predictor of state outcomes, if not better. Additionally, the classroom tests were given at the end of initial instruction, and results were quickly communicated along with next steps for those not yet successful. This means struggling students had additional learning opportunities after the final test that might have carried them to proficiency. While this was not recorded in the data, it suggests why cut scores below level 3 were observed.
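The core idea behind a predictive cut score can be sketched in code. Assuming paired records of classroom scores (on the 1–4 scale) and state-test proficiency outcomes, one simple approach (a minimal sketch, not our actual analysis, with invented data) is to find the lowest classroom score at which students reached state proficiency at a target rate:

```python
def predictive_cut_score(records, target_rate=0.9):
    """Return the lowest classroom score such that students scoring at or
    above it reached state-test proficiency at the target rate.
    `records` is a list of (classroom_score, proficient) pairs, where
    proficient is 1 or 0; scores use the 1-4 scale."""
    scores = sorted({s for s, _ in records})
    for cut in scores:
        at_or_above = [p for s, p in records if s >= cut]
        if at_or_above and sum(at_or_above) / len(at_or_above) >= target_rate:
            return cut
    return None

# Invented example data: (classroom score, proficient on state test?)
sample = [(1.5, 0), (2.0, 0), (2.5, 0), (2.8, 1), (3.0, 1),
          (3.2, 1), (3.5, 1), (4.0, 1), (2.8, 1), (2.5, 1)]
print(predictive_cut_score(sample))  # 2.8
```

With real year-long records in place of the invented sample, this kind of threshold search is one way such math and reading cut scores could be derived.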

The research is crystal clear: formative use of common assessment is one of the most potent learning strategies for the classroom. My personal experience with this practice has made me a strong advocate as well. I hope my gentle nudge gives you the confidence to take the dive into common assessment. The result will be life-changing for students and will give teachers a formidable, research-based tool to foster hope and deepen learning for all students.

Chenoweth, K. (2009). It can be done, it’s being done, and here’s how. Phi Delta Kappan, 91(1), 38-43

DuFour, R. (2015). In Praise of American Educators. Bloomington, IN: Solution Tree Press

Fullan, M. (2011). The Moral Imperative Realized. Thousand Oaks, CA: Corwin Press

Gallimore, R., Ermeling, B.A., Saunders, W. M., & Goldenberg, C. (2009). Moving the learning of teaching closer to practice: Teacher education implications of school-based inquiry teams. Elementary School Journal, 109(5), 537-553

Marzano, R. (2006). Classroom Assessment & Grading That Work. Alexandria, VA: ASCD


  1. Ethan Dawes

    What is your opinion on whether there should be marks on assessments or not?

    • Jim Smith

      I am all for responding and interacting with the learner when it comes to assessment. The response depends on many factors: the type of assessment (formative or summative), the purpose (advancing learning versus accountability and grading), and the student (what does this student need, and how can they respond to my feedback?). The true learning power of assessment is found in the responses of both the teacher and the student to the results, and in how those responses move the student to the next step in learning.

