It’s been well established in the literature on professional learning communities that team-developed common assessments can serve as powerful tools to monitor students’ proficiency in the essential standards (DuFour, DuFour, Eaker, Many, & Mattos, 2016). Common assessments, particularly those that are formative in nature, provide actionable data to teams focused on learning. Using these assessments, teams take action by examining which of their students attained proficiency on the skills and concepts the team deemed most essential, and by providing additional time and support to those who have not yet done so. They take action when the data reveal the instructional strategies that appear to be most effective across the team. They take action by using the information to provide frequent and specific feedback to students, a practice found highly effective in strengthening learning and developing a growth mindset (Hattie, 2013; Dweck, 2008).
Few educators would dispute the benefits of common assessments for monitoring student learning and informing instructional practice. Yet many teams are reluctant to design these powerful tools for fear that they may not be of high quality, or even “appropriate,” in light of the more rigorous standards adopted by states and reflected in high-stakes assessments.
The truth is that teachers are best qualified to design assessments that monitor what they are teaching in their classrooms, and therefore they must be engaged in the design process. However, the reality is that the targets for learning have changed, and they do require that teams reconsider the design of their formative and summative assessments. So how can teams build confidence that the assessments they design and use will “hit the mark”? Here’s a simple tool, the “ACID” test, that teams can use. It acts as a protocol that walks teams through a quick evaluation of their assessment.
Each letter in the word ACID relates to an attribute of quality assessments. The tool includes guiding questions and suggestions for actions that teams can take to better meet that attribute. Using the ACID test can empower your team to design high quality and appropriate assessments that lead to the actions that will increase student learning of what’s most important.
| | Guiding Question | What teams can do |
| --- | --- | --- |
| A: Alignment | Is the assessment aligned to the context, content, and rigor/complexity of the standards? | |
| C: Clarity | Are the items on the assessment clearly written? | |
| I: Informative | Will this assessment be informative and meaningful about student learning? | |
| D: Design | Is the assessment designed to reflect and support the demands of the state standards and assessments? | |
References:
Bailey, K., & Jakicic, C. (2016). Simplifying Common Assessment: Practical Strategies for Professional Learning Communities at Work (in press). Bloomington, IN: Solution Tree Press.
DuFour, R., DuFour, R., Eaker, R., Many, T., & Mattos, M. (2016). Learning by Doing: A Handbook for Professional Learning Communities at Work (3rd ed.). Bloomington, IN: Solution Tree Press.
Dweck, C. S. (2008). Mindset: The New Psychology of Success. New York, NY: Ballantine Books.
Hattie, J. A. C. (2013). Visible Learning: A Synthesis of Over 800 Meta-Analyses Relating to Achievement. Taylor & Francis.
Thank you, Kim, for the nice article. I agree that teachers know what’s best for their students since they work with them daily. However, assessments take so much time to create that designing them can take away from teachers’ time to perfect their upcoming lessons. I have often sat up until 1 or 2 a.m. creating them. So I’m all for online assessments, but, as you stated, they must stand up to your ACID test and at the same time be flexible enough to embrace a change or a tweak to meet the needs of the students. Thank you!
Hi Michele! So true: we need to use our professional filters when selecting pre-made assessments so that we know they are going to give us good information. The good news, too, is that formative assessments needn’t be lengthy. One or two well-designed constructed-response items can tell us a lot about what our students know or can do. If we use multiple choice or other forms of selected response, then we only need three or four items per learning target to reliably assess our students’ skill or understanding of a concept. Sometimes less is more! Have a great year, Michele!