Kim Bailey is former director of professional development and instructional support for the Capistrano Unified School District in California. She also served as an adjunct faculty member at Chapman University in California. Follow @bailey4learning on Twitter.

Do Your Assessments Pass the ACID Test?

It’s well established in the literature on professional learning communities that team-developed common assessments can serve as powerful tools for monitoring students’ proficiency in the essential standards (DuFour, DuFour, Eaker, Many, & Mattos, 2016). Common assessments, particularly those that are formative in nature, provide actionable data to teams focused on learning. Using these assessments, teams take action by examining which of their students attained proficiency on the skills and concepts they deemed most essential, and by providing additional time and support to those who have not yet reached it. They take action when the data reveal which instructional strategies appear to be most effective across the team. And they take action by using the information to give students frequent and specific feedback, a practice found to be highly effective in strengthening learning and developing a growth mindset (Dweck, 2008; Hattie, 2013).

Few educators would dispute the benefits of common assessments for monitoring student learning and informing instructional practice. Yet many teams are reluctant to design these powerful tools for fear that they may not be of high quality, or even “appropriate,” in light of the more rigorous standards adopted by states and reflected in high-stakes assessments.

The truth is that teachers are best qualified to design assessments that monitor what they are teaching in their classrooms, and they therefore must be engaged in the design process. The reality, however, is that the targets for learning have changed, and teams do need to reconsider the design of their formative and summative assessments. So how can teams build confidence that the assessments they design and use will “hit the mark”? Here’s a simple tool, called the “ACID” test, that teams can use. It acts as a protocol that walks teams through a quick evaluation of their assessment.

Each letter in the word ACID relates to an attribute of quality assessments. The tool includes a guiding question for each attribute, along with suggested actions teams can take to better meet it. Using the ACID test can empower your team to design high-quality, appropriate assessments that lead to actions that increase student learning of what’s most important.

A — Aligned
Guiding question: Is the assessment aligned to the context, content, and rigor/complexity of the standards?
What teams can do:
  • Look at the language of the standard and the learning targets (from the unwrapped standard) in comparison to the task. Are the thinking types on the assessment aligned to those targets?
  • Do the various items target the various levels of rigor or application (e.g., DOK) represented in the learning targets? For example, is the difficulty of the task or questions at the same level?
  • Examine any exemplars related to your targeted level of complexity; is the level of scaffolding or cuing appropriate?
  • Is the designated level of mastery or proficiency appropriate and aligned?

C — Clear
Guiding question: Are the items on the assessment clearly written?
What teams can do:
  • Read the prompt and any distractors provided. By completing this task as written, will students be demonstrating the skills and concepts you are targeting?
  • Will students understand what you want them to do?

I — Informative
Guiding question: Will this assessment be informative and meaningful about student learning?
What teams can do:
  • Will teams benefit from gathering data on these learning targets in this fashion?
  • Will specific information on learning targets steer teams toward meaningful interventions and support?
  • Will this assessment be an opportunity to provide student feedback?

D — Designed for the demands
Guiding question: Is the assessment designed to reflect and support the demands of the state standards and assessments?
What teams can do:
  • Will the items ask students to show what they know in ways similar to high-stakes assessments?
  • Are students asked to provide reasoning for their answers?
  • Are they looking for evidence?
  • Are they digging into information from a variety of texts and sources?


Bailey, K., & Jakicic, C. (2016). Simplifying Common Assessment: Practical Strategies for Professional Learning Communities at Work (in press). Bloomington, IN: Solution Tree Press.

DuFour, R., DuFour, R., Eaker, R., Many, T., & Mattos, M. (2016). Learning by Doing: A Handbook for Professional Learning Communities at Work (3rd ed.). Bloomington, IN: Solution Tree Press.

Dweck, C. S. (2008). Mindset: The New Psychology of Success. New York, NY: Ballantine Books.

Hattie, J. A. C. (2013). Visible Learning: A Synthesis of Over 800 Meta-Analyses Relating to Achievement. Taylor & Francis.


  1. Michele Knutsen

    Thank you, Kim, for the nice article. I agree that teachers know what’s best for their students, since they work with them daily. However, assessments take so much time to create that it may take away from teachers’ time to perfect their upcoming lessons. I have often sat up until 1 or 2 a.m. creating them. So I’m all for online assessments, but as you stated, they must stand up to your ACID test and at the same time be flexible enough to embrace a change or a tweak to meet the needs of the students. Thank you!

    • Kim Bailey

      Hi Michele! So true: we need to use our professional filters when selecting pre-made assessments so that we know they are going to give us good information. The good news, too, is that formative assessments needn’t be lengthy. One or two well-designed constructed-response items can tell us a lot about what our students know or can do. If using multiple choice or other forms of selected response, we only need three or four items per learning target to reliably assess our students’ skill or understanding of a concept. Sometimes less is more! Have a great year, Michele!

