Effectively using the data that we gain from our assessments is always important, and perhaps never more so than right now. There is a reason that accurate interpretation is a tenet in the Solution Tree Assessment Center model, and it is certainly worth taking the time to explore. There are a few definitions of the word “interpret”; some focus on more artistic endeavors, while many others focus on the idea of explaining something. As educators, we must interpret things each and every day—from whether we will be able to accomplish everything in our lesson plan to whether our students are really understanding what we want them to know. We should strive to draw informed inferences in our work, recognizing that doing this requires professional knowledge, skill, and ongoing effort.
Educators across the country are sharing how this school year was far more difficult than the previous two years during the pandemic. There have been many pivots (I know, I know . . . that is like a four-letter word), many shifts, and many concerns raised as students return to school and socialize with peers they have not seen for a long time. This was a year like no other. As it comes to an end, educators have an opportunity to take a breath and reflect on what worked well and areas in which to seek growth. There is also an opportunity to think about going back to the basics with assessment practices. The pace of the year had many teachers juggling way too many responsibilities; summer brings time to reflect and opportunities for collaboration. This time allows teams to dig into the skills and knowledge students struggled with the most and design formative and summative assessment practices that align with the standards.
“Whether we plan it or not, culture will happen. Why not create the culture we want?”
—Carmine Gallo, The Storyteller’s Secret
Have you ever started a new book and just . . . lost interest? Have you ever started a book and found yourself so enthralled that you could hardly put it down? Each school year, educators have the opportunity to write a new story—and the beginning of that story is critical. No matter the setting (face-to-face, virtual, blended), many educators begin with a similar focus: creating a culture of learning. Time dedicated to this work varies. Some educators feel the pressure of beginning content and spend minimal time focused on culture. Some believe the work of culture never truly ends. Regardless of where you fall on this spectrum, do you know the impact your assessment practices have on the culture you are trying to create?
“Feedback is honesty. Don’t just tell me ‘good job’ when I didn’t.” —Middle years student
My colleagues and I work with systems across North America that are undergoing assessment reform. Educators and leaders alike are asking themselves how to shift their assessment practices, when to do it, and what it will entail. The questions generated in a single coaching session illuminate the complexity of this shift. Teachers are wondering how assessment should be designed, which symbol (if any) to attach to products and performances, and how to respond to assessment evidence in ways that will advance learning. This work is both significant and challenging, and no one is taking it lightly. However, in the quest to “get it right,” adults often forget a key source of wisdom and insight available to us every single day. Perhaps we see this source as a recipient of our refined assessment system, rather than as a collaborative partner in its design. Whatever the reason, maybe it is time we turned to this source—our students—and consulted them on the decisions we are making.
Each day, we make choices. Some decisions yield good outcomes, and some do not. Whatever the result, we know these decisions are the ones we made; we considered the situation and chose. However, this sense of agency in our choices may be illusory. Researchers Banaji and Greenwald found that “our rationality is often no match for our automatic preferences” (2016, p. 42). Other studies similarly find that our decisions arise less from conscious processing (Maitland & Sammartino, 2015). Instead, much of what factors into our choices results from quicker, more automatic processes that are less available to our conscious thinking (Blumenthal-Barby, 2016). These automatic processes are commonly known as heuristics.
At their core, heuristics are “judgments of likelihood in the absence of deliberative intent” (Fischhoff et al., 2002, p. 5). They are a kind of natural assessment that can influence judgments about a person, place, or thing without being used deliberately or strategically (Blumenthal-Barby, 2016). For example, someone passing a broken-down building on the walk to work, without the time to thoroughly investigate the reason for its condition, may rely on heuristics and conclude that a fire destroyed the building and that it is unsafe. In doing so, they make a generalized judgment about the building without knowing its history, a judgment that, as you are probably thinking, can sometimes be inaccurate.
Despite decades of research on sound assessment practices, misunderstandings and myths still abound. In particular, the summative purpose of assessment continues to be an area where opinions, philosophies, and outright falsehoods can take on a life of their own and hijack an otherwise thoughtful discourse about the most effective and efficient processes.
Assessment is merely the means of gathering information about student learning (Black, 2013). We either use that evidence formatively, through the prioritization of feedback and the identification of next steps in learning, or we use it summatively, through the prioritization of verifying the degree to which students have met the intended learning goals. Remember, it is the use of assessment evidence that distinguishes the formative from the summative.
The level of hyperbole that surrounds summative assessment, especially on social media, must stop. It’s not helpful, it’s often performative, and it is sometimes even cynically motivated simply to attract followers, likes, and retweets. Outlined below are my responses to six of the most common myths about summative assessment. These aren’t the only myths, of course, but they are the six that seem to persist and the six that we have to undercut if we are to have authentic, substantive, and meaningful conversations about summative assessment.
Myth 1: “Summative assessment has no place in our 21st century education system”
While the format and substance of assessments can evolve, the need to summarize the degree to which students have met the learning goals (independent of what those goals are) and report to others (e.g., parents) will always be a necessary part of any education system in any century. Whether it’s content, skills, or 21st century competencies, the requirement to report will be ever-present.
However, it’s not just about being required; we should welcome the opportunity to report on student successes because it’s important that parents, and even our larger community or the general public, understand the impact we’re having on our students. If we started looking at the reporting process as a collective opportunity to demonstrate how effective we’ve been at fulfilling our mission, then a different mindset altogether about summative assessment might emerge. It’s easy to become both insular and hyperbolic about summative assessment, but using assessment evidence for the summative purpose is part of a balanced assessment system. Cynical caricatures of summative assessment detract from meaningful dialogue.
Myth 2: “Summative assessments are really just formative assessments we choose to count toward grade determination.”
Summative assessment often involves the repacking of standards for the purpose of reaching the full cognitive complexity of the learning. Summative assessment is not just the sum of the carefully selected parts; it’s the whole in its totality where the underpinnings are contextualized.
A collection of ingredients is not a meal; it becomes a meal when all of those ingredients are thoughtfully combined. Isolating the ingredients in preparation is necessary; we need to know which ingredients are required and in what quantities. But it’s not a meal until the ingredients are purposefully combined to make a whole.
Unpacking standards to identify granular underpinnings is necessary to create a learning progression toward success. We unpack standards for teaching (formative assessment) but we repack standards for grading (summative assessment). Isolated skills are not the same thing as a synthesized demonstration of learning. Reaching the full cognitive complexity of the standards often involves the combination of skills in a more authentic application, so again, pull apart for instruction, but pull back together for grading.
Myth 3: “Summative assessment is a culminating test or project at the end of the learning.”
While it can be, summative assessment is really a moment in time when a teacher examines the preponderance of evidence to determine the degree to which students have met the learning goals or standard; it need not be limited to an epic, high-stakes event at the end. It can be a culminating test or project, as those would provide more recent evidence, but since we know some students need longer to learn, there always needs to be a pathway to recovery so that these culminating events don’t become disproportionately pressure-packed, one-shot deals.
Thinking of assessment as a verb often helps. We have, understandably, come to see assessments as nouns – and they often are – but it is crucial that teachers expand their understanding of assessment to know that all of the evidence examined along the way also matters; evidence is evidence. Examining all of the evidence to determine student proficiency along a few gradations of quality (i.e., a rubric) is not only a valid process, but one that should be embraced.
Myth 4: “Give students a grade and the learning stops.”
This causal relationship has never been established in the research. While it is true that grades and scores can interfere with a student’s willingness to keep learning, that reaction is not automatic. Whether the feedback was directed at the learning or at the learner matters. Avraham Kluger and Angelo DeNisi (1996) emphasized students’ responses to feedback as the litmus test for determining whether feedback was effective.
There are no perfect feedback strategies, but there are more favorable responses. If we provide a formative score alongside feedback, and the students reengage with the learning and attempt to increase their proficiency, then, as the expression goes, no harm, no foul. If they disengage from the learning, then clearly there is an issue to be addressed. But again, despite the many forceful assertions made on social media and in other forums, that relationship is not causal.
Again, context and nuance matter, especially when it comes to the quality of feedback. Remember, when it comes to feedback, substance matters more than form. Tom Guskey (2019) submits that had the Ruth Butler (1988) study – the one so widely cited to support the assertion that grades stop learning – examined the impact of grades that were criterion-referenced and learning-focused rather than ego-based feedback directed at the learner (as in, “you need to work a little harder”), the results may have been quite different.
The impact in those studies fell disproportionately on lower-achieving students, so common sense would dictate that a student who received a low score and was told something to the effect of “You need to work harder” or “This is a poor effort” would likely want to stop learning. But a low score alongside a “now let’s work on” or “here’s what’s next” comment could produce a different response.
Myth 5: “Grades are arbitrary, meaningless, and subjective.”
Grades will be as meaningful or as meaningless as the adults make them; their existence is not the issue. Grades will be meaningful when they are representative of a gradation of quality derived from clear criteria articulated in advance. What some call subjective is really professional judgment. Judging quality against the articulated learning goals and criteria is our expertise at work.
Pure objectivity is the real myth. Teachers decide what to assess, what not to assess, the question stems or prompts, the number of questions, the format, the length, etc. We use our expertise to decide what sampling of learning provides the clearest picture. It is an erroneous goal to think one can eliminate all teacher choice or judgment from the assessment process. During one of our recent #ATAssessment chats on Twitter, Ken O’Connor reminded participants that the late, great Grant Wiggins often said: (1) we shouldn’t use “subjective” pejoratively, and (2) the issue isn’t subjective versus objective; the issue is whether our professional judgments are credible and defensible.
Myth 6: “Students should determine their own grades; they know better than us.”
Students should definitely be brought inside the process of grade determination, even asked to participate in it and to understand how evidence is synthesized. But the teacher is the final arbiter of student learning; that is our expertise at work. This claim might sound like student empowerment, but it marginalizes teacher expertise. Are we really saying a student’s first experience outweighs a teacher’s accumulated experience? Again, bring them inside the process, give them the full experience, but don’t diminish your expertise while doing so.
This does not have to be a zero-sum game; more student involvement need not lead to less teacher involvement. This is about expanding the process to include students along every step of the way; however, our training, expertise, and experience matter in terms of accurately determining student proficiency. Students and parents are not the only users of assessment evidence. Many important decisions both in and out of school depend on the accuracy of what is reported about student learning, which means teachers must remain disproportionately involved in the summative process.
Combating these myths is important because there continues to be an oversimplified narrative that vilifies summative assessment as the source of everything that ails our assessment practices. That mindset, assertion, or narrative is not credible. Not to mention, it’s naïve and reveals a lack of understanding of how a balanced assessment system operates within a classroom.
The overall point here is that we need grounded, honest, and reasoned conversations about summative assessment that are anchored in the research, not some performative label or hollow assertion that we defend at all costs through clever turns of phrase and quibbles over semantics.
Black, P. (2013). Formative and summative aspects of assessment: Theoretical and research foundations in the context of pedagogy. In J. H. McMillan (Ed.), SAGE handbook of research on classroom assessment (pp. 167–178). Thousand Oaks, CA: SAGE.
Butler, R. (1988). Enhancing and undermining intrinsic motivation: The effects of task-involving and ego-involving evaluation on interest and performance. British Journal of Educational Psychology, 58(1), 1–14.
Guskey, T. R. (2019). Grades versus feedback: What does the research really tell us? [Blog post]. Thomas R. Guskey & Associates.
Kluger, A., & DeNisi, A. (1996). The effects of feedback interventions on performance: A historical review, a meta-analysis, and a preliminary feedback intervention theory. Psychological Bulletin, 119(2), 254–284.
This is a guest post written by Nina Pak Lui and Colin Madland
Assessment is at the heart of formal learning environments. Assessment practices in K-12 contexts have been the subject of significant research, especially since the late 20th century. However, the assessment practices and beliefs of higher education instructors have not been researched to nearly the same degree. This likely stems from the relative lack of preparation in pedagogy and assessment that university instructors outside of Schools of Education receive (Massey et al., 2020), and it has led to the current situation in which higher education has much to learn from K-12. In this post, we outline the problem that exists between modern assessment and pedagogical practices in higher education, and provide two ways assessment practices can shift in higher education.
It is helpful to think about assessment in terms of a model. In Knowing What Students Know, Pellegrino et al. (2001) provide the accessible model of the “assessment triangle,” a modified version of which is described below. The assessment triangle comprises three interdependent elements: (1) a cognitive model of the domain, which can be understood for our purposes as learning standards; (2) an instrument or process used to gather evidence of proficiency; and (3) an interpretation of the evidence of learning. High-quality assessment practice requires that all three elements align with one another. For example, if the learning standard specifies that learners will be able to critically analyze historical texts, and the instrument used to gather evidence asks learners to identify a correct answer, there is misalignment between those two elements and the interpretation will lack validity.
In the mid- to late-20th century, assessment practices and pedagogy in higher education were in quite close alignment. The prevailing theory of learning was behaviourism, popularized by B. F. Skinner, who argued that learning is maximized when learners receive immediate, positive feedback for supplying the correct answer to a question. This led to pedagogical practices that prioritized breaking concepts down into smaller and smaller ideas and having learners memorize the correct answers. Accordingly, assessment practices prioritized instruments filled with selected-response items requiring examinees to recognize correct answers.
Over time, however, our understanding of the cognitive processes involved in learning has evolved. We now recognize that learning is a complex social process and that knowledge is constructed through social interactions. As such, pedagogy in K-12 and, increasingly, in higher education has shifted away from rote memorization and toward the 21st century goals of collaboration, critical thinking, analysis, creativity, and lifelong learning. Unfortunately, however, Shepard (2000) and Lipnevich et al. (2020) point out that assessment practices in higher education remain stuck in the behaviourist views of the mid-20th century, with a heavy emphasis on high-stakes selected-response tests.
For me (Nina), the stages of Chappuis and Stiggins’ Assessment Development Model (ADM) and key principles of standards-based learning (SBL) significantly shifted my assessment practices to reflect modern assessment theory and the aims of 21st century learning. To illustrate, the “Then and Now” reflection below shows two ways assessment practices can shift in higher education:
1. From using predominantly selected-response methods, toward implementing performance-based and personal communication methods that are better aligned with and reflective of course learning standards.
2. From students as passive participants in the assessment process, toward students as active users of assessment as a learning opportunity.
Then and Now
I used to stick to the common types of assessment instruments used in higher education. Although learning can be experienced and demonstrated in multiple ways, I was hesitant to take pedagogical risks in my early years of teaching in higher education. Looking back, my lack of assessment literacy and my preconceived assumptions about what assessment practices should look and sound like in higher education were barriers to effective teaching and student learning. Although course learning standards were present in syllabi, I used to plan activities and assessment tasks before identifying priority standards and clarifying proficiency. Wiggins and McTighe (2011) call this the “twin sins” of traditional planning.
Then I learned that clearly knowing what is being assessed and choosing the optimal method depend foremost on the kinds of learning being assessed (Chappuis & Stiggins, 2019). In the planning and development stages of ADM and SBL, priority course learning standards are identified, and the underpinning learning targets are clarified with and for students (Chappuis & Stiggins, 2019; Schimmer et al., 2018). Unpacking learning standards and clarifying proficiency allows instructors to thoughtfully consider how they will summatively and formatively assess student learning (Schimmer et al., 2018; White, 2017). This process helps me select appropriate assessment methods and design assessment instruments aligned with proficiency in the learning standards. Before any assessment instrument is used, I take into consideration potential bias and barriers, and critique the overall assessment for quality (Chappuis & Stiggins, 2019). As a result of intentional planning and sound development, I am able to gather information – evidence of learning – and make formative and summative decisions based on interpretations of student learning with greater validity. Chappuis and Stiggins (2019) suggest that without accuracy, there is no way to know if there has been a gain in knowledge, ability, or understanding.
Now I realize that course learning standards are cognitively complex. Students critically analyze, synthesize, make judgments, gain empathy and self-knowledge, transfer, co-create, and apply course learning in meaningful and transformative ways. These aims reflect the 21st century learning goals of higher education. Wiggins and McTighe (2011) point out that knowing facts in order to recall them is superficial learning that can be quickly forgotten, whereas the ability to connect facts and create meaning is deeper learning, or enduring understanding. In my current practice, assessment methods and instruments are designed for students to demonstrate higher-order thinking and meaning-making (Pak Lui & Skelding, 2021). Students continue to demonstrate their reasoning and creative abilities through written expression. They also engage in free inquiry, which gives them the opportunity to choose their own questions related to the course that are of deep personal interest to them. Students communicate their learning through performance-based and personal communication assessment methods. In a free inquiry, rather than being told what the authentic piece should be, students choose their creative mediums and share their learning publicly (MacKenzie, 2016).
Here are a couple of examples from my practice, as recounted in a recent book chapter (Pak Lui & Skelding, 2021):
A former student investigated how to destigmatize mental health in education and had the bravery to include their own mental health journey in their authentic piece. They shared a raw and honest four-stanza poem and accompanied it with related, thought-provoking images in the form of a photo essay. There was not a dry eye in the classroom; the community of learners was drawn into their peer’s learning at an intellectual and emotional level. Another example is a student who inquired into the standardization of assessment in education. They, too, combined their research findings with an unpacking of their own educational experiences of high-stakes assessment by writing and performing a rap. The lyrics, rhythm, and physical expression of the rap conveyed their key inferences and the implication: an urgent need for assessment reform in education.
What I noticed as a result of using assessment methods that are a good match for cognitively complex learning standards (such as written response, performance assessment, or personal communication) was an increased ability, as an instructor, to interpret evidence of learning. I have greater confidence that the inferences I make accurately reflect achievement of the intended learning. Additionally, increasing the value and use of formative assessment practices shifted students from being passive participants in the learning process to being active users of assessment results as a learning opportunity. Students regularly receive feedback, and they are given time to act on it. Moreover, their involvement as self-assessors of their own learning leads to greater awareness of strengths and areas for improvement and growth before evaluation. As my own assessment practices shift and evolve, I notice teaching and learning becoming a genuine partnership. Students and I are able to develop relational trust, and we are more confident in taking pedagogical risks together (Pak Lui & Skelding, 2021). It is my hope that students see that my assessment practices have clear purpose, align with course learning standards, and provide the support necessary to move their learning forward. According to White (2019), “without continuous formative assessment built into the classroom, creativity would suffer, risk-taking would lack purpose, and products students create would be meaningless” (p. 33).
COVID-19 provides an opportunity for many university instructors to re-examine both pedagogy and assessment practices in higher education. As we look forward to establishing a new normal, research-based shifts in assessment practices can be one way for 21st century learners to experience a high-quality education.
Nina Pak Lui is an Assistant Professor of Education at Trinity Western University in Langley, British Columbia. She studies and teaches curriculum design and assessment for learning. In 2020, she won the Provost Teaching and Innovation Award. You can find her on Twitter @npaklui.
Colin Madland is a PhD candidate in Educational Technology at the University of Victoria in British Columbia where he is studying approaches to assessment in higher education. You can find him at https://cmad.land and on Twitter @colinmadland.
Chappuis, J., & Stiggins, R. (2019). Classroom assessment for student learning: Doing it right – Using it well (3rd ed.). Pearson Education.
Lipnevich, A. A., Guskey, T. R., Murano, D. M., & Smith, J. K. (2020). What do grades mean? Variation in grading criteria in American college and university courses. Assessment in Education: Principles, Policy & Practice, 27(5), 480–500. https://doi.org/10/ghjw3k
Massey, K. D., DeLuca, C., & LaPointe-McEwan, D. (2020). Assessment literacy in college teaching: Empirical evidence on the role and effectiveness of a faculty training course. To Improve the Academy, 39(1). https://doi.org/10/gj5ngz
MacKenzie, T. (2016). Dive into inquiry: Amplify learning and empower student voice. EdTechTeam Press.
Wiggins, G., & McTighe, J. (2011). The understanding by design guide to creating high-quality units. Association for Supervision and Curriculum Development.
Pak Lui, N., & Skelding, J. (2021). An emergent course design framework for imaginative pedagogy and assessment in higher education. In J. Cummings & I. Fayed (Eds.), Teaching in the post COVID-19 era (in press). Springer Publishing.
Pellegrino, J. W., Chudowsky, N., & Glaser, R. (2001). Knowing what students know: The science and design of educational assessment. National Academies Press. https://doi.org/10.17226/10019
Schimmer, T., Hillman, G., & Stalets, M. (2018). Standards-based learning in action: Moving from theory to practice. Solution Tree Press.
Shepard, L. A. (2000). The role of assessment in a learning culture. Educational Researcher, 29(7), 4–14. https://doi.org/10/cw9jwc
White, K. (2017). Softening the edges: Assessment practices that honor K–12 teachers and learners. Solution Tree Press.
White, K. (2019). Unlocked: Assessment as the key to everyday creativity in the classroom. Solution Tree Press.
Let me start with the most obvious statement: the past year and a half has been incredibly hard. For everyone. The summer for me is usually a time for reflection, for finishing incomplete to-do lists, and for getting excited about the next year. I do not mind admitting that the last one was pretty hard for me this year. Watching (and re-watching) some episodes of Ted Lasso has helped a little. Reading some of my favorite authors has helped a lot. I found myself revisiting Essential Assessment: Six Tenets for Bringing Hope, Efficacy, and Achievement to the Classroom this summer and revelling again in the authors’ brilliant and elegant ways of describing assessment (and not just because they’re three of my favorite people!).
So I found myself reading the Accurate Interpretation chapter of Essential Assessment, partially to look for some nuggets to share with the educators in my own district. I came across a sentence that is rather perfect for right now: “Educators who believe all students can learn deliberately adopt a tone of influence and possibility as a means to promote learning, especially in the toughest situations” (p. 67). This sentence seems perfectly suited for the 2021-2022 school year and beyond.
Keeping the focus
As educators, we are always working to keep the focus on the things that we can control. There are so many things that can impact a student’s success, and many of them do not have anything to do with us. But the educators who truly believe in their hearts that all students can learn are deeply focused on the many, many things that we can control.
One incredibly powerful category of things we can control is how we use the information we gain from our assessments. Are we using the data we have to help us change the actions we can control, or to blame our students or situations that we cannot control? Do we talk about that data in ways that validate our own influence as educators and reveal the possibilities in how we can respond? Do we see data as a way to build our self-efficacy and our collective efficacy, or as just another challenge that cannot be met?
In data lies opportunity
This school year, we should all be looking for ways to talk about our data that communicate how we can leverage it to influence our actions in creating opportunities for our students. Our language should reflect both our belief that all students can learn and our commitment to doing the things that will make that happen.
One of the biggest lessons that I have learned through the years is that there is nothing wrong with starting small. Find an area in the data and work together to address it. We often beat ourselves up for not doing everything all at once and perfectly. Give yourself and your teams permission to take one thing at a time to build knowledge and confidence.
This same perfect and elegant phrase, a tone of influence and possibility, should be applied to how we talk about learning with our students. We have all been inundated with deficit messages about how students and their learning have been impacted by the pandemic. I am encouraged that many of those messages are now focusing on acceleration and not just remediation. We need to continue to make sure that our language emphasizes the strengths in our students as well as the opportunities we are planning in order to address any concerns. Our assessment data should help students see where they are in relation to the learning goal, and our actions should help students see that there is a way for them to reach that goal.
There is much that we can control, and one of the most significant things is our language and our reactions. If we move forward with a belief that we can use the information we gain about our students to create better possibilities for them, the impacts will go far beyond our own psyches. We can also lean on a quote from another of my idols, Ted Lasso: “Doing the right thing is never the wrong thing.” That one feels like it was custom-made for educators today as well.
Erkens, C., Schimmer, T., & Dimich Vagle, N. (2017). Essential assessment: Six tenets for bringing hope, efficacy, and achievement to the classroom. Solution Tree Press.
Educators have been inundated with news articles and media posts focused on the amount of “learning loss” students have experienced since the requirement to close on-campus learning in March of 2020. While some schools were able to fully return to onsite instruction for the ’20-’21 school year, others were required to remain fully virtual, and many offered hybrid approaches to learning. Even those who were able to return to a face-to-face environment experienced periods of entire-school or class quarantines, emergency returns to virtual learning, and staff shortages due to the COVID-19 pandemic. With those challenges came the necessity for educators to learn to teach their subject areas across many platforms while also taking on the task of investigating new technologies and addressing safety concerns for themselves, their families, and their students. Are there gaps in student learning? Absolutely. But certainly not for a lack of blood, sweat, and millions of tears by every educator in our field.
Responding to Trauma and Responding to Evidence – Both Needed, Both Possible
If you are a regular reader of the STAC blog posts offered by my colleagues and me, you won’t be surprised to read the next sentence. Assessment is one of the most stress-inducing activities educators put students through. Perhaps some of you might even have uncomfortable reactions when you recall some of your own test experiences. I should qualify this with the notion that I’m talking about assessment done poorly – the type of assessment I define as a number chase – rather than effective assessment, which I believe is an evidence chase. But first, let me connect the dots to the title of this post.
In our newly released book Trauma-Sensitive Instruction: Creating a Safe and Predictable Classroom Environment, John Eller and I share a definition of trauma that stopped us in our conversations because it was so powerful. The definition comes from the work of Kathleen Fitzgerald Rice and Betsy McAlister Groves (2005): “Trauma is an exceptional experience in which powerful and dangerous events overwhelm a person’s capacity to cope” (p. 3). The current stress we are all experiencing, to varying degrees, brought on by the health pandemic far outweighs the stress induced by ineffective assessment practice. However, the combination of the two – poor assessment practice and additional trauma from the pandemic – may negatively impact students to the point that their progress and academic growth might never recover in their remaining school years. It doesn’t have to be that way.
In my work with teachers, I’ll often hear that the constant stress and lack of stability set students up for difficulties in calming down enough to feel safe, learn, and give their best when it comes time to perform on assessments. Teachers, then, would be wise to invest in assessment design that does not depend on a “one-shot, achieve-or-fail” test. Instead, formative assessment – the practice before the performance – must be a part of the evidence gathering, not just for students experiencing trauma, but for all students.
The impact of the pandemic was not evenly distributed nor evenly felt. For some of our students (and colleagues), it reawakened past traumatic experiences, all but incapacitating any opportunity for progress. For some students, the extended trauma exposure resulted in what Jim Sporleder and Heather Forbes (2016) refer to as toxic stress. Toxic stress can lead to issues that impair students’ normal development and success in the classroom, including their ability to focus and respond appropriately to teacher requests. The potential for assessment to be inaccurate or incomplete is very high. In Trauma-Sensitive Instruction we offer many scenarios like this one:
“Laura, a seventh-grade student, lives in a home where her father drinks excessively and comes home drunk. When he gets home, he is both verbally and physically abusive to Laura’s mom and any of the children he sees. Laura normally knows that when he comes home, it’s a good idea to stay out of his way and try to be invisible. She usually withdraws from the situation and tries not to cause a lot of issues.
In her classes, Laura uses similar behavior. Even though she may not understand what she is learning, she is reluctant to ask questions or get clarification. When working in groups, Laura contributes little to the conversation and goes along with the ideas of the group. She is reluctant to make eye contact with people (adults and peers) and appears to be disconnected and isolated.”
Relationships are critical
How might the teacher respond to Laura’s actions while also committing to gathering good evidence to assist her on her educational journey? If Laura is disengaged and disconnected, her teacher may not be able to assess what she knows. Somehow, her teacher has to reduce her stress in order to get an accurate read on her progress.
Again, the focus on why we assess comes into play. One of the powerful outcomes of a fair and equitable assessment process is the development of a positive relationship between teacher and student. The more assessment is viewed as an opportunity to demonstrate learning and to progress from “not yet” to “proficient,” the greater the sense that teacher and student are on a journey together. In our research for Trauma-Sensitive Instruction, one of the keys that emerged to help buffer against adversity is having warm, positive relationships, which can prompt the release of anti-stress hormones. Whether assessment is a stress inducer or a stress buffer comes down, again, to how the evidence is used by both the teacher and the student. If teachers can help students see how assessment data can help them learn, it may cause less of a stressful reaction. If students think assessment is being used only to label or sort them, it will not be seen as positive or productive, and they certainly won’t engage.
We also know that the trauma that came with the health pandemic did not arrive on every doorstep equally. The Center for Infectious Disease Research and Policy (CIDRAP) suggests that “COVID-19 exposures were significantly different across race and household income strata, with Black, Latino, and low-income families reporting higher rates of COVID-19–related stressors, which they attributed to systemic racism and structural inequities…” The Trauma-Aware Schools project further suggests, “Symptoms resulting from trauma can directly impact a student’s ability to learn. In the classroom setting, this can lead to poor behavior, which can result in reduced instructional time and missed opportunities to learn.” It’s important, then, that educators avoid overemphasizing both the importance of tests and the consequences of failure. The messages educators use to communicate about tests matter, and efforts should be made to reduce students’ anxiety and increase their self-efficacy beliefs. The message should focus on the role of assessments as a measure of students’ knowledge and ability at this moment in time.
Know what to look for – and how to react
One of the key bodily reactions to trauma occurs as a result of the fight-or-flight response. Educators often see this reaction at test time in students who seem to “power down” immediately upon receiving their tests. They may quickly do as much as they can and turn in an incomplete exam, or they may just put their name at the top and stop there. When this level of trauma is occurring, the body may be releasing cortisol, which keeps the body on alert and primed to respond to the threat. It is important to note that while the body is primed to respond to the threat, the control center of the brain, the prefrontal cortex, is shut down. This is the seat of logic and reasoning, key skills needed for assessments.
In a paper focused on high-stakes testing, the authors (Jennifer A. Heissel, Emma K. Adam, Jennifer L. Doleac, David N. Figlio, and Jonathan Meer) found that “Students whose cortisol noticeably spiked or dipped tended to perform worse than expected on the state test, controlling for past grades and test scores.” So, if summative tests unfairly penalize students who are experiencing high levels of trauma, it might be reasonable to conclude those tests aren’t generating the evidence we need them to, and they might not be aligned with the formative evidence we already have. The authors go on to state, “A potential contributor to socioeconomic disparities in academic performance is the difference in the level of stress experienced by students outside of school.” This means, as educators, we have to be mindful that students will react differently to similar traumatic experiences and may need different kinds of support from us.
Let me summarize by going back to the title of this post. It’s important for educators to pair trauma-informed teaching with assessment processes that ensure we are not adding more trauma to our students’ lives. Strategies to consider include communicating the purpose for assessment, providing a calm and predictable classroom environment, building and leveraging positive relationships with students, and recognizing when students are under stress and helping them relax so that assessment becomes a natural part of their learning journey. These and other trauma-informed practices will not only help students do better, but will also help them build the resilience they need to become productive, well-rounded adults – adults who have the capacity to break the trauma cycle for their own children. And these practices, when fully implemented, will benefit ALL learners, not only those whose lives have been impacted by trauma.
Heissel, J. A., Adam, E. K., Doleac, J. L., Figlio, D. N., & Meer, J. Testing, stress, and performance: How students respond physiologically to high-stakes testing.
Sporleder, J., & Forbes, H. T. (2016). The trauma-informed school: A step-by-step implementation guide for administrators and school personnel. Boulder, CO: Beyond Consequences Institute.