

Making Learning Matter

I have two adult children who have spent multiple years immersed in post-secondary education. Their experiences in this phase of schooling have impacted my own thinking about many aspects of assessment. Most recently, I have been thinking about the purpose assessment holds in communicating the importance and relevance of what is being learned.

In my assessment work, I have long considered relevance as part of strong assessment design. Humans simply perform better when experiences matter to them, when the tasks they are engaged in hold strong meaning in terms of interest and aspiration. Considering authentic audiences and engaging purposes when crafting assessment experiences is part of effective assessment architecture.

However, my concept of relevance expanded when my eldest child entered medicine. In this professional college, assessment is a large part of her daily experience. She is assessed on her understanding of anatomy and pathology, as well as her skill in engaging with patients and colleagues and analyzing diagnostic information. Her assessment experiences connect directly to the work she will be engaged in after graduation, and this makes total sense. However, even understanding the logic of this alignment between assessment design and real-life experience, the notion of relevance became even clearer to both of us following her first term in the college.



Accurate Interpretation: Think Big, Start Small

Effectively using the data that we gain from our assessments is always important, and perhaps never more so than right now. There is a reason that accurate interpretation is a tenet in the Solution Tree Assessment Center model, and it is certainly worth taking the time to explore. There are a few definitions of the word “interpret”; some focus on more artistic endeavors, while many others focus on the idea of explaining something. As educators, we must interpret things each and every day—from whether we will be able to accomplish everything in our lesson plan to whether our students are really understanding what we want them to know. We should strive to draw informed inferences in our work, recognizing that doing this requires professional knowledge, skill, and ongoing effort.


Back to the Basics

Educators across the country are sharing how this school year was far more difficult than the previous two years during the pandemic. There have been many pivots (I know, I know . . . that is like a four-letter word), many shifts, and many concerns raised as students return to school and socialize with peers they have not seen for a long time. This was a year like no other. As it comes to an end, educators have an opportunity to take a breath and reflect on what worked well and areas in which to seek growth. There is also an opportunity to think about going back to the basics with assessment practices. The pace of the year had many teachers juggling way too many responsibilities; summer brings time to reflect and opportunities for collaboration. This time allows teams to dig into the skills and knowledge students struggled with the most and design formative and summative assessment practices that align with the standards.


Building Up or Breaking Down: How Assessment Impacts a Culture of Learning

“Whether we plan it or not, culture will happen. Why not create the culture we want?”

—Carmine Gallo, The Storyteller’s Secret

 

Have you ever started a new book and just . . . lost interest? Have you ever started a book and found yourself so enthralled that you could hardly put it down? Each school year, educators have the opportunity to write a new story—and the beginning of that story is critical. No matter the setting (face-to-face, virtual, blended), many educators begin with a similar focus: creating a culture of learning. Time dedicated to this work varies. Some educators feel the pressure of beginning content and spend minimal time focused on culture. Some believe the work of culture never truly ends. Regardless of where you fall on this spectrum, do you know the impact your assessment practices have on the culture you are trying to create?


Listening to Our Learners

“Feedback is honesty. Don’t just tell me ‘good job’ when I didn’t.” —Middle years student

My colleagues and I work with systems across North America that are undergoing assessment reform. Educators and leaders alike are asking themselves how to shift their assessment practices, when to do it, and what it will entail. The questions generated in a single coaching session illuminate the complexity of this shift. Teachers are wondering how assessment should be designed, which symbol (if any) to attach to products and performances, and how to respond to assessment evidence in ways that will advance learning. This work is both significant and challenging, and no one is taking it lightly. However, in the quest to “get it right,” adults often forget a key source of wisdom and insight available to us every single day. Perhaps we see this source as a recipient of our refined assessment system, rather than as a collaborative partner in its design. Whatever the reason, maybe it is time we turned to this source—our students—and consulted them on decisions we are making.



Teacher Decision-Making Matters: The Influence of Teacher Choice on Student Learning

Each day, we make choices. Some decisions yield good outcomes, and some do not. Whatever the result, we know these decisions are the ones we made; we considered the situation and chose. However, this sense of agency in our choices may be illusory. Researchers Banaji and Greenwald (2016) found that “our rationality is often no match for our automatic preferences” (p. 42). Other studies similarly find that we make decisions less through conscious processing (Maitland & Sammartino, 2015). Instead, much of what factors into our choices results from quicker, more automatic processes that are less available to our conscious thinking (Blumenthal-Barby, 2016). These automatic processes are what are commonly known as heuristics.

At their core, heuristics are “judgments of likelihood in the absence of deliberative intent” (Fischhoff et al., 2002, p. 5). They are a kind of natural assessment that can influence the judgment of a person, place, or thing without being used deliberately or strategically (Blumenthal-Barby, 2016). For example, someone may pass a broken-down building as they walk to work. Upon seeing the building, and without the time to thoroughly investigate the reason for its condition, they rely on heuristics and conclude that fire destroyed the building and that it is unsafe. When someone does this, they make generalized judgments about the building without thoroughly investigating its history, which, as you are probably thinking, can sometimes be inaccurate.


Six Myths of Summative Assessment

Despite decades of research on sound assessment practices, misunderstandings and myths still abound. In particular, the summative purpose of assessment continues to be an aspect where opinions, philosophies, and outright falsehoods can take on a life of their own and hijack an otherwise thoughtful discourse about the most effective and efficient processes.

Assessment is merely the means of gathering information about student learning (Black, 2013). We either use that evidence formatively, through the prioritization of feedback and the identification of next steps in learning, or we use it summatively, through the prioritization of verifying the degree to which students have met the intended learning goals. Remember, it is the use of assessment evidence that distinguishes the formative from the summative.

The level of hyperbole that surrounds summative assessment, especially on social media, must stop. It’s not helpful, it’s often performative, and it is sometimes even cynically motivated simply to attract followers, likes, and retweets. Outlined below are my responses to six of the most common myths about summative assessment. These aren’t the only myths, of course, but they are the six that seem most persistent and the six we have to undercut if we are to have authentic, substantive, and meaningful conversations about summative assessment.

Myth 1: “Summative assessment has no place in our 21st century education system.”

While the format and substance of assessments can evolve, the need to summarize the degree to which students have met the learning goals (independent of what those goals are) and report to others (e.g., parents) will always be a necessary part of any education system in any century. Whether it’s content, skills, or 21st century competencies, the requirement to report will be ever-present.

However, it’s not just about being required; we should welcome the opportunity to report on student successes because it’s important that parents, and even our larger community or the general public, understand the impact we’re having on our students. If we started looking at the reporting process as a collective opportunity to demonstrate how effective we’ve been at fulfilling our mission, then a different mindset altogether about summative assessment might emerge. It’s easy to become both insular and hyperbolic about summative assessment, but using assessment evidence for the summative purpose is part of a balanced assessment system. Cynical caricatures of summative assessment detract from meaningful dialogue.

Myth 2: “Summative assessments are really just formative assessments we choose to count toward grade determination.”

Summative assessment often involves the repacking of standards for the purpose of reaching the full cognitive complexity of the learning. Summative assessment is not just the sum of the carefully selected parts; it’s the whole in its totality where the underpinnings are contextualized.

A collection of ingredients is not a meal. It’s a meal when all of those ingredients are thoughtfully combined. Isolating the ingredients during preparation is necessary; we need to know which ingredients are required and in what quantity. But it’s not a meal until the ingredients are purposefully combined to make a whole.

Unpacking standards to identify granular underpinnings is necessary to create a learning progression toward success. We unpack standards for teaching (formative assessment) but we repack standards for grading (summative assessment). Isolated skills are not the same thing as a synthesized demonstration of learning. Reaching the full cognitive complexity of the standards often involves the combination of skills in a more authentic application, so again, pull apart for instruction, but pull back together for grading.

Myth 3: “Summative assessment is a culminating test or project at the end of the learning.”

While it can be, summative assessment is really a moment in time when a teacher examines the preponderance of evidence to determine the degree to which students have met the learning goals or standard; it need not be limited to an epic, high-stakes event at the end. It can be a culminating test or project, as those would provide more recent evidence, but since we know some students need longer to learn, there always needs to be a pathway to recovery so that these culminating events don’t become disproportionately pressure-packed, one-shot deals.

Thinking of assessment as a verb often helps. We have, understandably, come to see assessments as nouns – and they often are – but it is crucial that teachers expand their understanding of assessment to know that all of the evidence examined along the way also matters; evidence is evidence. Examining all of the evidence to determine student proficiency along a few gradations of quality (i.e., a rubric) is not only a valid process, but one that should be embraced.

Myth 4: “Give students a grade and the learning stops.”

This causal relationship has never been established in the research. While it is true that grades and scores can interfere with a student’s willingness to keep learning, that reaction is not automatic. The nuances of whether the feedback was directed at the learning or the learner matter. Avraham Kluger and Angelo DeNisi (1996) emphasized the importance of student responses to feedback as the litmus test for determining whether feedback was effective.

There are no perfect feedback strategies, but there are more favorable responses. If we provide a formative score alongside feedback, and the students reengage with the learning and attempt to increase their proficiency, then, as the expression goes, no harm, no foul. If they disengage from the learning, then clearly there is an issue to be addressed. But again, despite the many forceful assertions made on social media and in other forums, that relationship is not causal.

Again, context and nuance matter, especially when it comes to the quality of feedback. Remember, when it comes to feedback, substance matters more than form. Tom Guskey (2019) submits that had the Ruth Butler (1988) study, the one so widely cited to support the assertion that grades stop learning, examined the impact of grades that were criterion-referenced and learning-focused versus ego-based feedback directed at the learner (as in, you need to work a little harder), then the results of those studies might have been quite different.

The impact in those studies fell disproportionately on lower-achieving students, so common sense would dictate that if you received a low score and were told something to the effect of, “You need to work harder” or “This is a poor effort,” you would likely want to stop learning. But a low score alongside a “now let’s work on” or “here’s what’s next” comment could produce a different response.

Myth 5: “Grades are arbitrary, meaningless, and subjective.”

Grades will be as meaningful or as meaningless as the adults make them; their existence is not the issue. Grades will be meaningful when they are representative of a gradation of quality derived from clear criteria articulated in advance. What some call subjective is really professional judgment. Judging quality against the articulated learning goals and criteria is our expertise at work.

Pure objectivity is the real myth. Teachers decide what to assess, what not to assess, the question stems or prompts, the number of questions, the format, the length, etc. We use our expertise to decide what sampling of learning provides the clearest picture. It is an erroneous goal to think one can eliminate all teacher choice or judgment from the assessment process. During one of our recent #ATAssessment chats on Twitter, Ken O’Connor reminded participants that the late, great Grant Wiggins often said: (1) We shouldn’t use subjective pejoratively and (2) The issue isn’t subjective or objective; the issue is whether our professional judgments are credible and defensible.

Myth 6: “Students should determine their own grades; they know better than us.”

Students should definitely be brought inside the process of grade determination and even asked to participate in and understand how evidence is synthesized. But the teacher is the final arbiter of student learning; that is our expertise at work. This claim might sound like student empowerment, but it marginalizes teacher expertise. Are we really saying a student’s first experience is greater than a teacher’s total experience? Again, bring them inside the process, give them the full experience, but don’t diminish your expertise while doing so.

This does not have to be a zero-sum game; more student involvement need not lead to less teacher involvement. This is about expansion within the process to include students along every step of the way; however, our training, expertise, and experience matter in terms of accurately determining student proficiency. Students and parents are not the only users of assessment evidence. Many important decisions both in and out of school depend on the accuracy of what is reported about student learning, which means teachers must remain disproportionately involved in the summative process.

Combating these myths is important because there continues to be an oversimplified narrative that vilifies summative assessment as the source of all that is wrong with our assessment practices. That mindset, assertion, or narrative is not credible. Not to mention, it’s naïve and really does reveal a lack of understanding of how a balanced assessment system operates within a classroom.

The overall point here is that we need grounded, honest, and reasoned conversations about summative assessment that are anchored in the research, not in some performative label or hollow assertion that we defend at all costs through clever turns of phrase and quibbles over semantics.

Black, P. (2013). Formative and summative aspects of assessment: Theoretical and research foundations in the context of pedagogy. In J. H. McMillan (Ed.), SAGE handbook of research on classroom assessment (pp. 167–178). Thousand Oaks, CA: SAGE.

Butler, R. (1988). Enhancing and undermining intrinsic motivation: The effects of task-involving and ego-involving evaluation on interest and performance. British Journal of Educational Psychology, 58(1), 1–14.

Guskey, T. (2019). Grades versus feedback: What does the research really tell us? [Blog post]. Thomas R. Guskey & Associates. Accessed 30 Nov. 2021.

Kluger, A., & DeNisi, A. (1996). The effects of feedback interventions on performance: A historical review, a meta-analysis, and a preliminary feedback intervention theory. Psychological Bulletin, 119(2), 254–284.



A Tone of Influence and Possibility

Let me start by saying the most obvious statement: the past year and a half has been incredibly hard. For everyone. The summer for me is usually a time for reflection, for finishing incomplete to-do lists, and for getting excited about the next year. I do not mind admitting that the last one was pretty hard for me this year. Watching (and re-watching) some episodes of Ted Lasso has helped a little. Reading some of my favorite authors has helped a lot. I found myself revisiting Essential Assessment: Six Tenets for Bringing Hope, Efficacy, and Achievement to the Classroom this summer and reveling again in the authors’ brilliant and elegant ways of describing assessment (and not just because they’re three of my favorite people!).

So I found myself reading the Accurate Interpretation chapter of Essential Assessment, partially to look for some nuggets to share with the educators in my own district. I came across a sentence that is rather perfect for now: “Educators who believe all students can learn deliberately adopt a tone of influence and possibility as a means to promote learning, especially in the toughest situations” (p. 67). This sentence seems perfectly suited for the 2021-2022 school year and beyond.

Keeping the focus
As educators, we are always working to keep the focus on the things that we can control. There are so many things that can impact a student’s success, and many of them do not have anything to do with us. But the educators who truly believe in their hearts that all students can learn are deeply focused on the many, many things that we can control.

One incredibly powerful category of things we can control is how we use the information that we gain from our assessments. Are we using the data that we have to help us change our actions, which we can control, or to blame our students or situations that we cannot control? Do we talk about that data in ways that validate our own influence as educators and reveal the possibilities in how we can respond? Do we see data as a way to build our self-efficacy and our collective efficacy, or as just another challenge that cannot be met?

In data lies opportunity
In this school year, we should all be looking for ways to talk about our data that communicate how we can leverage that data to influence our actions in creating opportunities for our students. Our language should reflect both our belief that all students can learn and our commitment to doing the things that make that happen.

One of the biggest lessons that I have learned through the years is that there is nothing wrong with starting small. Find an area in the data and work together to address it. We often beat ourselves up for not doing everything all at once and perfectly. Give yourself and your teams permission to take one thing at a time to build knowledge and confidence.

This same perfect and elegant phrase, a tone of influence and possibility, should be applied to how we talk about learning with our students. We have all been inundated with deficit messages about how students and their learning have been impacted by the pandemic. I am encouraged that many of those messages are now focusing on acceleration and not just remediation. We need to continue to make sure that our language emphasizes the strengths in our students as well as the opportunities we are planning to address any concerns. Our assessment data should help students see where they are in relation to the learning goal, and our actions should help students see that there is a way for them to reach that goal.

Far-reaching impact
There is much that we can control, and one of the most significant things we can control is our language and our reactions. If we move forward with a belief that we can use the information we gain about our students to create better possibilities for them, the impacts will go far beyond our own psyche. We can also use a quote from another of my idols, Ted Lasso, “Doing the right thing is never the wrong thing,” which feels like it was custom-made for educators today as well.

Erkens, C., Schimmer, T., & Dimich Vagle, N. (2017). Essential assessment: Six tenets for bringing hope, efficacy, and achievement to the classroom. Solution Tree Press.


Winning After a Loss

Educators have been inundated with news articles and media posts focused on the amount of “learning loss” that students have experienced since the requirement to close on-campus learning in March of 2020. While some schools were able to fully return to onsite instruction for the ‘20-’21 school year, others were required to remain fully virtual, and many offered hybrid approaches to learning. Even those who were able to return to a face-to-face environment experienced times of entire school or class quarantines, emergency returns to virtual learning, and staff shortages due to the COVID-19 pandemic. With those challenges came the necessity for educators to learn to teach their subject areas across many platforms while also taking on the task of investigating new technologies and addressing safety concerns for themselves, their families, and their students. Are there gaps in student learning? Absolutely. But certainly not for a lack of blood, sweat, and millions of tears by every educator in our field.