Six Myths of Summative Assessment

Despite decades of research on sound assessment practices, misunderstandings and myths still abound. In particular, the summative purpose of assessment continues to be an aspect where opinions, philosophies, and outright falsehoods can take on a life of their own and hijack an otherwise thoughtful discourse about the most effective and efficient processes.

Assessment is merely the means of gathering information about student learning (Black, 2013). We either use that evidence formatively, through the prioritization of feedback and the identification of next steps in learning, or we use it summatively, through the prioritization of verifying the degree to which students have met the intended learning goals. Remember, it is the use of assessment evidence that distinguishes the formative from the summative.

The level of hyperbole that surrounds summative assessment, especially on social media, must stop. It’s not helpful, it’s often performative, and it’s sometimes even cynically motivated simply to attract followers, likes, and retweets. Outlined below are my responses to six of the most common myths about summative assessment. These aren’t the only myths, of course, but they are the six that seem most persistent and the six we have to undercut if we are to have authentic, substantive, and meaningful conversations about summative assessment.

Myth 1: “Summative assessment has no place in our 21st century education system”

While the format and substance of assessments can evolve, the need to summarize the degree to which students have met the learning goals (independent of what those goals are) and report to others (e.g., parents) will always be a necessary part of any education system in any century. Whether it’s content, skills, or 21st century competencies, the requirement to report will be ever-present.

However, it’s not just about being required; we should welcome the opportunity to report on student successes because it’s important that parents and even our larger community or the general public understand the impact we’re having on our students. If we started looking at the reporting process as a collective opportunity to demonstrate how effective we’ve been at fulfilling our mission, then a different mindset altogether about summative assessment may emerge. It’s easy to become both insular and hyperbolic about summative assessment, but using assessment evidence for the summative purpose is part of a balanced assessment system. Cynical caricatures of summative assessment detract from meaningful dialogue.

Myth 2: “Summative assessments are really just formative assessments we choose to count toward grade determination.”

Summative assessment often involves the repacking of standards for the purpose of reaching the full cognitive complexity of the learning. Summative assessment is not just the sum of the carefully selected parts; it’s the whole in its totality where the underpinnings are contextualized.

A collection of ingredients is not a meal. It’s a meal when all of those ingredients are thoughtfully combined. The ingredients are necessary to isolate in preparation; we need to know what ingredients are necessary and their quantity. But it’s not a meal until the ingredients are purposefully combined to make a whole.

Unpacking standards to identify granular underpinnings is necessary to create a learning progression toward success. We unpack standards for teaching (formative assessment) but we repack standards for grading (summative assessment). Isolated skills are not the same thing as a synthesized demonstration of learning. Reaching the full cognitive complexity of the standards often involves the combination of skills in a more authentic application, so again, pull apart for instruction, but pull back together for grading.

Myth 3: “Summative assessment is a culminating test or project at the end of the learning.”

While it can be, summative assessment is really a moment in time where a teacher examines the preponderance of evidence to determine the degree to which the students have met the learning goals or standards; it need not be limited to an epic, high-stakes event at the end. It can be a culminating test or project, as those would provide more recent evidence, but since we know some students need longer to learn, there always needs to be a pathway to recovery so that these culminating events don’t become disproportionately pressure-packed, one-shot deals.

Thinking of assessment as a verb often helps. We have, understandably, come to see assessments as nouns – and they often are – but it is crucial that teachers expand their understanding of assessment to know that all of the evidence examined along the way also matters; evidence is evidence. Examining all of the evidence to determine student proficiency along a few gradations of quality (i.e., a rubric) is not only a valid process, but one that should be embraced.

Myth 4: “Give students a grade and the learning stops.”

This causal relationship has never been established in the research. While it is true that grades and scores can interfere with a student’s willingness to keep learning, that reaction is not automatic. The nuances of whether the feedback was directed to the learning or the learner matter. Avraham Kluger and Angelo DeNisi (1996) emphasized the importance of student responses to feedback as the litmus test for determining whether feedback was effective.

There are no perfect feedback strategies, but there are more favorable responses. If we provide a formative score alongside feedback, and the students reengage with the learning and attempt to increase their proficiency, then, as the expression goes, no harm, no foul. If they disengage from the learning, then clearly there is an issue to be addressed. But again, despite the many forceful assertions made on social media and in other forums, that relationship is not causal.

Again, context and nuance matter, especially when it comes to the quality of feedback. Remember, when it comes to feedback, substance matters more than form. Tom Guskey (2019) submits that had the Ruth Butler (1988) study – the one so widely cited to support the assertion that grades stop learning – examined the impact of grades that were criterion-referenced and learning-focused rather than ego-based feedback directed at the learner (as in, you need to work a little harder), the results may have been quite different.

The impact in those studies fell disproportionately on lower-achieving students, so common sense would dictate that a student who received a low score and was told something to the effect of “You need to work harder” or “This is a poor effort” would likely want to stop learning. But a low score alongside a “now let’s work on” or “here’s what’s next” comment could produce a different response.

Myth 5: “Grades are arbitrary, meaningless, and subjective.”

Grades will be as meaningful or as meaningless as the adults make them; their existence is not the issue. Grades will be meaningful when they are representative of a gradation of quality derived from clear criteria articulated in advance. What some call subjective is really professional judgment. Judging quality against the articulated learning goals and criteria is our expertise at work.

Pure objectivity is the real myth. Teachers decide what to assess, what not to assess, the question stems or prompts, the number of questions, the format, the length, etc. We use our expertise to decide what sampling of learning provides the clearest picture. It is an erroneous goal to think one can eliminate all teacher choice or judgment from the assessment process. During one of our recent #ATAssessment chats on Twitter, Ken O’Connor reminded participants that the late, great Grant Wiggins often said: (1) We shouldn’t use subjective pejoratively and (2) The issue isn’t subjective or objective; the issue is whether our professional judgments are credible and defensible.

Myth 6: “Students should determine their own grades; they know better than us.”

Students should definitely be brought inside the process of grade determination; they should even be asked to participate and to understand how evidence is synthesized. But the teacher is the final arbiter of student learning; that is our expertise at work. This claim might sound like student empowerment, but it marginalizes teacher expertise. Are we really saying a student’s first experience is greater than a teacher’s total experience? Again, bring them inside the process, give them the full experience, but don’t diminish your expertise while doing so.
This does not have to be a zero-sum game; more student involvement need not lead to less teacher involvement. This is about expansion within the process to include students along every step of the way; however, our training, expertise, and experience matter in terms of accurately determining student proficiency. Students and parents are not the only users of assessment evidence. Many important decisions both in and out of school depend on the accuracy of what is reported about student learning, which means teachers must remain disproportionately involved in the summative process.

Combating these myths is important because there continues to be an oversimplified narrative that vilifies summative assessment as the source of all that’s wrong with our assessment practices. That mindset, assertion, or narrative is not credible. Not to mention, it’s naïve and reveals a lack of understanding of how a balanced assessment system operates within a classroom.

The overall point here is that we need grounded, honest, and reasoned conversations about summative assessment that are anchored in the research, not some performative label or hollow assertion that we defend at all costs through clever turns of phrases and quibbles over semantics.

Black, P. (2013). Formative and summative aspects of assessment: Theoretical and research foundations in the context of pedagogy. In J. H. McMillan (Ed.), SAGE handbook of research on classroom assessment (pp. 167–178). Thousand Oaks, CA: SAGE.

Butler, R. (1988). Enhancing and undermining intrinsic motivation: The effects of task-involving and ego-involving evaluation on interest and performance. British Journal of Educational Psychology, 58(1), 1–14.

Guskey, T. (2019). Grades versus feedback: What does the research really tell us? [Blog post]. Thomas R. Guskey & Associates. [Accessed 30 Nov. 2021].

Kluger, A., & DeNisi, A. (1996). The effects of feedback interventions on performance: A historical review, a meta-analysis, and a preliminary feedback intervention theory. Psychological Bulletin, 119(2), 254–284.



A Tone of Influence and Possibility

Let me start by saying the most obvious statement. The past year and a half has been incredibly hard. For everyone. The summer for me is usually a time for reflection, for finishing incomplete to-do lists, and for getting excited about the next year. I do not mind admitting that the last one was pretty hard for me this year. Watching (and re-watching) some episodes of Ted Lasso has helped a little. Reading some of my favorite authors has helped a lot. I found myself revisiting Essential Assessment: Six Tenets for Bringing Hope, Efficacy, and Achievement to the Classroom this summer and revelling again in the authors’ brilliant and elegant ways of describing assessment (and not just because they’re three of my favorite people!).

So I found myself reading the Accurate Interpretation chapter of Essential Assessment, partially to look for some nuggets to share with the educators in my own district. I came across a sentence that is rather perfect for now, “Educators who believe all students can learn deliberately adopt a tone of influence and possibility as a means to promote learning, especially in the toughest situations.” (p. 67) This sentence seems perfectly suited for the 2021-2022 school year and beyond.

Keeping the focus
As educators, we are always working to keep the focus on the things that we can control. There are so many things that can impact a student’s success, and many of them do not have anything to do with us. But the educators who truly believe in their hearts that all students can learn are deeply focused on the many, many things that we can control.

One incredibly powerful category of the things we can control is how we use the information that we gain from our assessments. Are we using the data that we have to help us change our actions which we can control, or to blame our students or situations that we cannot control? Do we talk about that data in ways that validate our own influence as educators and reveal the possibilities in how we can respond? Do we see data as a way to build our self-efficacy and our collective efficacy or just another challenge that cannot be met?

In data lies opportunity
In this school year, we should all be looking for ways to talk about our data that communicate how we can leverage that data to influence our actions in creating opportunities for our students. Our language should reflect both our belief that all students can learn and our commitment to doing the things that make that happen.

One of the biggest lessons that I have learned through the years is that there is nothing wrong with starting small. Find an area in the data and work together to address it. We often beat ourselves up for not doing everything all at once and perfectly. Give yourself and your teams permission to take one thing at a time to build knowledge and confidence.

This same perfect and elegant phrase, a tone of influence and possibility, should be applied to how we talk about learning with our students. We have all been inundated with deficit messages about how students and their learning have been impacted by the pandemic. I am encouraged that many of those messages are now focusing on acceleration and not just remediation. We need to continue to make sure that our language emphasizes the strengths in our students as well as the opportunities we are planning to address any concerns. Our assessment data should help students see where they are in relation to the learning goal, and our actions should help students see that there is a way for them to reach that goal.

Far-reaching impact
There is much that we can control and one of the most significant things we can control is our language and our reactions. If we move forward with a belief that we can use the information we gain about our students to create better possibilities for our students, the impacts will go far beyond our own psyche. We can also use a quote from another of my idols, Ted Lasso, “Doing the right thing is never the wrong thing,” which feels like it was custom-made for educators today as well.

Erkens, C., Schimmer, T., & Dimich Vagle, N. (2017). Essential assessment: Six tenets for bringing hope, efficacy, and achievement to the classroom. Solution Tree Press.


Responding to Trauma and Responding to Evidence – Both Needed, Both Possible


As regular readers of the STAC blog posts offered by my colleagues and me, you won’t be surprised to read the next sentence. Assessment is one of the most stress-inducing activities educators put students through. Perhaps some of you might even have some uncomfortable reactions when you recall some of your own test experiences. I should qualify this with the notion that I’m talking about assessment done poorly – the type of assessment I define as a number chase, instead of effective assessment which I believe is an evidence chase. But first, let me connect the dots to the title of this post.

In our newly released book Trauma-Sensitive Instruction: Creating a Safe and Predictable Classroom Environment, John Eller and I share a definition of trauma that really stopped us in our conversations because it was so powerful. The definition is from the work of Kathleen Fitzgerald Rice and Betsy McAlister Groves (2005) and states: “Trauma is an exceptional experience in which powerful and dangerous events overwhelm a person’s capacity to cope” (p. 3). The current stress we are all experiencing (and to varying degrees) brought on by the health pandemic far outweighs the stress induced by ineffective assessment practice. However, the combination of these two – poor assessment practice and additional trauma from the pandemic – may combine to negatively impact students to the point that their progress and academic growth might never recover in their remaining school years. It doesn’t have to be that way.

In my work with teachers, I’ll often hear that the constant stress and lack of stability make it difficult for students to calm down in order to feel safe, learn, and give their best when it comes time to perform on assessments. Teachers, then, would be wise to invest in assessment design that does not depend on a one-shot, “achieve or fail” test. Instead, formative assessment – the practice before the performance – must be a part of the evidence gathering, not just for students experiencing trauma, but for all students.

The impact of the pandemic was not evenly distributed nor evenly felt. For some of our students (and colleagues) it reawakened past traumatic experiences, all but incapacitating any opportunity for progress. For some students, the extended trauma exposure resulted in what Jim Sporleder and Heather Forbes (2016) refer to as toxic stress. Toxic stress can lead to issues that impair students’ normal development and success in the classroom, including their ability to focus and respond appropriately to teacher requests. The potential for assessment to be inaccurate or incomplete is very high. In Trauma-Sensitive Instruction we offer many scenarios like this one:

“Laura, a seventh-grade student, lives in a home where her father drinks excessively and comes home drunk. When he gets home, he is both verbally and physically abusive to Laura’s mom and any of the children he sees. Laura normally knows that when he comes home, it’s a good idea to stay out of his way and try to be invisible. She usually withdraws from the situation and tries not to cause a lot of issues.

In her classes, Laura uses similar behavior. Even though she may not understand what she is learning, she is reluctant to ask questions or get clarification. When working in groups, Laura contributes little to the conversation and goes along with the ideas of the group. She is reluctant to make eye contact with people (adults and peers) and appears to be disconnected and isolated.”

Relationships are critical
How might the teacher respond to Laura’s actions while also committing to gathering good evidence to assist her on her educational journey? If Laura is disengaged and disconnected, her teacher may not be able to assess what she knows. Somehow, her teacher has to be able to reduce her stress to be able to get an accurate read on her progress.

Again, the focus on why we assess comes into play. One of the powerful outcomes of a fair and equitable assessment process is the development of a positive relationship between teacher and student. The more assessment is viewed as an opportunity to demonstrate learning and to progress from “not yet” to “proficient”, the greater the view that teacher and student are on a journey together. In our research for Trauma-Sensitive Instruction, one of the keys that emerged to help buffer against adversity is having warm, positive relationships, which can prompt the release of anti-stress hormones. The choice to have assessment be a stress inducer versus a stress buffer comes down again to how the evidence is used by both the teacher and the student. If teachers can help students see how assessment data can help them learn, assessment may provoke less of a stressful reaction. If students think that assessment is being used only to label or sort them, it will not be seen as positive or productive, and they certainly won’t engage.

We also know that the trauma that came with the health pandemic did not arrive on every doorstep equally. The Center for Infectious Disease Research and Policy (CIDRAP) suggests that “COVID-19 exposures were significantly different across race and household income strata, with Black, Latino, and low-income families reporting higher rates of COVID-19–related stressors, which they attributed to systemic racism and structural inequities…” The Trauma Aware Schools project further suggests, “Symptoms resulting from trauma can directly impact a student’s ability to learn. In the classroom setting, this can lead to poor behavior, which can result in reduced instructional time and missed opportunities to learn.” It’s important, then, that educators avoid overemphasizing the importance of tests and the consequences of failure. The messages educators use to communicate about tests matter, and efforts should be made to reduce students’ anxiety and increase students’ self-efficacy beliefs. The message should focus on the role of assessments as a measure of students’ knowledge and ability at this moment.

Know what to look for–and how to react
One of the key bodily reactions to trauma is the fight-or-flight response. Educators often see this reaction during test time with those students who arrive and seem to “power down” immediately upon receiving their tests. They may quickly do as much as they can and turn in an incomplete exam, or they may just put their name at the top and stop there. When this level of trauma is occurring, the body may be releasing cortisol, which keeps the body on alert and primed to respond to the threat. It is important to note that while the body is primed to respond to the threat, the control center of the brain, the prefrontal cortex, is shut down. This is the place for logic and reasoning, key skills needed for assessments.

In a paper that focused on high-stakes testing, the authors (Jennifer A. Heissel, Emma K. Adam, Jennifer L. Doleac, David N. Figlio, Jonathan Meer) found that “Students whose cortisol noticeably spiked or dipped tended to perform worse than expected on the state test, controlling for past grades and test scores.” So, if summative tests unfairly penalize students who are experiencing high levels of trauma, it might be reasonable to conclude those tests aren’t generating the evidence we need them to, and they might not be aligned with the formative evidence we already have. The authors go on to state “A potential contributor to socioeconomic disparities in academic performance is the difference in the level of stress experienced by students outside of school.” This means as educators we have to be mindful that students will react differently to similar traumatic experiences and may need different kinds of support from us.

Let me summarize by going back to the title of this post. It’s important for educators to recognize the need to pair trauma-informed teaching with assessment processes to ensure we are not adding more trauma to our students’ lives. Strategies to consider include communicating the purpose for assessment, providing a calm and predictable classroom environment, building and leveraging positive relationships with students, and recognizing when students are under stress and helping them to relax in order to make assessment a natural part of their learning journey. These and other trauma-informed practices will not only help them do better, but will also help them build the resilience they need to be productive and well-rounded adults. Adults who have the capacity to break the trauma cycle for their own children. By the way, these practices, when fully implemented, will benefit ALL learners not only those whose lives have been impacted by trauma.

https://www.cidrap.umn.edu/news-perspective/2021/04/covid-studies-note-online-learning-stress-fewer-cases-schools-protocols

https://traumaawareschools.org/impact

Testing, Stress, and Performance: How Students Respond Physiologically to High-Stakes Testing
https://justicetechlab.github.io/jdoleac-website/research/HADFM_TestingStress.pdf

Sporleder, J., & Forbes, H. T. (2016). The trauma-informed school: A step-by-step implementation guide for administrators and school personnel. Boulder, CO: Beyond Consequences Institute.


Shift Away from Learning Loss and Focus on Relevant Assessments that Emphasize Creativity and Critical Thinking

There is a buzz in education circles and districts raising concerns about the significant learning loss affecting students after the absence of in-person learning and the multiple shifts between learning models. It is important to shift away from this idea of learning loss, as it focuses on the deficits of students and does not recognize the strengths and assets students developed during the pandemic. The Spencer Foundation and The Learning Policy Institute created six principles to consider when addressing this concept of learning loss. One principle identified is a focus on creative inquiry forms of learning. This was further explained as:

“Learning environments can engage in disciplinary and interdisciplinary inquiries through a variety of contexts and by using artistic forms of expression. All students deserve the opportunity to be treated as creative thinkers and makers. Learning should be an opportunity for play and authentic meaning-making, with the focus on how students are learning instead of drilling content in isolation. Children should be inspired, their curiosity encouraged, and their dreams fed. Research demonstrates that such environments provide rich opportunities for deep learning. Such forms of learning can support meeting multiple learning goals and create opportunities for students to imagine and contribute to thriving post-pandemic worlds that serve themselves and their communities” (Bang et al., 2021).

Refocusing
This report started the wheels turning in my brain. Summer is the time for educators to step away, refresh, and reflect. It is also the time when intentional and rich collaborative conversations about curriculum and assessment take place among teaching teams. They may be formal or informal conversations, but what is so powerful is the creativity that comes from these collective voices coming together.

What if this understanding of creative inquiry was shared with teams in advance of planning together? Would this guide teams to think differently about formative and summative assessments? Would it spark new ways to allow students to show their knowledge? Instead of defaulting to traditional assessment practices such as multiple choice, a short essay, or matching, the sky’s the limit in the ways that teams can collaborate to create assessments that are meaningful, relevant to their students’ own lived experiences, and allow students to show what they know. Stephanie Woldem, a brilliant math teacher in a south Minneapolis urban high school, asked students to analyze tables of data showing numbers of homicides, assaults, and arrests. She asked students to find out which police precincts had the most positive interactions compared to arrests and crimes (Matos, 2015). Interactions between the community and police were a growing concern. Woldem was very aware of these concerns among the students and created a relevant learning task with an assessment in her freshman algebra class. Policing impacted many of the young scholars in the community around the school. Woldem knew that having a deeper understanding of the data and helping the students make meaning of it was far more impactful than practicing a few algebra problems.

Bringing lessons to life
The teachers at this school also asked students to participate in data tours, using relevant city data, and students created questions to dig into the data. Often they came away with more questions than answers when analyzing the information. What Woldem learned is that in a subject such as math, where students had struggled to make meaning and find relevance, the content came alive for these students. She found a way to use content and assessments to motivate, engage, and inspire the students to become critical thinkers, using data in ways they had not before.

Through understanding what is going on in the lives of her students and the community in which they live, the teacher helped them develop the skills of critical and creative thinking. The three great assessment gurus who taught me much of what I know about assessment today, Cassie Erkens, Tom Schimmer, and Nicole Dimich, wrote a book called Growing Tomorrow’s Citizens. In the text they outline seven critical competencies necessary for success in a changing world. Two of these competencies that Woldem also used in her math classroom were critical thinking and creative thinking. While the authors argue that all seven are critical for deeper and more meaningful learning for students today, I want to focus on critical thinking and creative thinking when developing assessments for students.

Reimagining assessments
The authors identify the skill of critical thinking and also consider the dispositions, or how to help students behave as critical thinkers. Students need opportunities to practice this skill prior to providing evidence of their learning through a formative or summative assessment. As the math teacher did, “it is preferred that learners emerge as partners and key decision makers in their own experiences, allowing them to create relevance while exploring those areas and topics that naturally pique their curiosity” (Erkens et al., p. 77). Inquiry- and project-based learning can be valuable ways of helping learners practice these skills. As teachers work in collaborative teams over the summer, how can they reimagine a unit summative assessment to build in critical thinking skills in a way that is relevant for the students in their school community?

The second of the seven critical competencies is creativity. Erkens et al. state that “creativity is the backbone of innovation, and humanity thrives on innovation; indeed, it is what improves the quality of living across all ages and continents” (p. 144). Creativity may be difficult to measure, and far too often students do not have an opportunity to imagine, invent, or develop original ideas. Schools are looking for right and wrong answers to prepare students for standardized tests, which do not assess creativity. Thus, teaching this critical competency is left behind. A mentor once said to me that what is measured identifies the values of an organization. If a school looks only at standardized tests, then students often will not have experiences to produce, create, and collaborate to develop new solutions. However, if educators were to ask employers what top skills they are looking for in employees, creativity and developing new, innovative solutions often rise to the top. Schools are not preparing students well for the skills they need in our ever-changing world. Again, as educators work together to develop new curriculum and assessments over the summer, consider embedding the teaching of creativity into units of study. Allow students an opportunity to practice these skills. In fact, work together to create new opportunities for students as you get your own creative juices flowing. One recommendation from Erkens et al. is to teach the creative process. Melissa Purtee created this visual to help educators understand the creative process.

[Figure: Purtee’s creative process flowchart – inspiration, design, creation, reflection, and presentation]

(Purtee, 2017).

One middle school classroom in a STEM school had a specific design course. The students were asked to design musical instruments that were required to make a noise at a certain level of frequency. The students spent time walking through each step of the creative design process. They kept journals documenting and reflecting on each stage of the design process, which also served as a formative assessment for the teacher. The teacher read a few journal entries each day to get a grasp of how students were making sense of each step in relation to their instrument design. In the end, the students were asked to play the instrument. If the criteria were not met, the teacher provided feedback and the students were able to make another attempt, as failure and revision are a necessary part of the design process. However, it is not necessary to have a separate course to teach creativity; it can be embedded into any course or subject matter. To further understand the creative process, visit go.SolutionTree.com/assessment for a free reproducible of a table called “Instructional Questions for Teaching the Creative Process.”

Growing tomorrow’s citizens means engaging them in relevant and meaningful assessments that tap into creativity and critical thinking. Motivation and engagement increase when learners encounter assessments through which they can make personal connections to prior knowledge, consolidate new learning, and develop new skills. An emphasis on creativity and critical thinking in assessment practices will aid the acceleration of learning instead of a focus on the concept of learning loss. The time is now to engage in critical and creative thinking while collaborating with your own teams during the summer to develop authentic and relevant assessments that are meaningful for students.

Matos, A. (2015, December 6). South High teachers illustrate inequities through math. The Star Tribune. www.startribune.com
Bang, M., et al. (2021). Summer learning and beyond: Opportunities for creating equity. The Learning Policy Institute.
Erkens, C., et al. (2019). Growing Tomorrow’s Citizens in Today’s Classrooms: Assessing 7 Critical Competencies. Solution Tree Press.
Purtee, M. (2017, August 15). The essential framework for teaching creativity. The Art of Education. theartofeducation.edu/2017/08/15/essential-framework-teaching-creativity/


Documenting Learning over Time: Portfolios and Data Notebooks

Portfolios and data notebooks have been around a long time. I remember bringing home scrapbooks in June, after another year of elementary school, filled with glued-in samples of worksheets and drawings—artifacts of a year spent learning. I recall, many years later, opening my portfolio during a final summative conference in a university studio art class, and pulling out samples of work that represented the skills and knowledge I had developed throughout the course. Even more years later, after I had taught for some time, I recollect asking my students to chart their skills in recalling French vocabulary on multiple bar graph templates I had handed out at the beginning of a unit. These graphs were then placed in a dossier for reference. Each of these examples speaks to the act of documenting learning by collecting artifacts and data in a single place where they can be easily accessed and serve their intended purpose.

What is interesting about each of the examples above is that the intended purpose varied in each context. My elementary scrapbook was simply a collection of artifacts representing skills we had been developing or things I had chosen to create. It served as a kind of curated (largely by my teacher) album that I could share with my parents and then place in a box in our basement. My art portfolio was a catalyst for reflection and evaluation at the end of my studio art class. The individual pieces contained within served as a way to make a case for my growth and development in critical artistic skills. Sadly, this portfolio has also been relegated to my basement, gathering dust. I still feel tremendous emotional attachment to the artwork within but it has served its purpose. The data sets I invited my students to create in French class served the purpose of documenting growth and supporting conversations about how my students might improve further. The data the learners collected and graphed was intended to be a temporary “current state,” with new data added each time they attempted new strategies and spent time practicing.

The years we spend in educational contexts represent a vast array of experiences. Children and youth spend a tremendous proportion of their days in classrooms and schools (face-to-face or virtual) and the learning they experience is certainly worthy of documentation. Their educational stories deserve representation. The great thing about data notebooks and portfolios is that we can document the learning journey and we can use the documentation as a catalyst for reflection, analysis, goal setting, and growth. We now know that these collections of artifacts and data can serve a purpose beyond becoming an album or a capstone collection that sits in a basement—they can begin new learning conversations.


The 4×2 of Student Self-Reflection

The story of self-reflection within the context of my own career is a good news/bad news story. First, the good news. While there is much I’m not proud of when I reflect on the early part of my career – especially from an assessment and grading perspective (e.g., zeros, no reassessment) – the one thing I did do was have my students engage in some self-reflection.

The bad news? Well, I didn’t engage my students in self-reflection very often and when I did, it was awful; I didn’t know what I was doing, there was no structure to it, and it, more often than not, ended up being a waste of time.

Self-reflection is essential to the development of our students’ metacognitive awareness, which allows them to plan, monitor, and assess their learning and themselves as learners (Clark, 2012; Jones, 2007). Metacognition entails both the knowledge and regulation of one’s cognition (Pintrich, 2002). Through the processes of self-judgment and self-reaction (Zimmerman, 2011), self-reflection plays an essential role in the cyclical process of metacognitive awareness before, during, and after the learning.


Uncovering Implicit Bias in Assessment, Feedback and Grading

Education is a noble profession. It is a profession that aims to cultivate diverse thinkers and aspires to nurture personal growth. It is a profession that can lift humanity’s spirits and help humankind strive to be the best version of itself—the “great equalizer of the conditions of men,” as Horace Mann famously stated in the 19th century. However, even with great people, a worthy goal, and an admirable vision, the opposite can often be the case.

Unfortunately, education can also be the great unequalizer, where personal biases can inform practice and policy development, stifle student growth, enforce discriminatory policies, and even socially isolate students. According to some researchers, implicit educator biases may contribute to a racial achievement gap; specifically, teacher assumptions about students’ abilities based on race, culture, or values can have a negative impact (van den Bergh et al., 2010). Unknown to the educator, these personal biases may create imagery of an ideal student, often seen through a white-privilege lens because of society’s tendency toward whiteness, distorting our interactions with students of color.

Without social awareness and continuous self-monitoring, educators may let implicit bias become an influential factor in their pedagogy, shaping everything from assessment to grading. This blog will discuss how personal biases can appear in our teaching and learning practices if educators are not diligent. I will focus on three of the most consequential modes of teaching and learning: assessment, feedback, and grading.

Assessment Bias

Without attention, teachers may create assessments that reflect their values and experiences and ignore those of their students. The teacher may use language that they are more familiar with in their queries or create prompts influenced by their personal experiences. Ultimately students may find it hard to relate to the questions—potentially leaving some students unable to perform to the best of their abilities. For assessments to be less biased, a teacher must consider all backgrounds, ethnicities, genders, and identities to ensure that their lived experience isn’t the only one represented on an assessment.

By being more introspective when developing assessments, teachers can lessen the chance they produce a personally irrelevant assessment for their students. The impact of this irrelevancy could be low student performance, disinterest in the task, and even apathy toward school.

To become more aware of their biases when creating assessments, teachers can ask themselves the following questions as they develop items:

  1. Does the assessment give the sense that the teacher has unwavering support and is a partner in a student’s success?
  2. Are the questions seeking to understand the student or judge them?
  3. Does the teacher draw on the students’ life situations, interests, and curiosities when creating problems/prompts? – Adapted from Tomlinson (2015).

Microaggressions in Feedback 

Suppose we ignore our implicit biases when speaking with students. In that case, we run the risk of putting both parties into what social psychologist Albert Bandura calls “a downward course of mutual discouragement” (Bandura 1997, 234). A student’s reaction to deficit-based feedback may result in the teacher reacting in kind. Once this cycle starts, a student’s self-belief is now at risk.

Microaggressions and subtle discriminations can exist in the feedback process, and when they do, they may severely limit feedback acceptance.

Teachers can limit bias in their feedback by using the student’s thinking, rather than their own, to grow the student. Consider the following examples:

Example A

Feedback that Uses Teacher Thinking: You didn’t include [these details] about [person] in your essay. Try [these words].

Feedback that Uses Student Thinking: Tell me more about these words [here]. I am interested to know why you think [this word] didn’t work instead? Oh, okay, that would work. You should add what you just said to your paragraph, and it was perfect.

Example B

Feedback that Uses Teacher Thinking: When I write, I try and think about [detail]. Remember when I taught you the three-step process? No? The one that is in your textbook? That’s the most effective process.

Feedback that Uses Student Thinking: What did you think about when you wrote [this]? Seems like that interests you? Yeah, I can see you are passionate about [that]. What are the first three things you did when you wrote [this]? That is an interesting place to start. Could I convince you to start here? No? Okay, that makes sense. Have to make this work for you.

Example C

Feedback that Uses Teacher Thinking: In this graph, I would start [here] because this information is important. Has anyone heard of the [rhyme name] to remember the key features of a graph? No? Oh, this helped me a lot.

Feedback that Uses Student Thinking: In this graph, what information did you think was essential for you to begin this problem? I’m surprised to hear you say that because yesterday you said something different, what changed? Interesting, I saw you smile as you were talking. Why? Yeah, I agree you are getting this concept more. Are you using any strategies to help you learn this? Yes. Well, [that strategy] is undoubtedly helping you.

These scenarios are fictional, but the point is that teachers should always be aware of their language. Otherwise, they can inadvertently make a student feel unequal and devalued in the class and even the school. In short, words matter.

Race Bias in Grading Practices

Teachers must judge student performance fairly and accurately. It is our professional duty. Inaccurate judgments have the potential not only to alter grades but could negatively affect teacher-student relationships, distort a student’s self-concept, or reduce opportunities to learn (Cohen and Steele 2002). One factor that can lead to a misrepresentation of a grade is teacher race and ethnicity bias. A student’s racial or ethnic group, socioeconomic class, or gender can substantially bias a teacher’s judgment of student performance. Any internalized racial biases can activate stereotypes and lead teachers to utilize discriminatory performance evaluations (Wood and Graham 2010).

For example, several studies found substantial differences in students’ performance judgments from various racial subgroups when the teacher subconsciously subscribed to the general stereotype that African American and Latino students generally don’t perform as well as their White and Asian counterparts (Ready and Wright 2011).

Different minority statuses can affect teacher perceptions in performance evaluation, leading to inaccurate grades that potentially harm students’ perception of their academic experience (Ogbu & Simons, 1998). In other words, students may feel like school is insignificant, unsupportive, or even harmful.

To help lessen the likelihood that implicit personal bias influences the grading process, teachers can democratize the grading process. They can use learning evidence instead of points and employ a modal interpretation of gradebook scores instead of averages. They can use a skills-focused curriculum instead of a content-focused one. Perhaps most important, they can involve the student in the grading process by infusing more self-evaluation moments into their instruction.
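The contrast between averaging and a modal interpretation can be shown in a few lines. This is only a sketch; the 4-point proficiency scale and the particular scores are invented for illustration:

```python
from statistics import mean, mode

# Hypothetical scores on a 4-point proficiency scale for one standard,
# recorded across a grading period (earliest first)
scores = [1, 2, 4, 4, 4]

# Averaging lets early struggles permanently drag the grade down
average_grade = mean(scores)  # 3.0

# A modal interpretation reports the student's most frequent (typical) level
modal_grade = mode(scores)    # 4

print(average_grade, modal_grade)
```

Here the average never recovers from the first two scores, while the mode reflects the level the student most consistently demonstrates once learning has occurred.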

School leaders should explore ways to evaluate pedagogical practices through a racial and equity lens and observe classroom interactions between teachers and students. School leaders should also continue training on white privilege and its influence over the status quo, and teachers should audit their own evaluations of student performance for fairness and accuracy.

The Work Ahead

Although we may feel like we are objective and rational people, we all have biases.  We all have values, beliefs, and assumptions that help us make sense of what is happening in our lives and guide our interactions with others. For the most part, these values, beliefs, and assumptions guide us in positive and productive directions, but the interplay between these same values and social interactions can produce implicit biases that distort our decisions, perspectives, and actions. We must notice, monitor, and manage these distortions to achieve a goal of racial equity in school and life. If we don’t, we are at risk of our unconscious biases harming our pedagogy, our relationships with students, and our perception of their needs.

Bandura, A. (1997). Self-efficacy: The exercise of control. New York, NY: W. H. Freeman & Co, Publishers.

Cohen, G. L., & Steele, C. M. (2002). A barrier of mistrust: How negative stereotypes affect cross-race mentoring. In J. Aronson (Ed.), Improving academic achievement: Impact of psychological factors on education (pp. 303–327). San Diego, CA: Academic.

Ogbu, J. U., & Simons, H. D. (1998). Voluntary and involuntary minorities: A cultural-ecological theory of school performance with some implications for education. Anthropology and Education Quarterly, 29(2), 155-188.

Ready, D.D., & Wright, D. L. (2011). Accuracy and Inaccuracy in Teachers’ Perceptions of Young Children’s Cognitive Abilities: The Role of Child Background and Classroom Context. https://doi.org/10.3102/0002831210374874

Tenenbaum, H. R., & Ruck, M. D. (2007). Are teachers’ expectations different for racial minorities than for European American students? A meta-analysis. Journal of Educational Psychology, 99(2), 253–273.

Tomlinson, C. A. (2015). “Teaching up”: Teaching for excellence in academically diverse classrooms.

van den Bergh, L., Denessen, E., Hornstra, L., Voeten, M., & Holland, R.W., (2010). The implicit prejudiced attitudes of teachers: Relations to teacher expectations and the ethnic achievement gap. American Educational Research Journal, 47, 497–527.

Wood, D., & Graham, S. (2010). “Why race matters: social context and achievement motivation in African American youth.” In Urdan, T. and Karabenick, S. (Eds.) The Decade Ahead: Applications and Contexts of Motivation and Achievement (Advances in Motivation and Achievement, Vol. 16 Part B) (pp. 175-209). Emerald Group Publishing Limited, Bingley.


Data as a Flashlight: Using the Evidence to Guide the Journey (Yours and Theirs)

My granddaughter was struggling with the latest topic in her grade 3 math class, and her recent assessment result confirmed that she did not fully understand the learning target of patterns and the equations that support them. Determining patterns is not always an easy process, as this example indicates:

2, 6, 3, 9, 6, 18, 15…
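One reading of this sequence (my own inference, not stated in the original lesson) is that it alternates between multiplying by 3 and subtracting 3. A short sketch makes the rule explicit:

```python
def extend_pattern(seed, steps):
    """Alternate multiply-by-3 and subtract-3: the rule this sequence appears to follow."""
    sequence = [seed]
    for i in range(steps):
        nxt = sequence[-1] * 3 if i % 2 == 0 else sequence[-1] - 3
        sequence.append(nxt)
    return sequence

print(extend_pattern(2, 7))  # → [2, 6, 3, 9, 6, 18, 15, 45]
```

Under that assumed rule, the next term after 15 would be 45.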

With a big test coming up, my daughter-in-law reached out to me to help get my granddaughter past the block and gain some confidence in her ability to master the concept. We connected online a few times over the days before the test and worked through a lot of questions and strategies. As she grasped the concepts and different ways to get to the solution, I could see her confidence soar. By the time we concluded all of the practice and she routinely got every solution, she was excited to demonstrate her skills on the assessment. As I write this post, it’s been well over a week since the assessment was completed and my granddaughter has not received any feedback.


What type of questions do we put on tests? Better yet, why?

While working recently with a high school mathematics team to write quality common assessments, I asked the teachers to bring in their previously used unit tests. They had already been giving common assessments as collaborative teams for about three years, so their unit assessments were aligned. However, I noticed that every item on every exam throughout the department was multiple choice.

When I asked the algebra team about the reasoning behind using only multiple-choice items, I was told it was necessary in order to quickly analyze the data as a team and give results to students. When I asked what teachers or students did with the results, I was met with silence. When I asked how teachers and students learned from the common misconceptions shown on the exam—again, silence.