Effectively using the data that we gain from our assessments is always important, and perhaps never more so than right now. There is a reason that accurate interpretation is a tenet in the Solution Tree Assessment Center model, and it is certainly worth taking the time to explore. There are a few definitions of the word “interpret”; some focus on more artistic endeavors, while many others focus on the idea of explaining something. As educators, we must interpret things each and every day—from whether we will be able to accomplish everything in our lesson plan to whether our students are really understanding what we want them to know. We should strive to draw informed inferences in our work, recognizing that doing this requires professional knowledge, skill, and ongoing effort.
Building Up or Breaking Down: How Assessment Impacts a Culture of Learning
“Whether we plan it or not, culture will happen. Why not create the culture we want?”
—Carmine Gallo, The Storyteller’s Secret
Have you ever started a new book and just . . . lost interest? Have you ever started a book and found yourself so enthralled that you could hardly put it down? Each school year, educators have the opportunity to write a new story—and the beginning of that story is critical. No matter the setting (face-to-face, virtual, blended), many educators begin with a similar focus: creating a culture of learning. Time dedicated to this work varies. Some educators feel the pressure of beginning content and spend minimal time focused on culture. Some believe the work of culture never truly ends. Regardless of where you fall on this spectrum, do you know the impact your assessment practices have on the culture you are trying to create?
Listening to Our Learners
“Feedback is honesty. Don’t just tell me ‘good job’ when I didn’t.” —Middle years student
My colleagues and I work with systems across North America that are undergoing assessment reform. Educators and leaders alike are asking themselves how to shift their assessment practices, when to do it, and what it will entail. The questions generated in a single coaching session illuminate the complexity of this shift. Teachers are wondering how assessment should be designed, which symbol (if any) to attach to products and performances, and how to respond to assessment evidence in ways that will advance learning. This work is both significant and challenging, and no one is taking it lightly. However, in the quest to “get it right,” adults often forget a key source of wisdom and insight available to us every single day. Perhaps we see this source as a recipient of our refined assessment system, rather than as a collaborative partner in its design. Whatever the reason, maybe it is time we turned to this source—our students—and consulted them on decisions we are making.
Assessment CAN SHIFT in Higher Education
This is a guest post written by Nina Pak Lui and Colin Madland
Assessment is at the heart of formal learning environments. Assessment practices in K-12 contexts have been the subject of significant research, especially since the late 20th century. However, the assessment practices and beliefs of higher education instructors have not been researched to nearly the same degree. This likely stems from the relative lack of preparation in pedagogy and assessment that university instructors outside of Schools of Education receive (Massey et al., 2020). As a result, higher education has much to learn from K-12. In this post, we outline the disconnect between modern pedagogy and assessment practices in higher education and offer two ways assessment practices can shift.
Background
It is helpful to think about assessment in terms of a model. In Knowing What Students Know, Pellegrino et al. (2001) provide the accessible model of the “assessment triangle,” a modified version of which we describe below. The assessment triangle comprises three interdependent elements: (1) a cognitive model of the domain, which can be understood for our purposes as learning standards; (2) an instrument or process used to gather evidence of proficiency; and (3) an interpretation of the evidence of learning. High-quality assessment practice requires that these three elements be in alignment with one another. For example, if the learning standard specifies that learners will be able to critically analyze historical texts, but the instrument used to gather evidence asks learners to identify a correct answer, there is misalignment between those two elements, and the interpretation will lack validity.
In the mid- to late-20th century, assessment practices and pedagogy in higher education were in quite close alignment. The prevailing theory of learning was behaviourism, popularized by B. F. Skinner, who argued that learning is maximized when learners receive immediate, positive feedback for supplying the correct answer to a question. This led to pedagogical practices that prioritized breaking concepts down into smaller and smaller ideas and having learners memorize the correct answers. Accordingly, assessment practices prioritized instruments filled with selected-response items requiring examinees to recognize correct answers.
Over time, however, our understanding of the cognitive processes involved in learning has evolved. We now recognize that learning is a complex social process and that knowledge is constructed through social interactions. As such, pedagogy in K-12, and increasingly in higher education, has shifted away from rote memorization and toward the 21st century goals of collaboration, critical thinking, analysis, creativity, and lifelong learning. Unfortunately, as Shepard (2000) and Lipnevich et al. (2020) point out, assessment practices in higher education remain stuck in the behaviourist views of the mid-20th century, with a heavy emphasis on high-stakes selected-response tests.
For me, Nina, the stages of Chappuis & Stiggins’ Assessment Development Model (ADM) and key principles of Standards Based Learning (SBL) significantly shifted my assessment practices to reflect modern assessment theory and aims of 21st century learning. To illustrate, a “Then and Now” reflection below shows two ways assessment practices can shift in higher education:
1. From using predominantly selected-response methods, toward implementing performance-based and personal communication methods that are better aligned with and reflective of course learning standards.
2. From students as passive participants in the assessment process, toward students as active users of assessment as a learning opportunity.
Then and Now
I used to stick to common types of assessment instruments used in higher education. Although learning can be experienced and demonstrated in multiple ways, I was hesitant to take pedagogical risks in the early years of teaching in higher education. Looking back, my lack of assessment literacy and my preconceived assumptions of what assessment practices should look and sound like in higher education were barriers to effective teaching and student learning. Although course learning standards were present in syllabi, I used to plan activities and assessment tasks before identifying priority standards and clarifying proficiency. Wiggins & McTighe (2011) call this the “Twin Sins” of traditional planning.
Then I learned that knowing clearly what is being assessed and choosing the optimal method depend foremost on the kinds of learning being assessed (Chappuis & Stiggins, 2019). In the planning and development stages of ADM and SBL, priority course learning standards are identified, and the underpinning learning targets are clarified with and for students (Chappuis & Stiggins, 2019; Schimmer et al., 2018). Unpacking learning standards and clarifying proficiency allows instructors to thoughtfully consider how they will summatively and formatively assess student learning (Schimmer et al., 2018; White, 2017). This process helps me select appropriate assessment methods and design assessment instruments aligned with proficiency in the learning standards. Before any assessment instruments are used, I consider potential bias and barriers and critique the overall assessment for quality (Chappuis & Stiggins, 2019). As a result of intentional planning and sound development, I am able to gather information – evidence of learning – and make formative and summative decisions based on interpretations of student learning with greater validity. Chappuis & Stiggins (2019) suggest that if there is no accuracy, there is no way to know whether there has been a gain in knowledge, ability, or understanding.
Now I realize that course learning standards are cognitively complex. Students critically analyze, synthesize, make judgments, gain empathy and self-knowledge, transfer, co-create, and apply course learning in meaningful and transformative ways. These aims reflect the 21st century learning goals of higher education. Wiggins & McTighe (2011) point out that knowing facts in order to recall them is superficial learning that can be quickly forgotten, whereas the ability to connect facts and create meaning is deeper learning, or enduring understanding. In my current practice, assessment methods and instruments are designed for students to demonstrate higher-order thinking and meaning-making (Pak Lui & Skelding, 2021). Students continue to demonstrate their reasoning and creative abilities through written expression. They also engage in free inquiry, which gives them the opportunity to pursue course-related questions of deep personal interest. In a free inquiry, instead of my prescribing what the authentic piece should be, students choose their creative mediums and share their learning publicly (MacKenzie, 2016).
Here are a couple of examples from my practice, as recounted in a recent book chapter (Pak Lui & Skelding, 2021):
A former student investigated how to destigmatize mental health in education and had the bravery to include their own mental health journey in their authentic piece. They shared a raw and honest four-stanza poem and accompanied it with related, thought-provoking images in the form of a photo essay. There was not a dry eye in the classroom; the community of learners was drawn into their peer’s learning at an intellectual and emotional level. Another example is a student who inquired into the standardization of assessment in education. They, too, combined their research findings with their own educational experiences of high-stakes assessment by writing and performing a rap. The lyrics, rhythm, and physical expression of the rap illustrated their key inferences and the implications, underscoring the urgent need for assessment reform in education.
What I noticed as a result of using assessment methods that are a good match for cognitively complex learning standards (such as written response, performance assessment, or personal communication) was an increased ability, as an instructor, to interpret evidence of learning. I have greater confidence that the inferences I make accurately reflect achievement of the intended learning. Additionally, increasing the value and use of formative assessment practices shifted students from passive participants in the learning process to active users of assessment results as a learning opportunity. Students regularly receive feedback, and they are given time to act on it. Moreover, their involvement as self-assessors of their own learning leads to greater awareness of strengths and areas for improvement and growth before evaluation. As my own assessment practices shift and evolve, I notice teaching and learning becoming a genuine partnership. Students and I are able to develop relational trust, and we are more confident in taking pedagogical risks together (Pak Lui & Skelding, 2021). It is my hope that students see that my assessment practices have clear purpose, align to course learning standards, and provide the support necessary to move their learning forward. According to White (2019), “without continuous formative assessment built into the classroom, creativity would suffer, risk-taking would lack purpose, and products students create would be meaningless” (p. 33).
Concluding Thoughts
COVID-19 provides an opportunity for many university instructors to re-examine both pedagogy and assessment practices in higher education. As we look forward to establishing a new normal, research-based shifts in assessment practices can be a way for 21st century learners to experience a high-quality education.
Nina Pak Lui is an Assistant Professor of Education at Trinity Western University in Langley, British Columbia. She studies and teaches curriculum design and assessment for learning. In 2020, she won the Provost Teaching and Innovation Award. You can find her on Twitter @npaklui.
Colin Madland is a PhD candidate in Educational Technology at the University of Victoria in British Columbia where he is studying approaches to assessment in higher education. You can find him at https://cmad.land and on Twitter @colinmadland.
Chappuis, J. & Stiggins, R. (2019). Classroom assessment for student learning: Doing it right – Using it well (3rd ed.). Pearson Education.
Lipnevich, A. A., Guskey, T. R., Murano, D. M., & Smith, J. K. (2020). What do grades mean? Variation in grading criteria in American college and university courses. Assessment in Education: Principles, Policy & Practice, 27(5), 480–500. https://doi.org/10/ghjw3k
Massey, K. D., DeLuca, C., & LaPointe-McEwan, D. (2020). Assessment literacy in college teaching: Empirical evidence on the role and effectiveness of a faculty training course. To Improve the Academy, 39(1). https://doi.org/10/gj5ngz
MacKenzie, T. (2016). Dive into inquiry: Amplify learning and empower student voice. EdTechTeam Press.
Wiggins, G., & McTighe, J. (2011). The understanding by design guide to creating high-quality units. Association for Supervision and Curriculum Development.
Pak Lui, N., & Skelding, J. (2021). An emergent course design framework for imaginative pedagogy and assessment in higher education. In J. Cummings & I. Fayed (Eds.), Teaching in the post COVID-19 era (in press). Springer Publishing.
Pellegrino, J. W., Chudowsky, N., & Glaser, R. (2001). Knowing what students know: The science and design of educational assessment. National Academies Press. https://doi.org/10.17226/10019
Schimmer, T., Hillman, T., & Stalets, M. (2018). Standards-based learning in action: Moving from theory to practice. Solution Tree Press.
Shepard, L. A. (2000). The role of assessment in a learning culture. Educational Researcher, 29(7), 4–14. https://doi.org/10/cw9jwc
White, K. (2017). Softening the edges: Assessment practices that honor K-12 teachers and learners. Solution Tree Press.
White, K. (2019). Unlocked: Assessment as the key to everyday creativity in the classroom. Solution Tree Press.
Uncovering Implicit Bias in Assessment, Feedback and Grading
Education is a noble profession. It is a profession that aims to cultivate diverse thinkers and aspires to nurture personal growth. It is a profession that can lift humanity’s spirits and help humankind strive to be the best version of itself—the “great equalizer of the conditions of men,” as Horace Mann famously stated in the 19th century. However, even with great people, a worthy goal, and an admirable vision, the opposite can often be the case.
Unfortunately, education can also be the great unequalizer, where personal biases can inform practice and policy development, stifle student growth, enforce discriminatory policies, and even socially isolate students. According to some researchers, implicit educator biases may contribute to a racial achievement gap; specifically, teacher assumptions about students’ ability based on race, culture, or values can have a negative impact (van den Bergh et al. 2010). Without an educator’s awareness, these personal biases may create an image of the “ideal student,” one often seen through a lens of white privilege because of society’s tendency toward whiteness, distorting our interactions with students of color.
Without social awareness and continuous self-monitoring, educators may let their implicit biases become an influential factor in their pedagogy, influencing everything from assessment to grading. This post will discuss how personal biases can appear in our teaching and learning practices if educators are not diligent. I will focus on three of the most consequential areas of teaching and learning: assessment, feedback, and grading.
Assessment Bias
Without attention, teachers may create assessments that reflect their own values and experiences and ignore those of their students. A teacher may use language they are more familiar with in their questions or create prompts influenced by their personal experiences. Ultimately, students may find it hard to relate to the questions, potentially leaving some unable to perform to the best of their abilities. For assessments to be less biased, a teacher must consider all backgrounds, ethnicities, genders, and identities to ensure that their own lived experience isn’t the only one represented on an assessment.
By being more introspective when developing assessments, teachers can lessen the chance of producing an assessment that feels personally irrelevant to their students. The impact of this irrelevance could be low student performance, disinterest in the task, and even apathy toward school.
To become more aware of their biases when creating assessments, teachers can ask themselves the following questions as they develop items:
- Does the assessment give the sense that the teacher offers unwavering support and is a partner in the student’s success?
- Are the questions seeking to understand the student or judge them?
- Does the teacher draw on the students’ life situations, interests, and curiosities when creating problems/prompts?
(Adapted from Tomlinson, 2015)
Microaggressions in Feedback
If we ignore our implicit biases when speaking with students, we run the risk of putting both parties into what social psychologist Albert Bandura calls “a downward course of mutual discouragement” (Bandura 1997, 234). A student’s reaction to deficit-based feedback may prompt the teacher to react in kind. Once this cycle starts, the student’s self-belief is at risk.
Microaggressions and subtle forms of discrimination can exist in the feedback process, and when they do, they may severely limit feedback acceptance.
Teachers can limit bias in their feedback by using the student’s thinking to grow the student’s learning. Let’s look at the following examples:
Example A
Feedback that Uses Teacher Thinking: You didn’t include [these details] about [person] in your essay. Try [these words].
Feedback that Uses Student Thinking: Tell me more about these words [here]. I am interested to know why you think [this word] didn’t work instead. Oh, okay, that would work. You should add what you just said to your paragraph; it was perfect.
Example B
Feedback that Uses Teacher Thinking: When I write, I try to think about [detail]. Remember when I taught you the three-step process? No? The one that is in your textbook? That’s the most effective process.
Feedback that Uses Student Thinking: What did you think about when you wrote [this]? Seems like that interests you? Yeah, I can see you are passionate about [that]. What are the first three things you did when you wrote [this]? That is an interesting place to start. Could I convince you to start here? No? Okay, that makes sense. We have to make this work for you.
Example C
Feedback that Uses Teacher Thinking: In this graph, I would start [here] because this information is important. Has anyone heard of the [rhyme name] to remember the key features of a graph? No? Oh, this helped me a lot.
Feedback that Uses Student Thinking: In this graph, what information did you think was essential for you to begin this problem? I’m surprised to hear you say that, because yesterday you said something different. What changed? Interesting, I saw you smile as you were talking. Why? Yeah, I agree you are getting this concept more. Are you using any strategies to help you learn this? Yes. Well, [that strategy] is undoubtedly helping you.
These scenarios are fictional, but the point is that teachers should always be aware of their language. Otherwise, they can inadvertently make a student feel unequal and devalued in the class and even the school. In short, words matter.
Race Bias in Grading Practices
Teachers must judge student performance fairly and accurately; it is our professional duty. Inaccurate judgments have the potential not only to alter grades but also to negatively affect teacher-student relationships, distort a student’s self-concept, or reduce opportunities to learn (Cohen and Steele 2002). One factor that can lead to a misrepresented grade is teacher bias around race and ethnicity. A student’s racial or ethnic group, socioeconomic class, or gender can substantially bias a teacher’s judgment of student performance. Internalized racial biases can activate stereotypes and lead teachers to produce discriminatory performance evaluations (Wood and Graham 2010).
For example, several studies found substantial differences in teachers’ judgments of the performance of students from various racial subgroups when the teacher subconsciously subscribed to the general stereotype that African American and Latino students don’t perform as well as their White and Asian counterparts (Ready and Wright 2011).
Different minority statuses can affect teacher perceptions in performance evaluation, leading to inaccurate grades and potentially harming students’ perception of their academic experience (Ogbu and Simons 1998). In other words, students may come to feel that school is insignificant, unsupportive, or even harmful.
To help lessen the likelihood that implicit personal bias influences grading, teachers can democratize the grading process. They can use evidence of learning instead of points and employ a modal interpretation of gradebook scores (the most frequent level of performance) instead of averages. They can use a skills-focused curriculum instead of a content-focused one. Perhaps most important, they can involve students in the grading process by infusing more self-evaluation moments into their instruction.
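To make the contrast between an averaged and a modal interpretation concrete, consider a hypothetical set of scores for one student on a single standard over a unit (these numbers are purely illustrative, not drawn from any study cited here):

Scores across the unit, on a 4-point scale: 1, 1, 2, 3, 4, 4, 4
Average (mean): (1 + 1 + 2 + 3 + 4 + 4 + 4) ÷ 7 ≈ 2.7, dragged down by the earliest attempts
Mode (most frequent score): 4, the level the student now demonstrates consistently

An averaged grade penalizes early struggle, while a modal (or most-recent-evidence) interpretation better reflects where the learner has actually landed.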
School leaders should explore ways to evaluate pedagogical practices through a racial equity lens and observe classroom interactions between teachers and students. School leaders should also continue training on white privilege and its influence over the status quo, and teachers should examine their evaluations of student performance to judge their fairness and accuracy.
The Work Ahead
Although we may feel like we are objective and rational people, we all have biases. We all have values, beliefs, and assumptions that help us make sense of what is happening in our lives and guide our interactions with others. For the most part, these values, beliefs, and assumptions guide us in positive and productive directions, but the interplay between these same values and social interactions can produce implicit biases that distort our decisions, perspectives, and actions. We must notice, monitor, and manage these distortions to achieve a goal of racial equity in school and life. If we don’t, we are at risk of our unconscious biases harming our pedagogy, our relationships with students, and our perception of their needs.
Bandura, A. (1997). Self-efficacy: The exercise of control. New York, NY: W. H. Freeman & Co, Publishers.
Cohen, G. L., & Steele, C. M. (2002). A barrier of mistrust: How negative stereotypes affect cross-race mentoring. In J. Aronson (Ed.), Improving academic achievement: Impact of psychological factors on education (pp. 303–327). San Diego, CA: Academic.
Ogbu, J. U., & Simons, H. D. (1998). Voluntary and involuntary minorities: A cultural-ecological theory of school performance with some implications for education. Anthropology and Education Quarterly, 29(2), 155-188.
Ready, D. D., & Wright, D. L. (2011). Accuracy and inaccuracy in teachers’ perceptions of young children’s cognitive abilities: The role of child background and classroom context. American Educational Research Journal. https://doi.org/10.3102/0002831210374874
Tenenbaum, H. R., & Ruck, M. D. (2007). Are teachers’ expectations different for racial minorities than for European American students? A meta-analysis. Journal of Educational Psychology, 99(2), 253–273.
Tomlinson, C. A. (2015). “Teaching up”: Teaching for excellence in academically diverse classrooms.
van den Bergh, L., Denessen, E., Hornstra, L., Voeten, M., & Holland, R. W. (2010). The implicit prejudiced attitudes of teachers: Relations to teacher expectations and the ethnic achievement gap. American Educational Research Journal, 47, 497–527.
Wood, D., & Graham, S. (2010). “Why race matters: social context and achievement motivation in African American youth.” In Urdan, T. and Karabenick, S. (Eds.) The Decade Ahead: Applications and Contexts of Motivation and Achievement (Advances in Motivation and Achievement, Vol. 16 Part B) (pp. 175-209). Emerald Group Publishing Limited, Bingley.
Data as a Flashlight: Using the Evidence to Guide the Journey (Yours and Theirs)
My granddaughter was struggling with the latest topic in her grade 3 math class, and her recent assessment result confirmed that she did not fully understand the learning target: patterns and the equations that support them. Determining patterns is not always an easy process, as this example indicates:
2, 6, 3, 9, 6, 18, 15…
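(The post leaves the rule unstated; one consistent way to read the sequence, offered here only as an illustration, is to alternate between multiplying by 3 and subtracting 3: 2 × 3 = 6, 6 − 3 = 3, 3 × 3 = 9, 9 − 3 = 6, 6 × 3 = 18, 18 − 3 = 15, which would make the next term 15 × 3 = 45. Even adults have to hunt for that.)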
With a big test coming up, my daughter-in-law reached out to me to help get my granddaughter past the block and gain some confidence in her ability to master the concept. We connected online a few times over the days before the test and worked through a lot of questions and strategies. As she grasped the concepts and different ways to get to the solution, I could see her confidence soar. By the time we had concluded all of the practice, she was routinely getting every solution and was excited to demonstrate her skills on the assessment. As I write this post, it has been well over a week since the assessment was completed, and my granddaughter has not received any feedback.
Learn from Assessments?
Consider an assessment you or your collaborative team recently gave in a grade level or course.
- What did you do with the results?
- What did students do with the results?
- How did the students and the teacher(s) learn from the evidence of student learning?
Strength-Based Assessment Practices Increase Achievement and Confidence
Assessment that provides information on students’ learning strengths builds confidence and increases achievement.
Too often, students get feedback only on what they are doing wrong or on their deficits. Assessment, at its best, provides students with information about their strengths. When learners gain insight into what they know and can do, it builds their confidence. Strength-based feedback signals to students that you see their potential and that you believe in them.
Just a Little Push: Five Ideas for Intervention and Feedback
When you are unsure, not feeling confident, and scared no one will like you, it can be hard to get started. My son, Chase, is a fifth grader, and we signed him up for a basketball camp. He went alone and didn’t know anyone. Predictably, he was very nervous.
Teaching Better, One Assessment at a Time
Have you ever made soup and had it end up too salty? Or realized it needed more flavor? Or, somehow, even though you followed the recipe to a tee, it just didn’t quite turn out like you had hoped?