“Teacher collaboration in strong professional learning communities improves the quality and equity of student learning, promotes discussions that are grounded in evidence and analysis rather than opinion, and fosters collective responsibility for student success.”
McLaughlin & Talbert (2006)
It’s one thing to administer a common assessment. It’s quite another to sit down with your colleagues and discuss the results. Fear can play a powerful role in our ability to engage in meaningful, enriching dialogue about the level at which students are learning. Although educators have been clear about what they want students to know and be able to do, and about the level at which they want them to do it, fear of what the evidence will reveal is a real concern for many teachers and teams. Rather than mobilizing a team in the spirit of collective responsibility for all students, this practice—without the right layers of support—can quickly keep individuals or teams from exhibiting a productive mindset. Stomachs may turn as teachers consider the potential outcomes and reflect on their individual contributions to the overall success of the team: How did my students perform? How did other students perform? Will I have the best scores—or the worst scores—on the team? What will my teammates think? What will we do with our results? Will my principal see them? The anxiety that a data conversation causes some team members may be strong enough to keep the conversation from ever happening in the first place.
Here are two fundamental assumptions for our conversation today:
- Always remember that there are faces—both student AND adult—behind the numbers and data being discussed.
- Which data educators talk about is far less significant in building trust and fostering healthy collaborative relationships than how they discuss data and why they are discussing it.
However, discussing evidence of student learning isn’t as simple as one team member announcing, “Okay, everyone, let’s sit down and look at some data!” Graham and Ferriter (2010), Wellman and Lipton (2004), Erkens (2016), and Bailey and Jakicic (2011), among others, have introduced structured data-conversation protocols to help teams remove emotion from the discussion and establish a genuine focus on what the information is telling them about what students are learning.
No matter which process you use, let’s briefly consider five “lenses” for approaching data dialogues (more specific details, tools, and templates will be available for each of the steps listed):
- Choosing the right frame. Teams who are preparing to discuss evidence will want to honor the intentional planning that has created an opportunity to reflect upon student performance. Teams can quickly see how many students met the expected level of performance, how many are approaching that level of performance, and how many are really struggling to demonstrate an understanding of the skills and concepts being monitored. But now, kick that practice up a notch. Many teams have dynamic and passionate conversations about what students know and do not know, yet the actual pieces of student work are missing from the conversation! Using student work as evidence allows more thorough error analysis to identify misconceptions and recognize patterns in student thinking. Commit to reviewing both individual evidence and collective team data prior to the meeting and generate both observations and questions about student performance. With student work under review, teachers can meaningfully contribute to future team discussions.
- Zoom In. Now comes the fun part! It’s time to analyze the data. Teachers can collaboratively calculate their students’ collective performance, using highlighters or electronic formatting tools to document the results. Some teams enter their data into a common spreadsheet prior to the meeting for ease of organization and display. No matter your team’s preference, I strongly advocate calculating both the percentage and the actual number of students who performed at each level. Depending on how many students are in the group, a percentage alone could quickly misrepresent the amount of corrective instruction needed or, conversely, inflate the level of proficiency demonstrated on this assessment of learning. This practice becomes even more valuable when looking at levels of learning by student demographics such as gender or race. Knowing exactly how many student faces are represented in that data—and not just a percentage—keeps the focus right where it should be.
- Zoom Out. After reflecting as individuals on the student evidence and collaboratively organizing the data to highlight patterns and trends in student learning, look back at the observations and questions you prepared prior to the meeting. Add any new observations from your data review, as well as any new questions you wonder about. This process brings each team member’s voice to the conversation yet honors collective thinking, as the individual ideas are then clustered together to generate common themes and common wonderings. Teachers can begin crafting summary statements to represent their common ideas. Knowing that not all statements are created equal, work together to assign each statement a level of importance as well as a level of satisfaction.
- For example, a common observation may have been “55% of all 8th grade females are proficient as compared to 86% of 8th grade males.” The team discusses this observation and agrees that they are not satisfied with this statement about their student performance. They assign it a level 1 relative to their satisfaction. They also believe this gender gap to be of great significance to their team, so they assign the level of importance as a 4. A statement such as this, with low satisfaction and high importance, may emerge as a priority in the team’s next instructional steps around this standard.
- Panoramic. Your team analysis has now revealed some points of pride as well as some opportunities for improvement. Continuing with the assertion that not all parts of data analysis have equal weight or value relative to a team’s next steps, take time to place each statement into one of three categories: sustain, monitor, or improve. Evaluating all the evidence that surrounds you will assist with future strategic planning and eliminate initiative overload. The focus becomes clear via these three categories:
- Statements in the sustain category represent points of pride that teams should celebrate and replicate. This level of success should continue to be attained by students.
- Statements in the monitor category represent areas that teams have been working on. These statements may reflect areas of team focus, team goals, or components the team has collaboratively discussed and is actively seeking new learning about. With some simple adjustments to instructional delivery, to the alignment of learning activities with assessment tasks, or to the curricular resources used, teams could achieve even better results.
- Statements in the improve category indicate areas where the data isn’t where it needs to be. These statements suggest more than a “tweak,” and will likely require teachers to do things differently with their assessments or instructional delivery model in order to realize the desired results.
- View from Above. Building trust and healthy relationships among team members fosters the courage to discuss results in a way that promotes reflection and leads to action. Consider the following questions to promote transparency in future collaboration and in the implementation of effective practices:
- Which student groups did we move to mastery? Which student groups do we still need to move to mastery?
- How will we organize our corrective instruction for those students who continue to demonstrate difficulty? Which member(s) of the team is best suited to provide that instruction?
- How will we organize our instruction for those students who are ready to advance beyond mastery? Which member(s) of the team is best suited to provide that instruction?
- What additional information or resources do we need now as a result of our evidence-based discussion?
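The quantitative side of the Zoom In, Zoom Out, and Panoramic steps can be sketched in a few lines of code. This is a hypothetical illustration only: the student numbers, the satisfaction/importance ratings, and the proficiency cut-offs below are invented for the example, not drawn from any of the protocols cited above.

```python
# Hypothetical sketch of the quantitative side of a data conversation.
# All numbers, statement texts, and thresholds are illustrative only.

def counts_and_percents(level_counts):
    """Zoom In: report BOTH the raw count and the percentage per level,
    since a percentage alone can misrepresent a small group."""
    total = sum(level_counts.values())
    return {level: (n, round(100 * n / total, 1))
            for level, n in level_counts.items()}

def prioritize(statements):
    """Zoom Out: sort summary statements so that low-satisfaction,
    high-importance statements surface first as priorities."""
    return sorted(statements,
                  key=lambda s: (s["satisfaction"], -s["importance"]))

def categorize(percent_proficient, target=80.0):
    """Panoramic: a rough rule of thumb for sustain / monitor / improve.
    The cut-offs are invented for illustration."""
    if percent_proficient >= target:
        return "sustain"    # celebrate and replicate
    if percent_proficient >= target - 15:
        return "monitor"    # small instructional adjustments may suffice
    return "improve"        # likely requires doing things differently

# A team of 9 students: "33% not yet proficient" sounds large,
# but it represents only 3 faces needing corrective instruction.
print(counts_and_percents({"proficient": 4, "approaching": 2, "not_yet": 3}))

# The gender-gap observation from the example: satisfaction 1, importance 4.
statements = [
    {"text": "55% of 8th grade females proficient vs. 86% of males",
     "satisfaction": 1, "importance": 4},
    {"text": "90% of all students mastered the unit vocabulary",
     "satisfaction": 4, "importance": 2},
]
print(prioritize(statements)[0]["text"])  # the gender gap emerges first

print(categorize(86), categorize(70), categorize(55))
```

Keeping counts alongside percentages, and sorting by the two ratings rather than by gut feel, mirrors the intent of the steps above: the numbers stay attached to actual student faces, and priorities emerge from shared criteria rather than from whoever speaks loudest.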
The strength of the relationships on your team, and the trust you have in one another, will shape that very first data conversation. Even the strongest teams will struggle. To build a collective understanding of students’ current level of performance, teachers must leverage those relationships to separate individual emotions from the team’s focus on the results of their professional practice. When this evidence reveals what students have been able to do as a result of instruction, teachers can better determine their collective effectiveness and begin to evaluate individual performance relative to the team’s success.
Bailey, K., & Jakicic, C. (2011). Common Formative Assessment: A Toolkit for Professional Learning Communities at Work™. Bloomington, IN: Solution Tree Press.
Erkens, C. (2016). Collaborative Common Assessments. Bloomington, IN: Solution Tree Press.
Graham, P., & Ferriter, B. (2010). Building a Professional Learning Community at Work™: A Guide to the First Year. Bloomington, IN: Solution Tree Press.
McLaughlin, M., & Talbert, J. (2006). Building school-based teacher learning communities: Professional strategies to improve student achievement. New York: Teachers College Press.
Wellman, B., & Lipton, L. (2004). Data-driven dialogue: A facilitator’s guide to collaborative inquiry. Sherman, CT: Mira Via.