Pick a Quote

It seems to me that many schools and districts are asking questions about assessment in mathematics.  So, I thought I would share a few quotes that might get you to think and reflect on your views about what it means to assess, why there might be a focus on assessment, and what our goals and ideals might look like.  I want you to take a look at the following quotes.  Pick 1 or 2 that stand out to you:

[Slideshow: 19 slides of quotes about assessment]


A few things to reflect on as you think about the quotes above:

  • Which quotes caught your eye?  Did you pick one(s) that confirm things you already believe or perhaps ones that you hadn’t spent much time thinking about before?
  • Some of the above quotes speak to “assessment” while others speak to evaluation practices.  Do you know the difference?
  • Take a look again at the list of quotes and find one that challenges your thinking.  I’ve probably written about the topic somewhere.  Take a look in the Links to read more about that topic.
  • Why do you think so many of us focus on assessment in mathematics?  Maybe Linda Gojak’s article Are We Obsessed with Assessment? might provide some ideas.
  • Instead of talking in generalities about topics like assessment, maybe we need to start thinking about better questions to ask, or thinking deeper about what is mathematically important, or understanding how mathematics develops!

Please pick a quote that stands out for you and share your thoughts about it.

Leave a reply here or on Twitter (@MarkChubb3)

 

Unintended Messages

I read an interesting article by Yong Zhao the other day entitled What Works Can Hurt: Side Effects in Education, where he discusses a simple reality that exists in schools and districts everywhere. Basically, he gives the analogy of education being like the field of medicine (yes, I know this is an overused comparison, but let’s go with it for a minute).  Yong paints the picture of how careful drug companies have become in warning “customers” of both the benefits of using a specific drug and the potential side effects that might result from its use.

However, Yong continues to explain that the general public has not been given the same cautionary messages for any educational decision or program:

“This program helps improve your students’ reading scores, but it may make them hate reading forever.” No such information is given to teachers or school principals.
“This practice can help your children become a better student, but it may make her less creative.” No parent has been given information about effects and side effects of practices in schools.

Simply put, in education we tend to discuss the benefits of any program or practice without thinking through how it might affect our students’ well-being in other areas.  The issue here might come as a direct result of teachers, schools and systems narrowing their focus to measure results without considering what is being measured and why, what is not being measured and why, and what the short- and long-term effects of this focus might be!


Let’s explore a few possible scenarios:

Practice:

In order to help students see the developmental nature of mathematical ideas, some teachers organize the discussion of a problem by sharing the simplest ideas first, then moving toward more and more complicated samples.  The idea here is that students with simple or less efficient ideas can make connections with the other ideas that follow.

Unintended Side Effects:

Some students in this class might come to notice that their ideas or thinking are always called upon first, or always used as the model for others to learn from.  Either situation might lead that child to believe that they are, or are not, a “math person”.  Patterns in our decisions can lead students into the false belief that we value some students’ ideas over the rest.  We need to tailor our decisions and feedback based on what is important mathematically, and based on the students’ personal needs.


Practice:

In order to meet the needs of a variety of students, teachers / schools / districts organize students by ability.  This can look like streaming (tracking), setting (regrouping students for a specific subject), or within-class ability grouping.

Unintended Side Effects:

A focus on sorting students by their potential moves the focus from helping our students learn to determining whether they are in the right group.  It can become easy as an educator to notice a student who is struggling and assume the issue is that they are not in the right group, instead of focusing on a variety of learning opportunities that will help all students be successful.  If the focus remains on making sure students are grouped properly, it becomes much more difficult for us to learn and develop new techniques!  For our students, being sorted can either motivate them or dissuade them from believing they are capable!  Basically, sorting students leads both educators and students to develop fixed mindsets.  Instead of sorting students, understanding what differentiated instruction can look like in a mixed-ability class can help us move all of our students forward, while helping everyone develop a healthy relationship with mathematics.


Practice:

A common practice for some teachers involves working with small groups of students at a time with targeted needs.  Many see that this practice can help their students gain more confidence in specific areas of need.

Unintended Side Effects:

Sitting and working with small groups as a regular practice means that the teacher is not present during the learning that happens with the rest of the students.  Some students can become over-reliant on the teacher in this scenario and tend not to work as diligently when they are not directly supervised.  If we want patient problem solvers, we need to provide our students with more opportunities to figure things out for themselves.


Practice:

Some teachers teach through direct instruction (standing in front of the class, or via slideshow notes, or videos) as their regular means of helping students learn new material.  Many realize it is quicker and easier for a teacher to just tell their students something.

Unintended Side Effects:

Students come to see mathematics as a subject where memory and rules are what is valued and what is needed.  When confronted with novel problems, students are far less likely to find an entry point or to make sense of the problem, because their teacher hasn’t told them how to do it yet.  These students are also far more likely to rely on memory instead of using mathematical reasoning or sense-making strategies.  While direct instruction might be an easier and quicker way for students to learn something, it is also more likely that these students will forget it.  If we want our students to develop deep understanding of the material, we need to provide experiences where they will make sense of the material.  They need to construct their understanding through thinking and reasoning, and by making mistakes followed by more thinking and reasoning.


Practice:

Many “diagnostic” assessment resources help us understand why students who are really struggling to access the mathematics are having issues.  They are designed to help us know specifically where a student is struggling, and hopefully they offer next steps for teachers to use.  However, many teachers use these resources with their whole group – even with those who might not be struggling.  The belief here is that we should attempt to find needs for everyone.

Unintended Side Effects:

When the intention of teachers is to find students’ weaknesses, we start to look at our students from a deficit model.  We start to see “Gaps” in understanding instead of partial understandings.  Teachers start to see themselves as the person helping to “fix” students, instead of providing experiences that will help build students’ understandings.  Students also come to see the subject as one where “mastering” a concept is a short-term goal, instead of the goal being mathematical reasoning and deep understanding of the concepts.  Instead of starting with what our students CAN’T do and DON’T know, we might want to start by providing our students with experiences where they can reason and think and learn through problem solving situations.  Here we can create situations where students learn WITH and FROM each other through rich tasks and problems.


Our Decisions:

Yong Zhao’s article – What Works Can Hurt: Side Effects in Education – is titled really well.  The problem is that some of the practices and programs that prove to have great results in specific areas might actually be harmful in other ways.  Because of this, I believe we need to consider the benefits, limitations and unintended messages of any product and of any practice… especially if it is a school or system focus.

As a school or a system, this means that we need to be really thoughtful about what we are measuring and why.  Whatever we measure, we need to understand how much weight it has in telling us and our students what we are focused on, and what we value.  As the saying goes, we measure what we value, and we value what we measure.  For instance:

  • If we measure fact retrieval, what are the unintended side effects?  What does this tell our students math is all about?  Who does this tell us math is for?
  • If we measure via multiple choice or fill-in-the-blank questions as a common practice, what are the unintended side effects?  What does this tell our students math is all about?  How reliable is this information?
  • If we measure items from last year’s standards (expectations), what are the unintended side effects?  Will we spend our classroom time giving experiences from prior grades, or helping to build our students’ understanding of current topics?
  • If we only value standardized measurements, what are the unintended side effects?  Will we see classrooms where development of mathematics is the focus, or “answer getting” strategies?  What will our students think we value?

Some things to reflect on
  • Think about what it is like to be a student in your class for a moment.  What is it like to learn mathematics every day?  Would you want to learn mathematics in your class every day?  What would your students say you value?
  • Think about the students in front of you for a minute.  Who is good at math?  What makes you believe they are good at math?  How are we building up those that don’t see themselves as mathematicians?
  • Consider what your school and your district ask you to measure.  Which of the 5 strands of mathematical proficiency do these measurements focus on?  Which ones have been given less attention?  How can we help make sure we are not narrowing our focus and excluding some of the things that really matter?


As always, I encourage you to leave a message here or on Twitter (@markchubb3)!

Which one has a bigger area?

Many grade 3 teachers in my district, after taking part in some professional development recently (provided by @teatherboard), have tried the same task relating to area.  I’d like to share the task with you and discuss some generalities we can consider for any topic in any grade.


The task:

As an introductory activity to area, students were provided with two images and asked which of the two shapes had the larger area.

[Image: the two figures students were asked to compare]

A variety of tools and manipulatives were handy, as always, for students to use to help them make sense of the problem.


Student ideas

Given very little direction and lots of time to think about how to solve this problem, we saw a wide range of student thinking.  Take a look at a few:

Some students used circles to help them find area.  What does this say about what they understand?  What issues do you see with this approach though?

Some students used shapes to cover the outline of each shape (perimeter).  Will they be able to find the shape with the greater area?  Is this strategy always / sometimes / never going to work?  What does this strategy say about what they understand?

[Image: Example 3]

Some students used identical shapes to cover the inside of each figure.

And some students used different shapes to cover the figures.

[Image: Example 7]
[Image: Example 8]
[Image: Example 9 – Cuisenaire rods, showing the difference]

Notice that example 9 here includes different units in each figure, but the student has reorganized them underneath to show the difference (can you tell which line represents which figure?).


Building Meaningful Conversations

Each of the samples above shows the thinking, reasoning and understanding that the students brought to our math class.  They were given a very difficult task and were asked to use their reasoning skills to find an answer and prove it.  In the end, students were split over which figure had the greater area (some believing they were equal, many believing that one of the two was larger), and they arrived at very different numerical answers as to how much larger or smaller the figures were.  These discrepancies set the stage for a powerful learning opportunity!

For example, asking questions that get at the big ideas of measurement is now possible because of this problem:

“How is it possible some of us believe the left figure has a larger area and some of us believe that the right figure is larger?”

“Has example 8 (scroll up to take a closer look) proven that they both have the same area?”

“Why did example 9 use two pictures?  It looks like many of the Cuisenaire rods are missing in the second picture.  What do you think they did here?”

In the end, the conversations should bring about important information for us to understand:

  • We need comparable units if we are to compare 2 or more figures together.  This could mean using same-sized units (like examples 1, 4, 5 & 6 above), or corresponding units (like example 8 above), or units that can be reorganized and appropriately compared (like example 9).
  • If we want to determine the area numerically, we need to use the same-sized piece exclusively.
  • The smaller the unit we use, the more of them we will need to use.
  • It is difficult to find the exact area of figures with rounded parts using the tools we have.  So, our measurements are not precise.
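
To make the first and third points above concrete, here is a small worked example (the numbers here are my own, for illustration, not taken from the student samples): if each large square tile covers the same space as 4 small square tiles, then a figure covered by 6 large tiles has an area of

$$6 \text{ large tiles} \times 4\ \tfrac{\text{small tiles}}{\text{large tile}} = 24 \text{ small tiles}$$

So a count of 24 small tiles and a count of 6 large tiles describe exactly the same area; the raw counts only become comparable once both figures are measured with the same unit.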

Some generalizations we can make here to help us with any topic in any grade

When our students are being introduced to a new topic, it is always beneficial to start with their ideas first.  This way we can see the ideas they come to us with and engage in rich discussions during the lesson close that help our students build understanding together.  It is here in the discussions that we can bridge the thinking our students currently have with the thinking needed to understand the concepts we want them to leave with.  In the example above, the students entered this year with many experiences using non-standard measurements, and this year most of their experiences will be with standard measurements.  However, instead of starting to teach this year’s standards right away, we need to help our students make some connections and see the need to learn something new.  Considering what the first few days of any unit look like is essential to making sure our students are adequately prepared to learn something new!  (More on this here: What does day one look like?)

To me, this is what formative assessment should look like in mathematics!  Setting up experiences that will challenge our students, listening to and observing our students as they work and think… all to build conversations that will help our students make sense of the “big ideas” or key understandings in the upcoming lessons.  When we view formative assessment as a way to learn more about our students’ thinking, and as a way to bridge their thinking with where we are going, we tend to see our students through an asset lens (what they DO understand) instead of through a deficit lens (i.e., gaps in understanding… “they can’t…”, “didn’t they learn this last year…?”).  When we see our students through an asset lens, we tend to believe they are capable, and our students see themselves and the subject in a much more positive light!

Let’s take a closer look at the features of this lesson:

  • Little to no instruction was given – we wanted to learn about our students’ thinking, not see if they can follow directions
  • The problem was open enough to have multiple possible strategies and offer multiple possible entry points (low floor – high ceiling)
  • Asking students to prove something opens up many possibilities for rich discussions
  • Students needed to begin by using their reasoning skills, not procedural knowledge…
  • Coming up with a response involved students doing and thinking… but the real learning happened afterward – during the consolidation phase

A belief I have is that the deeper we understand the big ideas behind the math our students are learning, the more likely we will know what experiences our students need first!


A few things to reflect on:

  • How often do you give tasks hoping students will solve them in a specific way?  And how often do you give tasks that allow your students to show you their current thinking?  Which of these approaches do you value?
  • What do your students expect math class to be like on the first few days of a new topic/concept?  Do they expect marks and quizzes?  Or explanations, notes and lessons?  Or problems where students think and share, and eventually come to understand the mathematics deeply through rich discussions?  Is there a disconnect between what you believe is best, and what your students expect?
  • I’ve painted the picture here of formative assessment as a way to help us learn about how our students think – and not about gathering marks, grouping students, filling gaps.  What does formative assessment look like in your classroom?  Are there expectations put on you from others as to what formative assessment should look like?  How might the ideas here agree with or challenge your beliefs or the expectations put upon you?
  • Time is always a concern.  Is there value in building/constructing the learning together as a class, or is covering the curriculum standards good enough?  How might these two differ?  How would you like your students to experience mathematics?

As always, I’d love to hear your thoughts.  Leave a reply here or on Twitter (@MarkChubb3)

Starting where our students are… with THEIR thoughts

A common trend in education is to give students a diagnostic in order for us to know where to start. While I agree we should be starting where our students are, I think this can look very different in each classroom.  Does starting where our students are mean we give a test to determine ability levels, then program based on these differences?  Personally, I don’t think so.

Giving out a test or quiz at the beginning of instruction isn’t the ideal way of learning about our students.  Seeing the product of someone’s thinking often isn’t helpful in seeing HOW that child thinks (read What does “assessment drives instruction” mean to you? for more on this).  Instead, I offer an alternative: starting with a diagnostic task!  Here is an example of a diagnostic task given this week:

[Image: task taken from Van de Walle’s Teaching Student-Centered Mathematics]

This lesson is broken down into 4 parts.  Below are summaries of each:


Part 1 – Tell 1 or 2 interesting things about your shape

Start off in groups of 4.  One student picks up a shape and says something (or 2) interesting about that shape.


Here you will notice how students think about shapes. Will they describe the shape as “looking like a mountain” or “it’s an hourglass” (visualization is level 1 on Van Hiele’s levels of Geometric thought)… or will they describe attributes of that shape (this is level 2 according to Van Hiele)?

As the teacher, we listen to the things our students talk about so we will know how to organize the conversation later.


Part 2 – Pick 2 shapes.  Tell something similar or different about the 2 shapes.

Students randomly pick 2 shapes and either tell the group one thing similar or different about the two shapes. Each person offers their thoughts before 2 new shapes are picked.

Students who might have offered level 1 comments a minute ago will now need to consider thinking about attributes. Again, as the teacher, we listen for the attributes our students understand (i.e., number of sides, right angles, symmetry, number of vertices, number of pairs of parallel sides, angles….), and which attributes our students might be informally describing (i.e., using phrases like “corners”, or using gestures when attempting to describe something they haven’t learned yet).  See chart below for a better description of Van Hiele’s levels:

[Image: Van Hiele’s levels chart, shared by NCTM]

At this time, it is ideal to hold conversations with the whole group about any disagreements that might exist.  For example, the pairs of shapes above created disagreements about number of sides and number of vertices.  When we have disagreements, we need to bring these forward to the group so we can learn together.


Part 3 – Sorting using a “Target Shape”

Pick a “Target Shape”. Think about one of its attributes.  Sort the rest of the shapes based on the target shape.


The 2 groups above sorted their shapes based on different attributes. Can you figure out what their thinking is?  Were there any shapes that they might have disagreed upon?


Part 4 – Secret sort

Here, we want students to be able to think about shapes that share similar attributes (this can potentially lead our students into level 2 type thinking, depending on our sort).  I suggest we provide shapes already sorted for our students, but sorted in a way that no group has already used.  Ideally, this sort is something both in your standards and something you believe your students are ready to think about (based on your observations so far in this lesson).


In this lesson, we have noticed how our students think.  We could assess the level of Geometric thought they are currently using, or the attributes they are comfortable describing, or misconceptions that need to be addressed.  But, this lesson isn’t just about us gathering information, it is also about our students being actively engaged in the learning process!  We are intentionally helping our students make connections, reason and prove, learn/ revisit vocabulary, think deeper about specific attributes…


I’ve shared my thoughts before about what day 1 of any given topic should look like, and about how we can use assessment to drive instruction; however, I wanted to write this post about the specific topic of diagnostics.

In the above example, we listened to our students and used our understanding of our standards and developmental research to know where to start our conversations.  As Van de Walle explains, if formative assessment is to serve its purpose, it needs to be more like a streaming video than just a test at the beginning!

[Image: Van de Walle quote on formative assessment]

If it’s formative, it needs to be ongoing… part of instruction… based on our observations, conversations, and the things students create.  This requires us to start with rich tasks that are open enough to allow everyone an entry point, and for us to have a plan to move forward!

I’m reminded of Phil Daro’s quote:

[Image: Phil Daro quote]

For us to make these shifts, our mindsets also need to shift.  Statements like the following stand in the way of allowing our students to be actively engaged in the learning process, starting with where they currently are:

  • My students aren’t ready for…
  • I need to start with the basics…
  • My students have gaps in their…
  • They don’t know the vocabulary yet…

These thoughts are counterproductive and lead to the Pygmalion effect (teacher beliefs about ability become students’ self-fulfilling prophecies).  When WE decide which students are ready for what tasks, I worry that we might be holding many of our students back!

If we want to know where to start our instruction, start where your students are in their understanding…with their own thoughts!!!!!  When we listen and observe our students first, we will know how to push their thinking!

How do you give feedback?

There seems to be a lot of research telling us how important feedback is to student performance; however, there’s little discussion about how we give this feedback and what the feedback actually looks like in mathematics.  To start with, here are a few important points research makes about feedback:

  • The timing of feedback is really important
  • The recipient of the feedback needs to do more work than the person giving the feedback
  • Students need opportunities to do something with the feedback
  • Feedback is not the same thing as giving advice

I will talk about each of these toward the end of this post.  First, I want to explain a piece about feedback that isn’t mentioned enough: providing students with feedback positions us and our students as learners.  Think about it for a second: when we “mark” things, our attention starts with what students get right, but it moves quickly to trying to spot errors.  Basically, when marking, we are looking for deficits.  On the other hand, when we are giving feedback, we instead look for our students’ actual thinking.  We notice things as almost right, we notice misconceptions or overgeneralizations… then think about how to help our students move forward.  When giving feedback, we are looking for our students’ strengths and readiness.  Asset thinking is FAR more productive, FAR more healthy, FAR more meaningful than grades!


Feedback Doesn’t Just Happen at the End!

Let’s take an example of a lesson involving creating, identifying, and extending linear growing patterns.  This is the 4th day in a series of lessons from a wonderful resource called From Patterns to Algebra.  Today, the students here were asked to create their own design that follows the pattern given to them on their card.

[Image 1] Their pattern card read: Output number = Input number x3+2
[Image 2] Their pattern card read: Output number = Input number x7
[Image 3] Their pattern card read: Output number = Input number x4
[Image 4] Their pattern card read: Output number = Input number x3+1
[Image 5] Their pattern card read: Output number = Input number x8+2
[Image 6] Their pattern card read: Output number = Input number x5+2
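
To make the structure of these cards concrete, here is the arithmetic behind the first rule (my own worked example, not part of the lesson materials):

$$\text{Output} = \text{Input} \times 3 + 2: \qquad 1 \times 3 + 2 = 5, \quad 2 \times 3 + 2 = 8, \quad 3 \times 3 + 2 = 11, \quad \ldots, \quad 10 \times 3 + 2 = 32$$

Each position grows by the multiplier (3 tiles per position), while the “+2” part stays constant in every position.  For the last card above (x5+2), the 10th position the teacher asks about in the close below would need 10 × 5 + 2 = 52 tiles.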

Once students made their designs, they were instructed to place their card upside down on their desk, and to circulate around the room quietly looking at others’ patterns.  Once they believed they knew the “pattern rule” they were allowed to check to see if they were correct by flipping over the card.

After several minutes of quiet thinking, and rotating around the room, the teacher stopped everyone and led the class in a lesson close that involved rich discussions about specific samples around the room.  Here is a brief explanation of this close:

Teacher:  Everyone think of a pattern that was really easy to tell what the pattern rule was.  Everyone point to one.  (Class walks over to the last picture above – picture 6).  What makes this pattern easy for others to recognize the pattern rule?  (Students respond and engage in dialogue about the shapes, colours, orientation, groupings…).

Teacher:  Can anyone tell the class what the 10th position would look like?  Turn to your partner and describe what you would see.  (Students share with neighbor, then with the class)

Teacher:  Think of one of the patterns around the room that might have been more difficult for you to figure out.  Point to one you want to talk about with the class.  (Students point to many different ones around the room.  The class visits several and engages in discussions about each.  Students notice that some patterns are harder to count… some patterns use the right number of tiles but don’t follow a geometric pattern… some patterns don’t reflect the pattern listed on the card.  Each of these noticings is given time to be discussed, in an environment that is about learning, not producing.  Everyone understands that mistakes are part of the learning process here, and everyone is eager to take their new knowledge and apply it.)

The teacher then asks students to go back to their desks and gives each student a new card.  The instructions are similar, except that now she asks students to build their designs in a way that will help others recognize the pattern easily.

The process of creating, walking around the room silently, then discussing happens a second time.

To end the class, the teacher hands out an exit card asking students to articulate why some patterns are easier than others to recognize, and to include examples.


At the beginning of this post I shared 4 points from research about feedback.  I want to briefly talk about each:

The timing of feedback is really important

Feedback is best when it happens during the learning.  While I can see when it would be appropriate for us to collect items and write feedback for students, having the feedback happen in-the-moment is ideal!  That said, Dan Meyer reminds us that instant feedback isn’t ideal either: students need enough time to think about what they did right or wrong… what needs to be corrected.  On the other hand, having students submit items and then giving them back a week later isn’t ideal either!  Having time to think and then receive feedback DURING the learning experience is ideal.  In the example above, feedback happened several times:

  1. As students walked around looking at patterns: after they thought they knew the pattern rule, they peeked at the card.
  2. As students discussed several samples: they were given time to give each other feedback about which patterns made sense… which ones visually represented the numeric value… which patterns could help us predict future visuals/values.
  3. Afterward, once the teacher collected the exit cards.

The recipient of the feedback needs to do more work than the person giving the feedback

Often we as teachers spend too much time writing detailed notes offering pieces of wisdom.  While this is often helpful, it isn’t a feasible thing to do on a daily basis.  In fact, our doing all of the thinking doesn’t equate to students improving!  In the example above, students were expected to notice patterns that made sense to them, and they engaged in conversations about the patterns.  Each student had to recognize how to make their own pattern better because of the conversations.  The work of the feedback belonged, for the most part, to each student.

Students need opportunities to do something with the feedback

Once students receive feedback, they need to use that feedback to continue to improve.  In the above example, the students had an opportunity to create new patterns after the discussions.  After viewing the second creations and the exit cards, the teacher could give verbal or written feedback to those who would benefit from it.


Feedback is not the same thing as giving advice

This last piece is an interesting one.  Feedback, by definition, is about seeing how close you have come to achieving your goal.  It is about what you did, not about what you need to do next.  “I noticed that you have switched the multiplicative and additive pieces in each of your patterns” is feedback.  “I am not sure what the next position would look like because I don’t see a pattern here” is feedback.  “The additive part needs to remain constant in each position” is not feedback… it is advice (or feedforward).

In the example above, the discussions allowed for ample time for feedback to happen.  If students were still struggling, it is appropriate to give direct advice.  But I’m not sure students would have understood any advice, or retained WHY they needed to take advice if we offered it too soon.


So I leave you with some final questions for you:

  • When do your students receive feedback?  How often?
  • Who gives your students their feedback?
  • Is it written?  Or verbal?
  • Which of these do you see as the most practical?  Meaningful for your students?  Productive?
  • How do you make time for feedback?
  • Who is doing the majority of the work… the person giving or the person receiving the feedback?
  • Do your students engage in tasks that allow for multiple opportunities for feedback to happen naturally?

PS.  Did you notice which of the students’ examples above had made an error?  What feedback would you give?  How would they receive this feedback?

 

 

 

Who makes the biggest impact?

A few years ago I had the opportunity to listen to Damian Cooper (an expert on assessment and evaluation here in Ontario).  He shared with us an analogy about the Olympic athletes who had just competed in Sochi, and asked us to think specifically about the Olympic ice skaters…

He asked us who we thought made the biggest difference in the skaters’ careers: the scoring judges or their coaches?


Think about this for a second…  An ice skater trying to become the best at their sport has many influences on their life…  But who makes the biggest difference?  The scoring judges along the way, or their coaches?  Or is it a mix of both???


Damian told us something like this:

The scoring judge tells the skater how well they did… However, the skater already knows if they did well or not.  The scoring judge just CONFIRMS if they did well or not.  In fact, many skaters might be turned off of skating because of low scores!  The scoring judge is about COMPETITION.  Being accurate about the right score is their goal.

On the other hand, the coach’s role is only to help the skater improve. They watch, give feedback, ask them to repeat necessary steps… The coach knows exactly what you are good at, and where you need help. They know what to say when you do well, and how to get you to pick yourself up. Their goal is for you to become the very best you can be!  They want you to succeed!


In the everyday busyness of teaching, I think we often confuse the term “assessment” with “evaluation”.  Evaluating is about marking, levelling, grading… while the word assessment comes from the Latin “assidere”, which means “to sit beside”.  Assessment is about learning about our students’ thinking processes, and seeing how deeply they understand something…  These two things, while related, are very different processes!



I have shared this analogy with a number of teachers.  While most agree with the premise, many of us recognize that our job requires us to be the scoring judges… and while I understand the reality of our roles and responsibilities as teachers, I believe that if we want to make a difference, we need to be focusing on the right things.  Take a look at Marian Small’s explanation of this below.  I wonder if the focus in our schools is on the “big” stuff, or the “little” stuff?

https://player.vimeo.com/video/136761933?color=a185ac&title=0&byline=0&portrait=0

Marian Small – It’s About Learning from LearnTeachLead on Vimeo.


Thinking again about Damian’s analogy of the ice skaters, I can’t help but think about one issue that wasn’t discussed.  We talked about what made the best skaters even better, but much of my thinking goes to those who struggle.  Most of our classrooms have a mix of students who are motivated to do well, and those who either don’t believe they can be successful or don’t care whether they are achieving.

If we focus our attention on scoring, rating, judging… basically providing tasks and then marking them… I believe we will likely be sending our struggling students the message that math isn’t for them.  On the other hand, if we focus on providing experiences where our students can learn, observing them as they learn, and then using our assessments to provide feedback or to decide which experiences need to come next, we will send the message that we will all improve.


Hopefully this sounds a lot like the Growth Mindset messages you have been hearing about!

Take a quick look at the video above where Jo Boaler shows us the results of a study comparing marks vs feedback vs marks & feedback.


So, how do you provide your students with the feedback they need to learn and grow?

How do you provide opportunities for your students to try things, to explore, make sense of things in an environment that is about learning, not performing?

What does it mean for you to provide feedback?  Is it only written?

How do you use these learning opportunities to provide feedback on your own teaching?


As  always, I try to ask a few questions to help us reflect on our own beliefs.  Hopefully we can continue the conversation here or on Twitter.

 

What does “Assessment Drives Learning” mean to you?

There are so many “head nod” phrases in education.  You know, the kind of phrase we all easily agree is a good thing, whatever it is we happen to be talking about.  For instance, someone says that “assessment should drive the learning” in our classrooms, and we all easily accept that this is a good practice.  Yet everyone is likely to have a completely different vision of what is meant by the phrase.

In this post, I want to illustrate 3 very different ways our assessments can drive our instruction, and how these practices lead to very different learning opportunities for our students.


Unit Sized Assessments

Some teachers start their year or their unit with a test to find out the skills their students need or struggle with.  These little tests (sometimes not so little) typically consist of a number of short, closed questions.  The idea here is that if we can find out where our students struggle, we will be able to better determine how to spend our time.

But let’s take a look at exactly how we do this.  The type of questions, the format of the test and the content involved not only have an effect on how our students view the subject and themselves as learners of math, they also have a dramatic effect on the direction of learning in our classrooms.  

For example, do the questions on the test refer to the types of questions students worked on last year (previous standards), or are they based on the things they are about to learn this year (this year’s standards)?  If you provide questions that are one grade below, your assessment data will tell you that your students struggle with last year’s topics… and your instruction for the next few days will likely be about trying to fill in the gaps from last year.  On the other hand, if you ask questions based on this year’s content, most of your students will likely do very poorly, and your data will tell you to teach the stuff you would have taught anyway without giving the test at all.  Either way, the messages our students receive are about their deficits… and our instruction for the next few days will likely relate to the things we just told our students they aren’t good at.  I can’t help but wonder how our students who struggle feel when given these messages.  Day 1 and they already see themselves as behind.

I also can’t help but wonder whether this is even helpful for building their skills.  As Daro points out below, when this is our main view of assessment guiding our instruction, we often end up providing experiences that keep the students who struggle struggling.

[Image: Phil Daro quote]


Daily Assessments

On the other hand, many teachers view assessment as guiding their practice through the use of daily assessment practices like math journals, exit cards or other ways of collecting information while the learning is still happening.  It is really important to note that these forms of assessment can look very different from teacher to teacher, or from lesson to lesson.  In my post titled Exit Cards: What do yours look like?  I shared 4 different types of information we often collect between lessons.  I really think the type of information we collect says a lot about our own beliefs, and our reflections on this evidence will likely shape the type of experiences we provide the next day.

When we use assessments like these regularly, we are probably more likely to stay on track with our curriculum standards; however, what we do with the information the next day will completely depend on the type of information we collect.


In-the-Moment Assessments

A third way to think of “assessment driving instruction” is to think of the in-the-moment decisions we make.  For example, classrooms that teach THROUGH problem solving will likely use instructional practices that help us make in-the-moment assessment decisions.  Take, for example, The 5 Practices for Orchestrating Productive Mathematics Discussions (linked here is a free copy of the book).  Here are the 5 practices and a brief explanation of how each might be useful as part of the assessment of our students.

1. Anticipating
• Do the problem yourself.
• What are students likely to produce?
• Which problems will most likely be the most useful in addressing the mathematics?

The first practice helps us prepare for WHAT we will be noticing.  Being prepared for the problem ahead is a really important place to start.
2. Monitoring
• Listen, observe students as they work
• Keep track of students’ thinking
• Ask questions of students to get them back on track or to think more deeply (without rescuing or funneling information)

The second practice helps us notice how students are thinking, what representations they might be using.  The observations and conversations we make here can be very powerful pieces of assessment data for us!

3. Selecting
• What do you want to highlight?
• Purposefully select those that will advance mathematical ideas of the group.

The third practice asks us to assess each of the students’ work, and determine which samples will be beneficial for the class.  Using our observations and conversations from practice 2, we can now make informed decisions.
4. Sequencing
• In what order do you want to present the student work samples?  (Typically only a few share)
• Do you want the most common to start first? Would you present misconceptions first?  Or would you start with the simplest sample first?
• How will the learning from the first solution help us better understand the next solution?
• Here we ask students specific questions, or ask the group to ask specific questions; we might ask students what they notice from their work…

The 4th practice asks us to sequence a few student samples in order to construct a conversation that will help all of our students understand the mathematics that can be learned from the problem.  This requires us to use our understanding of the mathematics our students are learning in relation to previous learning and to where the concepts will eventually lead (a developmental continuum, landscape or trajectory is useful here).
5. Connecting
• Craft questions or allow for students to discuss the mathematics being learned to make the mathematics visible (this isn’t about sharing how you did the problem, but learning what math we can learn from the problem).
• Compare and contrast 2 or 3 students’ work – what are the mathematical relationships?  We often state how great it is that we are different, but it is really important to show how the math each student is doing connects!

In the 5th and final practice, we orchestrate the conversation to help our class make connections between concepts, representations, strategies, big ideas…  Our role here is to assess where the conversation should go based on the conversations, observations and products we have seen so far.


So, I’m left wondering which of these 3 views of “assessment driving learning” makes the most sense?  Which one is going to help me keep on track?  Which one will help my students see themselves as capable mathematicians?  Which one will help my students learn the mathematics we are learning?

Whether we look at data from a unit, or from the day, or throughout each step in a lesson, Daro has 2 quotes that have helped form my opinion on the topic:

[Image: Phil Daro quote]

I can’t help but think that when we look for gaps in our students’ learning, we are going to find them.  When our focus is on these gaps, our instruction is likely more skills oriented, more procedural… Our view of our students becomes about what they CAN’T do.  And our students’ view of themselves and the subject diminishes.

[Image: Phil Daro quote]

“Need names a sled to low expectations”.  I believe when we boil down mathematics into the tiniest pieces then attempt to provide students with exactly the things they need, we lose out on the richness of the subject, we rob our students of the experiences that are empowering, we deny them the opportunity to think and engage in real discourse, or become interested and invested in what they are learning.  If our goal is to constantly find needs, then spend our time filling these needs, we are doing our students a huge disservice.

On the other hand, if we provide problems that offer every student access to the mathematics, and allow our students to answer in ways that makes sense to them, we open up the subject for everyone.  However, we still need to use our assessment data to drive our instruction.


As a little experiment, I wonder what it would look like if other subjects gave a skills test at the beginning of a unit to guide their instruction.  Humor me for a minute:

What if an English teacher used a spelling test as their assessment piece right before their unit on narratives?  Well, their assessment would likely tell them that the students’ deficits are in their spelling.  They couldn’t possibly start writing stories until their spelling improved!  What will their instruction look like for the next few days?  Lots of  memorization of spelling words… very little writing!

What if a Science teacher took a list of all of the vocabulary from a unit on Simple Machines and asked each student to match each term with its definition as their initial assessment?  What would this teacher figure out their students needed more of?  Obviously they would find that their students need more work with defining terms.  What will their instruction look like for the next few days?  Lots of definitions and memorizing terms… very few experiments!

What if a physical education teacher gave a quiz on soccer positions, rules and terms to start a unit on playing soccer?  What would this teacher figure out?  Obviously they would find out that many of their students didn’t know as much about soccer as expected.  What would their next few days look like?  Lots of reading of terms, rules, positions… very little physical activity!


A Few Things to Reflect on:
  • How do you see “assessment guiding instruction”?
  • Is there room for all 3 versions?
  • Which pieces of data are collected in your school by others?  Why?  Do you see these as helpful?
  • Which one(s) do you use well?
  • Do you see any negative consequences from your assessment practices?
  • How do your students identify with mathematics?  Does this relate to your assessment practices?

Being reflective is so key in our job!  Hopefully I’ve given you something to think about here.


Please respond with a comment, especially if you disagree (respectfully).  I’d love to keep the conversation going.

Learning Goals… Success Criteria… and Creativity?

I think in the everyday life of being a teacher, we often talk about “grading” instead of using more specific terms like assessment or evaluation (these are very different things, though).  I often hear conversations about assessment level 2 or level 4… and this makes me wonder how often we confuse “assessment” with “evaluation”.

Assessment comes from the Latin “assidere” which literally translates to “sit with” or “sit beside”.  The process of assessment is about learning how our students think, how well they understand.  To do this, we need to observe students as they are thinking… listen as they are working collaboratively… ask them questions to both push their thinking and learn more about their thoughts.


Evaluation, on the other hand, is the process where we attach a value to our students’ understanding or thinking.  This can be done through levels, grades, or percents.

Personally, I believe we need to do far more assessing and far less evaluating if we want to make sure we are really helping our students learn mathematics; however, for this post I thought I would talk about evaluating rather than assessing.


 

As a little experiment, a group of teachers I work with were asked to create a rubric they would use if their students were making chocolate chip cookies.  Think about this task for a second.  If every student in your class were making chocolate chip cookies, and it was your responsibility to evaluate their cookies based on a rubric, what criteria would you use?  What would the rubric look like?

Some of the rubrics looked like this:

[Image: a rubric with exact specifications for the cookies]

What do you notice here?  It becomes easy to judge a cookie when we make the diameter clear… or judge a cookie based on the number of chocolate chips… or set a specific thickness… or require an exact amount of sugar (though this last one might be hard to judge by looking at the final product).

While I am aware that setting clear standards, communicating our learning goals with students, and co-creating success criteria are important, and that these have been shown to increase student achievement, I can’t help but wonder how often we take away our students’ thinking and decision making when we do this before students have had time to explore their own thoughts first.

 

What if we didn’t tell our students what a good chocolate chip cookie looked like before we began trying things out?  Some might make things like this:

[Images: a few examples of fairly standard chocolate chip cookies]

But what if we have students that want to make things like this:

[Images: a few very different creations]

I think sometimes we want to explain everything SO CLEARLY so that everyone can be successful, but this can have the opposite effect.  Being really clear can take away from the thinking of our students.  Our rubrics need to allow for differences, but still hold high standards!  Ambiguity is completely OK in a rubric as long as we have parameters; over-specifying (like saying 1 chip per bite) limits what I can do.

What about the rubric below?  Is it helpful?  While the first rubric above showed exact specs that the cookies might include, this one is very vague.  So is this better or worse?

[Image: a much vaguer cookie rubric]


 

As we dig deeper into what quality math education looks like, we need to think deeper about the evidence we will accept for the word “understanding”!

…and by the way, are we evaluating  the student’s ability to bake or their final product?  If we are assessing baking skills, shouldn’t we include the process of baking?  Is following a recipe indicative of a “level 4” or an “A”?  Or should the student be baking, using trial and error and developing their own skills?  Then co-creating success criteria from the samples made…

If we show students the exact thing our cookies should look like, then there really isn’t any thinking involved… students might be able to make a perfect batch of cookies, and then not make another batch until next year during the “cookie unit” and totally forget everything they did last year (I think this is currently what a lot of math classes look like).

Learning isn’t about following rules though!  It’s about figuring things out and making sense of them in your own way, hearing others’ ideas after you have already had a try at it, learning after trying, being motivated to continue to perfect the thing you are trying to do.  We learn more from our failures, and from constructing our understanding, than we ever will from following directions!


Creativity happens in math when we give room for it.  Many don’t see math as being creative though… I wish they did!