Starting where our students are… with THEIR thoughts

A common trend in education is to give students a diagnostic in order for us to know where to start. While I agree we should be starting where our students are, I think this can look very different in each classroom.  Does starting where our students are mean we give a test to determine ability levels, then program based on these differences?  Personally, I don’t think so.

Giving out a test or quiz at the beginning of instruction isn’t the ideal way of learning about our students.  Seeing the product of someone’s thinking often isn’t helpful in seeing HOW that child thinks (read “What does ‘Assessment Drives Learning’ mean to you?” for more on this). Instead, I offer an alternative: starting with a diagnostic task!  Here is an example of a diagnostic task given this week:

Taken from Van de Walle’s Teaching Student Centered Mathematics

This lesson is broken down into 4 parts.  Below are summaries of each:

Part 1 – Tell 1 or 2 interesting things about your shape

Start off in groups of 4.  One student picks up a shape and says something (or 2) interesting about that shape.

Here you will notice how students think about shapes. Will they describe the shape as “looking like a mountain” or “it’s an hourglass” (visualization is level 1 on Van Hiele’s levels of Geometric thought)… or will they describe attributes of that shape (this is level 2 according to Van Hiele)?

As the teacher, we listen to the things our students talk about so we will know how to organize the conversation later.

Part 2 – Pick 2 shapes.  Tell something similar or different about the 2 shapes.

Students randomly pick 2 shapes, and each person tells the group one way the two shapes are similar or different. Everyone offers their thoughts before 2 new shapes are picked.

Students who might have offered level 1 comments a minute ago will now need to consider thinking about attributes. Again, as the teacher, we listen for the attributes our students understand (e.g., number of sides, right angles, symmetry, number of vertices, number of pairs of parallel sides, angles…), and which attributes our students might be informally describing (e.g., using phrases like “corners”, or using gestures when attempting to describe something they haven’t learned yet).  See the chart below for a better description of Van Hiele’s levels:

Van Hiele’s chart shared by NCTM

At this time, it is ideal to hold conversations with the whole group about any disagreements that might exist.  For example, the pairs of shapes above created disagreements about number of sides and number of vertices.  When we have disagreements, we need to bring these forward to the group so we can learn together.

Part 3 – Sorting using a “Target Shape”

Pick a “Target Shape”. Think about one of its attributes.  Sort the rest of the shapes based on the target shape.

The 2 groups above sorted their shapes based on different attributes. Can you figure out what their thinking is?  Were there any shapes that they might have disagreed upon?

Part 4 – Secret sort

Here, we want students to be able to think about shapes that share similar attributes (this can potentially lead our students into level 2 type thinking, depending on our sort).  I suggest we provide shapes already sorted for our students, but sorted in a way that no group has used yet. Ideally, this sort reflects something both in your standards and something you believe your students are ready to think about (based on your observations so far in this lesson).

In this lesson, we have noticed how our students think.  We could assess the level of geometric thought they are currently using, the attributes they are comfortable describing, or misconceptions that need to be addressed.  But this lesson isn’t just about us gathering information; it is also about our students being actively engaged in the learning process!  We are intentionally helping our students make connections, reason and prove, learn/revisit vocabulary, think deeper about specific attributes…

I’ve shared my thoughts before about what day 1 of any given topic should look like, and how we can use assessment to drive instruction; here, however, I wanted to write specifically about diagnostics.

In the above example, we listened to our students and used our understanding of our standards and developmental research to know where to start our conversations. As Van de Walle explains, formative assessment needs to be more like a streaming video, not just a test at the beginning!

If it’s formative, it needs to be ongoing… part of instruction… based on our observations, conversations, and the things students create…  This requires us to start with rich tasks that are open enough to allow everyone an entry point, and for us to have a plan to move forward!

I’m reminded of Phil Daro’s quote:


For us to make these shifts, we also need to consider the mindsets that have to shift with them.  Statements like the following stand in the way of allowing our students to be actively engaged in the learning process, starting with where they currently are:

  • My students aren’t ready for…
  • I need to start with the basics…
  • My students have gaps in their…
  • They don’t know the vocabulary yet…

These thoughts are counterproductive and lead to the Pygmalion effect (teacher beliefs about ability become students’ self-fulfilling prophecies).  When WE decide which students are ready for what tasks, I worry that we might be holding many of our students back!

If we want to know where to start our instruction, start where your students are in their understanding… with their own thoughts!  When we listen to and observe our students first, we will know how to push their thinking!

How do you give feedback?

There is a lot of research telling us how important feedback is to student performance; however, there’s little discussion about how we give this feedback and what the feedback actually looks like in mathematics. To start, here are a few important points research makes about feedback:

  • The timing of feedback is really important
  • The recipient of the feedback needs to do more work than the person giving the feedback
  • Students need opportunities to do something with the feedback
  • Feedback is not the same thing as giving advice

I will talk about each of these toward the end of this post.  First, I want to explain a piece about feedback that isn’t mentioned enough: providing students with feedback positions us and our students as learners.  Think about it for a second: when we “mark” things, our attention starts with what students get right, but quickly moves to trying to spot errors. Basically, when marking, we are looking for deficits. On the other hand, when we are giving feedback, we look for our students’ actual thinking.  We notice things as almost right, we notice misconceptions or overgeneralizations… then think about how to help our students move forward.  When giving feedback, we are looking for our students’ strengths and readiness.  Asset thinking is FAR more productive, FAR more healthy, FAR more meaningful than grades!

Feedback Doesn’t Just Happen at the End!

Let’s take an example of a lesson involving creating, identifying, and extending linear growing patterns.  This is the 4th day in a series of lessons from a wonderful resource called From Patterns to Algebra.  Today, the students here were asked to create their own design that follows the pattern given to them on their card.

Their pattern card read: Output number = Input number × 3 + 2
Their pattern card read: Output number = Input number × 7
Their pattern card read: Output number = Input number × 4

Their pattern card read: Output number = Input number × 3 + 1
Their pattern card read: Output number = Input number × 8 + 2
Their pattern card read: Output number = Input number × 5 + 2

Once students made their designs, they were instructed to place their card upside down on their desk, and to circulate around the room quietly looking at others’ patterns.  Once they believed they knew the “pattern rule” they were allowed to check to see if they were correct by flipping over the card.
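The pattern cards are all linear rules of the form output = input × multiplier + constant. As a quick sketch of the arithmetic behind the cards (the card labels and function name here are my own shorthand, not from the resource), we can tabulate the tile counts each card produces:

```python
def pattern_rule(multiplier, constant):
    """Build a rule: tile count at a position = position * multiplier + constant."""
    return lambda position: position * multiplier + constant

# The six cards from the lesson, written as (multiplier, constant) pairs.
cards = {
    "x3+2": (3, 2), "x7": (7, 0), "x4": (4, 0),
    "x3+1": (3, 1), "x8+2": (8, 2), "x5+2": (5, 2),
}

for label, (m, c) in cards.items():
    rule = pattern_rule(m, c)
    counts = [rule(position) for position in range(1, 5)]
    print(f"{label}: positions 1-4 -> {counts}")
```

For example, the “x3+2” card produces 5, 8, 11, 14 tiles at positions 1 through 4, which is also how a student can describe a far position (the 10th position holds 10 × 3 + 2 = 32 tiles) without building it.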

After several minutes of quiet thinking, and rotating around the room, the teacher stopped everyone and led the class in a lesson close that involved rich discussions about specific samples around the room.  Here is a brief explanation of this close:

Teacher:  Everyone think of a pattern that was really easy to tell what the pattern rule was.  Everyone point to one.  (Class walks over to the last picture above – picture 6).  What makes this pattern easy for others to recognize the pattern rule?  (Students respond and engage in dialogue about the shapes, colours, orientation, groupings…).

Teacher:  Can anyone tell the class what the 10th position would look like?  Turn to your partner and describe what you would see.  (Students share with neighbor, then with the class)

Teacher:  Think of one of the patterns around the room that might have been more difficult for you to figure out.  Point to one you want to talk about with the class.  (Students point to many different ones around the room.  The class visits several and engages in discussions about each.  Students notice some patterns are harder to count… some patterns follow the right number of tiles but don’t follow a geometric pattern… some patterns don’t reflect the rule listed on the card.  Each of these noticings is given time to be discussed, in an environment that is about learning, not producing.  Everyone understands that mistakes are part of the learning process here and is eager to take their new knowledge and apply it.)
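When students check whether a design “follows the right number of tiles,” they are, in effect, computing first differences: in a linear pattern the count grows by the same amount each position, and whatever is left over at position 1 is the additive constant. A hypothetical sketch of that reasoning (the function name is mine, not from the lesson):

```python
def infer_linear_rule(counts):
    """Guess the rule 'output = position * m + b' from tile counts at
    positions 1, 2, 3, ...; return None if the growth isn't constant."""
    differences = {y - x for x, y in zip(counts, counts[1:])}
    if len(differences) != 1:
        return None  # tile counts don't grow by a constant amount
    m = differences.pop()   # the multiplicative part of the rule
    b = counts[0] - m       # position 1 holds m * 1 + b tiles
    return m, b

print(infer_linear_rule([5, 8, 11, 14]))  # a design matching "x3+2" -> (3, 2)
print(infer_linear_rule([5, 8, 12]))      # a design that breaks the pattern -> None
```

This mirrors what the class discussion surfaces: designs whose counts don’t grow constantly can’t match any card, no matter how the tiles are arranged.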

The teacher then asks students to go back to their desks and gives each student a new card.  The instructions are similar, except, now she asks students to make it in a way that will help others recognize the patterns easily.

The process of creating, walking around the room silently, then discussing happens a second time.

To end the class, the teacher hands out an exit card asking students to articulate why some patterns are easier than others to recognize.  Students are expected to include examples.

At the beginning of this post I shared 4 points from research about feedback.  I want to briefly talk about each:

The timing of feedback is really important

Feedback is best when it happens during the learning.  While I can see times when it would be appropriate for us to collect items and write feedback for students, having the feedback happen in the moment is ideal!  Dan Meyer reminds us that instant feedback isn’t ideal: students need enough time to think about what they did right or wrong, and what needs to be corrected.  On the other hand, having students submit items and giving them back a week later isn’t ideal either!  Having time to think and receive feedback DURING the learning experience is best.  In the example above, feedback happened several times:

  1. As students walked around looking at patterns: once they thought they knew the rule, they peeked at the card.
  2. As students discussed several samples: they gave each other feedback about which patterns make sense… which ones visually represented the numeric value… which patterns could help us predict future visuals/values.
  3. Afterward, once the teacher collected the exit cards.

The recipient of the feedback needs to do more work than the person giving the feedback

Often we as teachers spend too much time writing detailed notes offering pieces of wisdom.  While this is often helpful, it isn’t feasible on a daily basis. In fact, our doing all of the thinking doesn’t equate to students improving!  In the example above, students were expected to notice patterns that made sense to them, and they engaged in conversations about those patterns.  Each student had to recognize how to make their pattern better because of the conversations.  The work of the feedback belonged, for the most part, to each student.

Students need opportunities to do something with the feedback

Once students receive feedback, they need to use that feedback to continue to improve.  In the above example, the students had an opportunity to create new patterns after the discussions.  After viewing the 2nd creations and seeing the exit cards, verbal or written feedback could be given to those that would benefit from it.

Feedback is not the same thing as giving advice

This last piece is an interesting one.  Feedback, by definition, is about seeing how close you have come to achieving your goal.  It is about what you did, not about what you need to do next.  “I noticed that you have switched the multiplicative and additive pieces in each of your patterns” is feedback.  “I am not sure what the next position would look like because I don’t see a pattern here” is feedback.  “The additive parts need to remain constant in each position” is not feedback… it is advice (or feedforward).

In the example above, the discussions allowed for ample time for feedback to happen.  If students were still struggling, it is appropriate to give direct advice.  But I’m not sure students would have understood any advice, or retained WHY they needed to take advice if we offered it too soon.

So I leave you with some final questions for you:

  • When do your students receive feedback?  How often?
  • Who gives your students their feedback?
  • Is it written?  Or verbal?
  • Which of these do you see as the most practical?  Meaningful for your students?  Productive?
  • How do you make time for feedback?
  • Who is doing the majority of the work… the person giving or the person receiving the feedback?
  • Do your students engage in tasks that allow for multiple opportunities for feedback to happen naturally?

PS.  Did you notice which of the students’ examples above contained an error?  What feedback would you give?  How would they receive this feedback?




Who makes the biggest impact?

A few years ago I had the opportunity to listen to Damian Cooper (an expert on assessment and evaluation here in Ontario). He shared an analogy about the Olympic athletes who had just competed in Sochi, asking us to think specifically about the Olympic ice skaters…

He asked us who we thought made the biggest difference in the skaters’ careers: the scoring judges or their coaches?

Think about this for a second…  An ice skater trying to become the best at their sport has many influences on their life…  But who makes the biggest difference: the scoring judges along the way, or their coaches?  Or is it a mix of both?

Damian told us something like this:

The scoring judge tells the skater how well they did… However, the skater already knows if they did well or not.  The scoring judge just CONFIRMS if they did well or not.  In fact, many skaters might be turned off of skating because of low scores!  The scoring judge is about COMPETITION.  Being accurate about the right score is their goal.

On the other hand, the coach’s role is only to help the skater improve. They watch, give feedback, ask them to repeat necessary steps… The coach knows exactly what you are good at, and where you need help. They know what to say when you do well, and how to get you to pick yourself up. Their goal is for you to become the very best you can be!  They want you to succeed!

In the everyday busyness of teaching, I think we often confuse the term “assessment” with “evaluation”.  Evaluating is about marking, levelling, grading… while the word assessment comes from the Latin assidere, which means “to sit beside”.  Assessment is about learning our students’ thinking processes, seeing how deeply they understand something…  These two things, while related, are very different processes!


I have shared this analogy with a number of teachers.  While most agree with the premise, many of us recognize that our job requires us to be the scoring judges… and while I understand the reality of our roles and responsibilities as teachers, I believe that if we want to make a difference, we need to be focusing on the right things.  I wonder if the focus in our schools is on the “big” stuff or the “little” stuff?  Take a look at Marian Small’s explanation of this below:

Marian Small – It’s About Learning from LearnTeachLead on Vimeo.

Thinking back to Damian’s analogy of the ice skaters, I can’t help but notice one issue that wasn’t discussed.  We talked about what made the best skaters even better, but my thoughts often turn to those who struggle.  Most of our classrooms have a mix of students who are motivated to do well and students who either don’t believe they can be successful, or don’t care whether they are achieving.

If we focus our attention on scoring, rating, judging… basically providing tasks and then marking them… I believe we will likely be sending our struggling students messages that math isn’t for them.  On the other hand, if we focus on providing experiences where our students can learn, and we can observe them as they learn, then use our assessments to provide feedback or know which experiences we need to do next, we will send messages to our students that we will all improve.

Hopefully this sounds a lot like the Growth Mindset messages you have been hearing about!

Take a quick look at the video above where Jo Boaler shows us the results of a study comparing marks vs feedback vs marks & feedback.

So, how do you provide your students with the feedback they need to learn and grow?

How do you provide opportunities for your students to try things, to explore, make sense of things in an environment that is about learning, not performing?

What does it mean for you to provide feedback?  Is it only written?

How do you use these learning opportunities to provide feedback on your own teaching?

As always, I try to ask a few questions to help us reflect on our own beliefs.  Hopefully we can continue the conversation here or on Twitter.


What does “Assessment Drives Learning” mean to you?

There are so many “head nod” phrases in education.  You know, the kind of phrase everyone easily agrees describes a good thing.  For instance, someone says that “assessment should drive the learning” in our classrooms, and we all readily accept that this is good practice.  Yet everyone is likely to have a completely different vision of what is meant by the phrase.

In this post, I want to illustrate 3 very different ways our assessments can drive our instruction, and how these practices lead to very different learning opportunities for our students.

Assessment Drives Learning

Unit Sized Assessments

Some teachers start their year or their unit with a test to find out the skills their students need or struggle with.  These little tests (sometimes not so little) typically consist of a number of short, closed questions.  The idea here is that if we can find out where our students struggle, we will be able to better determine how to spend our time.

But let’s take a look at exactly how we do this.  The type of questions, the format of the test and the content involved not only have an effect on how our students view the subject and themselves as learners of math, they also have a dramatic effect on the direction of learning in our classrooms.  

For example, do the questions on the test refer to the types of questions you worked on last year, according to previous Standards, or are they based on the things you are about to learn this year (this year’s Standards)?  If you provide questions that are 1 grade below, your assessment data will tell you that your students struggle with last year’s topics… and your instruction for the next few days will likely try to fill in the gaps from last year.  On the other hand, if you ask questions based on this year’s content, most of your students will likely do very poorly, and your data will tell you to teach the stuff you would have taught anyway, without giving the test at all.  Either way, the messages our students receive are about their deficits… and our instruction for the next few days will likely relate to the things we just told our students they aren’t good at.  I can’t help but wonder how our students who struggle feel when given these messages.  Day 1 and they already see themselves as behind.

I also can’t help but wonder whether this approach even helps their skills.  As Daro points out below, when this is our main view of assessment guiding our instruction, we often end up providing experiences that keep those who struggle struggling.

Assessment Drives Learning (2)

Daily Assessments

On the other hand, many teachers see assessment guiding their practice through daily assessment practices like math journals, exit cards, or other ways of collecting information while the learning is still happening.  It is really important to note that these forms of assessment can look very different from teacher to teacher, or from lesson to lesson.  In my post titled Exit Cards: What do yours look like? I shared 4 different types of information we often collect between lessons.  I really think the type of information we collect says a lot about our own beliefs, and our reflections on this evidence will likely shape the type of experiences we offer the next day.

When we use assessments like these regularly, we are probably more likely to stay on track with our curriculum Standards; however, what we do with this information the next day will completely depend on the type of information we collect.

In-the-Moment Assessments

A third way to think of “assessment driving instruction” is to think of the in-the-moment decisions we make.  For example, classrooms that teach THROUGH problem solving will likely use instructional practices that support in-the-moment assessment decisions.  Take, for example, the 5 Practices for Orchestrating Productive Mathematics Discussions (linked here is a free copy of the book).  Here are the 5 Practices and a brief explanation of how each might be useful as part of the assessment of our students.

1. Anticipating
• Do the problem yourself.
• What are students likely to produce?
• Which problems will most likely be the most useful in addressing the mathematics?

The first practice helps us prepare for WHAT we will be noticing.  Being prepared for the problem ahead is a really important place to start.

2. Monitoring
• Listen, observe students as they work
• Keep track of students’ thinking
• Ask questions of students to get them back on track or to think more deeply (without rescuing or funneling information)

The second practice helps us notice how students are thinking, what representations they might be using.  The observations and conversations we make here can be very powerful pieces of assessment data for us!

3. Selecting
• What do you want to highlight?
• Purposefully select those that will advance mathematical ideas of the group.

The third practice asks us to assess each student’s work and determine which samples will be beneficial for the class.  Using our observations and conversations from practice 2, we can now make informed decisions.

4. Sequencing
• In what order do you want to present the student work samples?  (Typically only a few share)
• Do you want the most common to start first? Would you present misconceptions first?  Or would you start with the simplest sample first?
• How will the learning from the first solution help us better understand the next solution?
• Here we ask students specific questions, or ask the group to ask specific questions; we might ask students what they notice from their work…

The 4th practice asks us to sequence a few student samples in order to construct a conversation that will help all of our students understand the mathematics that can be learned from the problem.  This requires us to use our understanding of the mathematics our students are learning in relation to previous learning and where the concepts will eventually lead (a developmental continuum, landscape, or trajectory is useful here).

5. Connecting
• Craft questions or allow for students to discuss the mathematics being learned to make the mathematics visible (this isn’t about sharing how you did the problem, but learning what math we can learn from the problem).
• Compare and contrast 2 or 3 students’ work – what are the mathematical relationships?  We often state how great it is that we are different, but it is really important to show how the math each student is doing connects!

In the 5th and final practice, we orchestrate the conversation to help our class make connections between concepts, representations, strategies, big ideas…  Our role here is to assess where the conversation should go based on the conversations, observations and products we have seen so far.

So, I’m left wondering which of these 3 views of “assessment driving learning” makes the most sense?  Which one is going to help me keep on track?  Which one will help my students see themselves as capable mathematicians?  Which one will help my students learn the mathematics we are learning?

Whether we look at data from a unit, or from the day, or throughout each step in a lesson, Daro has 2 quotes that have helped form my opinion on the topic:

Assessment Drives Learning (3)

I can’t help but think that when we look for gaps in our students’ learning, we are going to find them.  When our focus is on these gaps, our instruction is likely more skills oriented, more procedural… Our view of our students becomes about what they CAN’T do.  And our students’ view of themselves and of the subject diminishes.

Assessment Drives Learning (4)

“Need names a sled to low expectations”.  I believe when we boil down mathematics into the tiniest pieces then attempt to provide students with exactly the things they need, we lose out on the richness of the subject, we rob our students of the experiences that are empowering, we deny them the opportunity to think and engage in real discourse, or become interested and invested in what they are learning.  If our goal is to constantly find needs, then spend our time filling these needs, we are doing our students a huge disservice.

On the other hand, if we provide problems that offer every student access to the mathematics, and allow our students to answer in ways that makes sense to them, we open up the subject for everyone.  However, we still need to use our assessment data to drive our instruction.

As a little experiment, I wonder what it would look like if other subjects gave a skills test at the beginning of a unit to guide their instruction.  Humor me for a minute:

What if an English teacher used a spelling test as their assessment piece right before a unit on narratives?  Their assessment would likely tell them that the students’ deficits are in spelling.  They couldn’t possibly start writing stories until their spelling improved!  What will their instruction look like for the next few days?  Lots of memorizing spelling words… very little writing!

What if a Science teacher took a list of all of the vocabulary from a unit on Simple Machines and asked each student to match each term with its definition as their initial assessment?  What would this teacher figure out their students needed more of?  Obviously they would find that their students need more work with defining terms. What will their instruction look like for the next few days?  Lots of definitions and memorizing terms… very few experiments!

What if a physical education teacher gave a quiz on soccer positions, rules, and terms to start a unit on playing soccer?  What would this teacher figure out?  Obviously they would find that many of their students didn’t know as much about soccer as expected.  What would their next few days look like?  Lots of reading of terms, rules, and positions… very little physical activity!

  • How do you see “assessment guiding instruction”?
  • Is there room for all 3 versions?
  • Which pieces of data are collected in your school by others?  Why?  Do you see this as helpful?
  • Which one(s) do you use well?
  • Do you see any negative consequences from your assessment practices?
  • How do your students identify with mathematics?  Does this relate to your assessment practices?

Being reflective is so key in our job!  Hopefully I’ve given you something to think about here.

Please respond with a comment, especially if you disagree (respectfully).  I’d love to keep the conversation going.

Learning Goals… Success Criteria… and Creativity?

I think in the everyday life of being a teacher, we often talk about “grading” instead of using more specific terms like assessment or evaluation (these are very different things).  I often hear conversations about a level 2 or level 4 assessment… and this makes me wonder how often we confuse “assessment” with “evaluation”.

Assessment comes from the Latin “assidere” which literally translates to “sit with” or “sit beside”.  The process of assessment is about learning how our students think, how well they understand.  To do this, we need to observe students as they are thinking… listen as they are working collaboratively… ask them questions to both push their thinking and learn more about their thoughts.


Evaluation, on the other hand, is the process where we attach a value to our students’ understanding or thinking.  This can be done through levels, grades, or percents.

Personally, I believe we need to do far more assessing and far less evaluating if we want to make sure we are really helping our students learn mathematics, however, for this post I thought I would talk about evaluating and not assessing.


As a little experiment, a group of teachers I work with were asked to create a rubric they would use if their students were making chocolate chip cookies.  Think about this task for a second.  If every student in your class were making chocolate chip cookies, and it was your responsibility to evaluate their cookies based on a rubric, what criteria would you use?  What would the rubric look like?

Some of the rubrics looked like this:

Rubric 3

What do you notice here?  It becomes easy to judge a cookie when we make the diameters clear… or judge a cookie based on the number of chocolate chips… or set a specific thickness… or find an exact amount for its sugar content (this last one might be harder by looking at the final product).

While I am aware that setting clear standards is important, that communicating learning goals with students and co-creating success criteria matter, and that these practices have been shown to increase student achievement, I can’t help but wonder how often we take away our students’ thinking and decision making when we do all of this before they have had time to explore their own thoughts first.


What if we didn’t tell our students what a good chocolate chip cookie looked like before we began trying things out?  Some might make things like this:

or this?

or this?

But what if we have students that want to make things like this:

or this?

or this?

Or this?

I think sometimes we want to explain everything SO CLEARLY so that everyone can be successful, but this can have the opposite effect.  Being really clear can take away from the thinking of our students.  Our rubrics need to allow for differences, but still hold high standards!  Ambiguity is completely OK in a rubric as long as we have parameters (saying 1 chip per bite limits what I can do).

What about the rubric below?  Is it helpful?  While the first rubric above showed exact specs that the cookies might include, this one is very vague.  So is this better or worse?

Rubric 4


As we dig deeper into what quality math education looks like, we need to think deeper about the evidence we will accept for the word “understanding”!

…and by the way, are we evaluating the student’s ability to bake, or their final product?  If we are evaluating baking skills, shouldn’t we include the process of baking?  Is following a recipe indicative of a “level 4” or an “A”?  Or should the student be baking, using trial and error, and developing their own skills?  Then co-creating success criteria from the samples made…

If we show students the exact thing our cookies should look like, then there really isn’t any thinking involved… students might be able to make a perfect batch of cookies, then not make another batch until next year during the “cookie unit” and totally forget everything they did last year (I think this is what a lot of math classes currently look like).

Learning isn’t about following rules, though!  It’s about figuring things out and making sense of them in your own way, hearing others’ ideas after you have already had a try, learning after trying, being motivated to continue perfecting the thing you are trying to do.  We learn more from our failures, from constructing our own understanding, than we ever will from following directions!

Creativity happens in math when we give room for it.  Many don’t see math as being creative though… I wish they did!