I think we can all agree that there are many different ways for our students to show what they know or understand, and that some problems ask for deeper understanding than others. In fact, many standardized math assessments, like PISA, aim to ask students questions at varying difficulty levels (PISA uses 6 difficulty levels) to assess the same concept/skill. If we can learn one thing from assessments like these, hopefully it is how to expect more of our students by going deeper… and in math class, this means asking better questions.

Robert Kaplinsky is a great example of an educator who has helped us learn how to ask better questions. His work on Depth of Knowledge (DOK) has helped many teachers reflect on the questions they ask and has offered them examples of what higher-DOK questions/problems look like.

In Ontario, though, we actually have an achievement chart aimed at helping us think more about the types of questions/problems we expect our students to be able to do. Basically, it is a rubric showing 4 levels of achievement across 4 categories. In Ontario it is expected that every teacher evaluate their students based on each of these categories. Many teachers, however, struggle to see the differences between these categories. Marian Small was recently the keynote speaker at OAME, where she helped us think more about the categories by showing us how to delineate between the different categories of questions/problems:

- **Knowledge**
- **Understanding**
- **Application**
- **Thinking**

## Knowledge vs. Understanding

Below are a few of Marian Small’s examples of questions that are designed to help us see the difference between questions aimed at **knowledge** and questions aimed at **understanding**:

As you can see from the above examples, each of the **knowledge** questions asks students to provide a correct answer. However, each of the **understanding** questions requires students to both get a correct answer AND show that they understand some of the key relationships involved. Marian’s point in showing us these comparisons was that we need to spend much more time and attention making sure our students **understand** the math they are learning.

Each of the questions that asks students to show their **understanding** also helps us see what **knowledge** our students have, but the reverse is not true!

Hopefully you can see the potential benefits of striving for **understanding**, but I do believe these shifts need to be deliberate. My recommendation to help us aim for **understanding** is to pose more questions that ask students to:

- Draw a visual representation to show why something works
- Provide an example that fits given criteria
- Explain when examples will or won’t work
- Make choices (i.e., which numbers, visual representations… will be best to show proof)
- Show their understanding of key “Big Ideas” and relationships

## Application vs. Thinking

Below are a few examples that can help us delineate the differences between **application** and **thinking**:

These examples might be particularly important for us to think about. To begin with, **application** questions often do some or all of the following:

- use a context
- require students to use things they already should know
- provide a picture(s) or example(s) for students to see
- provide almost all of the information and ask the student to find what is missing

**Thinking** questions, on the other hand, are the basis for what Stein et al. called “Doing Mathematics.” In her presentation, Marian discussed how these types of questions are the reason those who enjoy mathematics like doing it. Thinking and reasoning are at the heart of what mathematics is all about! **Thinking** questions typically require the student to:

- use non-algorithmic thinking
- make sense of the problem
- use relevant knowledge
- notice important features of the problem
- choose a possible solution path and possibly adjust if needed
- persevere to monitor their own progress

Let’s take a minute to compare questions aimed at **application** and questions aimed at **thinking**. **Application** questions, while quite helpful in learning mathematics concepts (contexts should be used AS students learn), typically offer less depth than **thinking** questions. In each of the above **application** questions, a student could easily ignore the context and fall back on learned procedures. On the other hand, each of the **thinking** questions might require the student to make and test conjectures, using the same procedures repeatedly to find a possible solution.

Ideally, we need to spend more time where our students are thinking… more time discussing thinking questions… and focus more on the important relationships/connections that will arise through working on these problems.

## Final Thoughts

Somehow we need to find the right balance between using the 4 types of questions above. However, we need to recognize that most textbooks, most teacher-made assessments, and most online resources focus heavily (if not exclusively) on **knowledge** and occasionally **application**. The balance is way off!

Focusing on being able to monitor our own types of questions isn’t enough, though. We need to recognize that relationships/connections between concepts/representations are at the heart of expecting more from our students. We need to know that thinking and reasoning are HOW our students should be learning. We need to confront practices that stand in the way of us moving toward **understanding** and **thinking**, and set aside resources that focus mainly on **knowledge** or **application**. If we want to make strides forward, we need to find resources that will help US understand the material more deeply and provide us with good examples.

## Questions to Reflect on:

- What did your last quiz or test or exit card look like? What is your current balance of question types?
- What resources do you use? What balance do they have?
- Where do you go to find better **Understanding** or **Thinking** questions?
- What was the last problem you did that made you interested in solving it? What was it about that problem that made it interesting? Likely it was a **Thinking** question.
- Much of the work related to filling gaps, intervention, assessment driving learning… points teachers toward students’ missing **knowledge**. How can we focus our attention more toward **understanding** and **thinking** given this reality?
- How can we better define “mastery” given the 4 categories above? Mastery must be seen as more than getting a bunch of simple knowledge questions correct!
- Who do you turn to to help you think more about the questions you ask? What professional relationships might be helpful for you?

If you haven’t already, please take a look at Marian Small’s entire presentation, where she labels understanding and thinking as the “fundamentals of mathematics.”

I’d love to continue the conversation about the questions we ask of our students. Leave a comment here or on Twitter @MarkChubb3

Thanks for writing this up. I feel like the more I learn about questioning, the more I’m curious. I think a big portion is also the gap between what we WANT to assess and what technology is capable of assessing.


I agree that this is an issue. Ideally WE would be the ones “assessing.” I like to point out that the Latin root of assess means “to sit beside.” This is really different than marking or evaluating; it’s about understanding how our kids think. But this also means we need to ask teachers (who are crazy busy) to want to learn about their kids.


Yes, we are crazy busy, but what stops me is that I don’t always understand their answers (even with their explanations). I am always trying to pinpoint what I need to assess and create a true, accurate assessment. That is what is hard.

So, where do we go for better understanding and thinking questions?

Thanks; looking forward to your reply…


If we look at the knowledge vs. understanding questions again, I hope you can see that each understanding answer shouldn’t be very different from student to student. Hopefully you will see commonalities.

As for where to look for better questions, I would recommend Marian Small’s resources to help you get started. But ideally, you might want to think about how to have professional conversations with others to help you create your own questions too. Hopefully the criteria I set in the post might be helpful in doing this.

Feel free to ask me to help too. You can find me on Twitter @markchubb3


This is an excellent article! Thank you so much for writing and sharing!


I am glad you pointed out that using a context is only “sometimes” an indicator of application. In Ontario, the category of application is often confused with “word problem” or “in context,” but that is not the case. In fact, it is easy to write a word problem with a context that is nothing more than a knowledge question. One of the things we use to distinguish application questions when we write questions for the EQAO assessment is that, for a question to be application, the student should be expected to use a tool without being told to use that tool. For example: “Determine the length of the longest side of that triangle using the Pythagorean theorem” would be a knowledge question, but asking instead “determine the length of the missing side” is considered application (assuming the triangle is right-angled), since the mathematical tool (the Pythagorean Theorem) was not stated. So that is not a context, but it is still application.
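To make this distinction concrete, here is a worked example with hypothetical side lengths (the numbers are mine, not from the comment). In both versions of the question the underlying computation is identical; what differs is whether the student is handed the tool or must recognize it. For a right triangle with legs $a = 5$ and $b = 12$, the missing hypotenuse is:

```latex
c = \sqrt{a^2 + b^2} = \sqrt{5^2 + 12^2} = \sqrt{25 + 144} = \sqrt{169} = 13
```

In the knowledge version, the theorem is named and the student only executes the computation; in the application version, the student must first recognize that a right triangle with two known sides calls for the Pythagorean theorem, and only then carry out the same steps.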
