# Thank you, Archimedes!

I really like hydrostatics. Of course I like moving water even better, but even static water is great. And there are so many things to explore! If I were to teach hydrostatics any time soon, there are so many little teasers I would use.

For example this one:

A sailor is standing on the bottom step of a rope ladder, painting the outside of his ship. The bottom step is 50 cm above the water, and the distance between steps is 30 cm. The tide is coming in, and the water is expected to rise by 1.5 m. How many steps will the sailor have to climb in order to keep his feet dry?

Or this one:

How much heavier does the water-filled trough of a ship lift get when a ship is floating inside?

A: the weight of the ship
B: the weight of all parts of the ship above the water line
C: not at all
D: I don’t know*

You might think that these are really easy questions, but then you might be surprised! Funnily enough I drafted this post weeks ago, and then last week a colleague of mine talked about how this was a really difficult question, so I had to post it now ;-)

Another question he mentioned, which students found really difficult, is similar to this one:

If an anchor is dropped from a boat into a pond, what will happen to the water level?

A: It will rise
B: It will sink
C: Nothing
D: I don’t know

The answer to that one is in this post.

*Remember why we always include the “I don’t know” option? If not, check out some more posts on multiple choice questions under the MCQ-tag!

# What do I want from my students – sense-making or answer-making?

On different approaches to peer-instruction and why one might want to use them.

Having sat in many different lectures by many different professors over the last year, and having given feedback on the methods used in most of those lectures, I find myself wondering how we can define a standard or even best practice for using clickers. Even when professors go through the classical Mazur steps, there are so many different ways they interpret those! Do we, for example, make sure that the first vote is really an individual vote, so that no interaction happened between students before they have to make this very first decision? I have not seen that implemented at my university. But does that matter? And why would one decide for or against it? I would guess that in most cases I have observed there was really no conscious decision being made – things just happen to happen a certain way.

A paper that I liked a lot, and which describes a framework for describing and capturing instructional choices, is “Not all interactive engagement is the same: Variations in physics professors’ implementation of Peer Instruction” by Turpen and Finkelstein (2009). I don’t want to talk about their framework as such, but there are a couple of questions they ask that I think are a helpful basis for reflection on our own teaching practices, for example the questions clustering around the topic of listening to students and using the information from their answers. A question like “what do I want students to learn, and what do I want to learn from the students?” might seem basic at first, but it really is not. What do I want students to learn? No matter what it is, what this question implies is “is the clicker question I am about to ask going to help them in that endeavor?”. The clicker question might be just testing knowledge, or it might make students think about a specific concept which they might get an even better grasp of by reflecting on your question.

And what do I want to learn from my students? The initial reaction of people I have talked with over the last year or so is puzzlement at this question. Why would I want to learn anything from my students? I am there to teach, they are there to learn. But is there really any point in asking questions if you are not trying to learn from them? Maybe not “learn” as in “learn new content”, but learn about their grasp of the topic, their learning process, where they are at right now. Do I use clicker questions as a way to test their knowledge, to inform my next steps during the class, to help them get a deeper understanding of the topic, to make them discuss? Those are all worthwhile goals, for sure, but they are different. And any one clicker question might or might not be able to help with all of those goals.

I would really recommend you go read that paper. The authors describe different instructional choices different instructors made, for example how they interact with students during the clicker questions. Did they leave the stage? Did they answer student questions? Did they discuss with students? (And yes, answering questions and discussing with students are not necessarily the same thing!) Even though there is no single best practice for using clickers, it is definitely beneficial to reflect on different kinds of practice, or to at least become aware that there ARE different kinds of practice. Plenty to think about!

# How to ask multiple-choice questions when specifically wanting to test knowledge, comprehension or application

Multiple choice questions at different levels of Bloom’s taxonomy.

Let’s assume you are convinced that using ABCD-cards or clickers in your teaching is a good idea. But now you want to tailor your questions to specifically test, for example, knowledge, comprehension, application, analysis, synthesis or evaluation: the six educational goals described in Bloom’s taxonomy. How do you do that?

I was recently reading a paper on “the memorial consequences of multiple-choice testing” by Marsh et al. (2007), and while the focus of that paper is clearly elsewhere, they give a very nice example of one question tailored once to test knowledge (Bloom level 1) and once to test application (Bloom level 3).

For testing knowledge, they describe asking “What biological term describes an organism’s slow adjustment to new conditions?”. They give four possible answers: acclimation, gravitation, maturation, and migration. Then for testing application, they would ask “What biological term describes fish slowly adjusting to water temperature in a new tank?” and the possible answers for this question are the same as for the first question.

Even if you are not as struck by the beauty of this example as I was, you will surely appreciate that it sent me on a literature search for examples of how Bloom’s taxonomy can help design multiple-choice questions. And indeed I found a great resource. Unfortunately I haven’t been able to track down the whole paper, but “Appendix C: MCQs and Bloom’s Taxonomy” of “Designing and Managing MCQs” by Carneson, Delpierre and Masters contains a wealth of examples. Rather than just repeating their examples, I am giving you my own, inspired by theirs*. But theirs are certainly worth reading, too!

Bloom level 1: Knowledge

At this level, all that is asked is that students recall knowledge.

Example 1.1

Which of the following persons first explained the phenomenon of “westward intensification”?

1. Sverdrup
2. Munk
3. Nansen
4. Stommel
5. Coriolis

Example 1.2

In oceanography, which one of the following definitions describes the term “thermocline”?

1. An oceanographic region where a strong temperature change occurs
2. The depth range where temperature changes rapidly
3. The depth range where density changes rapidly
4. An isoline of constant temperature

Example 1.3

Molecular diffusivities depend on the property or substance being diffused. From high to low molecular diffusivity, which of the sequences below is correct?

1. Temperature > salt > sugar
2. Sugar > salt > temperature
3. Temperature > salt = sugar
4. Temperature > sugar > salt

Bloom level 2: Comprehension

At this level, understanding of knowledge is tested.

Example 2.1

Which of the following describes what an ADCP measures?

1. How quickly a sound signal is reflected by plankton in sea water
2. How the frequency of a reflected sound signal changes
3. How fast water is moving relative to the instrument
4. How the sound speed changes with depth in sea water

Bloom level 3: Application

Knowledge and comprehension are assumed; now it is about testing whether the knowledge can also be applied.

Example 3.1

What velocity will a shallow water wave have in 2.5 m deep water?

1. 1 m/s
2. 2 m/s
3. 5 m/s
4. 10 m/s
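For Example 3.1, the application being tested is the standard shallow-water phase speed c = √(g·H). A minimal sketch of the calculation students are expected to do (the function name is my own):

```python
import math

def shallow_water_wave_speed(depth_m, g=9.81):
    """Phase speed of a shallow-water wave: c = sqrt(g * H)."""
    return math.sqrt(g * depth_m)

# For 2.5 m deep water: sqrt(9.81 * 2.5) is roughly 4.95 m/s,
# so the closest option is 5 m/s.
print(shallow_water_wave_speed(2.5))
```

Note how the distractors (1, 2 and 10 m/s) catch students who, for example, forget the square root or mix up g and H.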

Example 3.2

Which instrument would you use to make measurements with if you wanted to calculate the volume transport of a current across a ridge?

1. CTD
2. ADCP
3. ARGO float
4. Winkler titrator

These were only the first three Bloom levels, but this post is long enough already, so I’ll stop here for now and get back to you with the others later.

Can you see using the Bloom taxonomy as a tool you would use when preparing multiple-choice questions?

If you are reading this post and think that it is helpful for your own teaching, I’d appreciate it if you dropped me a quick line; this post specifically was actually more work than play to write. But if you find it helpful I’d be more than happy to continue with this kind of content. Just lemme know! :-)

* If these questions were used in class rather than as a way of testing, they should additionally contain the option “I don’t know”. Giving that option avoids wild guessing and gives you a clearer feedback on whether or not students know (or think they know) the answer. Makes the data a whole lot easier to interpret for you!

# Clickers

Remember my ABCD voting cards? Here is how the professionals do audience response.

Remember my post on ABCD voting cards (post 1, 2, 3 on the topic)?

Back then I introduced them as “low-tech clickers”. Never having worked with actual clickers at that point, I really, really liked the method. And I still think it’s a neat way of including and activating a larger group if you don’t have clickers available. But now that I have worked with actual clickers, I really can’t imagine going back to the paper version.

So what makes clickers that much better than voting cards?

Firstly – students are truly anonymous. With voting cards nobody but the instructor sees what students pick. But having the instructor see what you pick is still a big hurdle. And to be honest – as the instructor, you do tend to remember which students typically give the correct answers, so it is completely fair that students hesitate to vote with voting cards.

Secondly – even though you as the instructor tend to get a visual impression of what the distribution of answers looked like, this is only a visual impression. The clicker software, however, keeps track of all the answers, so you can go back after your lecture and check the distributions. You can even go back a year later and compare cohorts. No such thing is possible with the voting cards unless you put in a huge effort and a lot of time.
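The record-keeping advantage can be pictured with a toy sketch: once the raw votes are stored per cohort, computing and comparing distributions is trivial, which is hopeless with paper cards. The vote lists below are invented for illustration, with "E" standing in for the "I don't know" option:

```python
from collections import Counter

# Invented vote logs for the same clicker question in two cohorts.
votes_2013 = ["A", "C", "C", "B", "C", "E", "C", "A"]
votes_2014 = ["C", "C", "A", "C", "C", "C", "B", "E"]

def distribution(votes):
    """Fraction of votes per option, like the bar chart clicker software shows."""
    counts = Counter(votes)
    return {option: counts[option] / len(votes) for option in sorted(counts)}

print(distribution(votes_2013))  # {'A': 0.25, 'B': 0.125, 'C': 0.5, 'E': 0.125}
print(distribution(votes_2014))
```

With voting cards, all of this information evaporates the moment the cards go down.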

Third – the distribution can be visualized in real time for the students to see. While with the voting cards I always tried to tell the students what I saw, this is not the same thing as seeing a bar diagram pop up and seeing that you are one out of two students who picked this one option.

If you read German – go here for inspiration. My colleague is great with all things clicker and I have learned so much from him! Most importantly (and I wish I had known this back when I used the voting cards): ALWAYS INCLUDE THE “I DON’T KNOW” OPTION. Especially when you make students pick an answer (as I used to do) – if you don’t give them the “I don’t know” option, all you do is force them to guess, and that can really skew your distribution, as I recently found out. But more about that later…

P.S.: If I convinced you and you are now looking for alternatives to paper voting cards but can’t afford to buy a clicker system – don’t despair. I might write a post about alternative solutions at some point, but if you want to get a couple of pointers before that post is up, just shoot me an email…