## Let’s do the wave! Longitudinal or transverse?

A simple visualization of two types of waves.

The FIFA World Cup has been over for a while now, but I still need to share an idea I had watching one of the games when the audience got bored and started doing a wave around the stadium: this would be a great in-class demonstration of how waves do not transport matter! I usually show demos of waves travelling on ropes, but this could be much more fun – to see the shape of the wave travelling when clearly the students are not moving away from their spots.

Depending on how easy it is to calm that particular class down again you might even consider letting them do a longitudinal wave, too.

Have fun and let me know how it goes!

## How to make demos useful in teaching

Showing demonstrations might not be as effective as you think.

Since I was talking about the figures I bring with me to consultations yesterday, I thought I’d share another one with you. This one is about the effectiveness of showing demonstrations in the classroom.

As you might have noticed following this blog, I’m all for classroom demonstrations. In fact, my fondness for all those experiments is what led me to start this blog in the first place. But one question we should be asking ourselves is for what purpose we are using experiments in class: “Classroom demonstrations: Learning tools or entertainment?”. The answer is given in this 2004 article by Crouch et al., and it is one that should determine how exactly we use classroom demonstrations.

The study compares four student groups: a group that watched the demonstration, a group that was asked to predict the outcome and then watched the demonstration, a group that was asked to predict the outcome, watched the demonstration and then discussed it with their peers, and a control group that did not see the demonstration. All groups were given explanations by the instructor.

So how much did the groups that saw the demonstration learn compared to the control group? Interestingly, this varied between groups. Tested at the end of the semester without mentioning that a similar situation had been shown in class: for the outcome, watching the demonstration led to a learning gain* of 0.15, predicting and then watching led to a learning gain of 0.26, and predicting, watching and discussing led to a learning gain of 0.34. For a correct explanation, this is even more interesting: watching the demonstration only led to a learning gain of 0.09, predicting and watching to 0.36, and predicting, watching and discussing to 0.45.

So passively showing demonstrations without discussion is basically useless, whereas demonstrations that are accompanied by prediction and/or discussion lead to considerable learning gains, especially when it comes to not only remembering the correct outcome, but also the explanation. Which ties in with this post on the importance of reflection in learning.

Interestingly, in that study the time investment that led to the higher learning gains is small – just two extra minutes for the group that made the predictions, and eight minutes for the group that made the predictions and then discussed the experiment at the end.

Since you are reading my blog I’ll assume that you don’t need to be convinced to show demonstrations in your teaching – but don’t these numbers convince you to not just show the demonstrations, but to tie them in by making students reflect on what they think will happen and then on why it did or did not happen? Assuming we are showing demonstrations as learning tools rather than (ok, in addition to) as entertainment – shouldn’t we be making sure we are doing it right?

* The learning gain is calculated as the difference between the correct-response rate of the respective group and that of the control group, divided by the correct-response rate of the control group: (R − R_control)/R_control. For the actual numbers, please refer to the original article.
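The footnote's formula is simple enough to sketch in a few lines of code. The numbers below are purely hypothetical, just to illustrate how a gain of 0.15 could arise – for the real response rates, see Crouch et al. (2004):

```python
def learning_gain(r_group, r_control):
    """Learning gain as defined in the footnote: (R - R_control) / R_control."""
    return (r_group - r_control) / r_control

# Hypothetical illustration (NOT the actual numbers from the article):
# if 46% of a demonstration group and 40% of the control group give the
# correct outcome, the learning gain is (0.46 - 0.40) / 0.40 = 0.15.
gain = learning_gain(0.46, 0.40)
print(round(gain, 2))  # 0.15
```

Note that this is a relative measure: the same absolute difference in correct responses yields a larger gain when the control group's rate is lower.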

## Should we ask or should we tell?

Article by Freeman et al., 2014, “Active learning increases student performance in science, engineering, and mathematics”.

Following up on the difficulties in asking good questions described yesterday, I’m today presenting an article on the topic “should we ask or should we tell?”.

Spoiler alert – the title says it all: “Active learning increases student performance in science, engineering, and mathematics”. Nevertheless, the recent PNAS article by Freeman et al. (2014) is really worth a read.

In their study, Freeman and colleagues meta-analyzed 225 studies that compared student learning outcomes across science, technology, engineering and mathematics (STEM) disciplines depending on whether students were taught through lectures or through active learning formats. On average, examination scores increased by 6% under active learning scenarios, and students in classes with traditional lecture formats were 1.5 times more likely to fail than those in active learning classes.

These results hold across all STEM disciplines and all class sizes, although the effect seems largest for classes with fewer than 50 students. Active learning also seems to have a bigger effect on concept inventories than on traditional examinations.

One interesting point the authors raise in their discussion is whether for future research, traditional lecturing is still a good control, or whether active learning formats should be directly compared to each other.

Also, the impact of instructor behavior and of the amount of time spent on “active learning” are really interesting future research topics. In this study, even lectures that devoted as little as 10–15% of their time to clicker questions counted as “active”, and even a small – and doable – change like that has a measurable effect.

I’m really happy I came across this study: a really big data set (important at my workplace!), rigorous analysis (always important, of course), and especially Figure 1, which is a great basis for discussion about the importance of active learning formats. It will go straight into the collection of slides I bring whenever I go into a consultation.

Check out the study, it is really worth a read!

## Asking better questions

How do you ask questions that really make students think, and ultimately understand?

I’ve only been working at a center for teaching and learning for half a year, but still my thinking about teaching has completely transformed, and still is transforming. Which is actually really exciting! :-) This morning, prompted by Maryellen Weimer’s post on “the art of asking questions”, I’m musing about what kind of questions I have been asking, and why. And how I could be asking better questions. And for some reason, the word “thermocline” keeps popping up in my thoughts.

What a thermocline is, is one of the important definitions students typically have to learn in their intro to oceanography – as are the different ways in which the term is used: as the depth range where temperatures change quickly from warm surface waters to cold deep waters; more generally, as the layer with the highest vertical temperature gradient; or as seasonal and permanent thermoclines, to name but a few.

I have asked lots of questions about thermoclines, during lectures, in homework assignments, and in exams. But most of my questions were of the “define the word thermocline”, “point out the thermocline in the given temperature profile”, “is this a thermocline or an isotherm” kind – which are fine on an exam, maybe, but not of a kind that would be really conducive to student learning. I’ve always found that students struggled a lot with learning the term thermocline and all the connected ones like isotherm, halocline, isohaline, pycnocline, isopycnal, etc. But maybe that was because I haven’t been asking the right questions? For example, instead of showing a typical pole-to-pole temperature section and pointing out the warm surface layer, the thermocline, and the deep layer*, maybe showing a less simplified section and having the students come up with their own classification of layers would be more helpful? Or asking why defining something like a thermocline might be useful for oceanographers, hence motivating why it might be useful to learn what we mean by it.

A second piece of advice from that post that I really liked is “don’t ask open-ended questions if you know the answer you’re looking for”. Because what happens when you do that is, as we’ve probably all experienced, that we cannot really accept any answer that doesn’t match the one we were looking for. Students of course notice, and will start guessing what answer we were looking for rather than thinking deeply about the question. This is actually a problem with the approach I suggested above: when asking students to come up with classifications of oceanic layers from a temperature section – what if they come up with something brilliant that unfortunately does not converge on the classical “warm upper layer, thermocline, cold deep layer” classification? Do we say “that’s brilliant, let’s rewrite all the textbooks” or “mmmh, nice, but this is how it’s been done traditionally”? Or what would you say?

And then there is the point that I get confronted with all the time at work; that “thermocline” is a very simple and very basic term, one that one needs to learn in order to be able to discuss more advanced concepts. So if we spent so much of our class time on this one term, would we ever get to teach the more complicated, and more interesting, stuff? One could argue that unless students have a good handle on basic terminology there is no point in teaching more advanced content anyway. Or that students really only bother learning the basic stuff when they see its relevance for the more advanced stuff. And I actually think there is some truth to both arguments.

So where do we go from here? Any ideas?

* How typical a plot that is to show in an introduction to oceanography is, coincidentally, also visible in the header of this blog. When I made the images for the header, I just drew whatever drawings I had made repeatedly on the blackboard recently and called it a day. That specific drawing I have made more times than I care to remember…

## Clickers

Remember my ABCD voting cards? Here is how the professionals do audience response.

Remember my post on ABCD voting cards (post 1, 2, 3 on the topic)?

I then introduced them as “low-tech clickers”. Having never worked with actual clickers at the time, I really, really liked the method. And I still think it’s a neat way of including and activating a larger group if you don’t have clickers available. But now that I have worked with actual clickers, I really can’t imagine going back to the paper version.

So what makes clickers that much better than voting cards?

Firstly – students are truly anonymous. With voting cards, nobody but the instructor sees what students picked. But having the instructor see your pick is still a big barrier. And to be honest – as the instructor, you do tend to remember where in the room the correct answers typically come from, so it is totally fair that students hesitate to vote with voting cards.

Secondly – even though you as the instructor tend to get a visual impression of what the distribution of answers looked like, this is only a visual impression. The clicker software, however, keeps track of all the answers, so you can go back after your lecture and check the distributions. You can even go back a year later and compare cohorts. No such thing is possible with the voting cards unless you put in a huge effort and a lot of time.

Thirdly – the distribution can be visualized in real time for the students to see. While with the voting cards I always tried to tell the students what I saw, this is not the same thing as seeing a bar chart pop up and realizing that you are one of only two students who picked a particular option.

If you read German – go here for inspiration. My colleague is great with all things clicker and I have learned so much from him! Most importantly (and I wish I had known this back when I used the voting cards): ALWAYS INCLUDE THE “I DON’T KNOW” OPTION. Especially when you make students pick an answer (as I used to do) – if you don’t give them the “I don’t know” option, all you do is force them to guess, and that can really screw up your distribution, as I recently found out. But more about that later…

P.S.: If I convinced you and you are now looking for alternatives to paper voting cards but can’t afford to buy a clicker system – don’t despair. I might write a post about alternative solutions at some point, but if you want to get a couple of pointers before that post is up, just shoot me an email…

## Introducing voting cards (post 3/3)

How do you introduce voting cards as a new method in a way that minimizes student resistance?

As with all new methods, voting cards (see post on the method here, and on what kind of questions to ask here) seem scary at first. After all, students don’t know what will happen if they happen to choose the wrong answer. Will they be called out on it by the instructor? Will everybody point at them and laugh? And even if they choose the correct answer, will the instructor make them explain why they chose it?

When I introduce voting cards to a new group of students, I make sure to talk through all issues before actually using the cards. It is important to reassure the students that wrong answers will not be pointed out publicly, for example. It helps to use a very simple question that does not have right or wrong answers (“Which of these four colors is your favorite? Show me the one you like best!”) for the very first vote, so students get to experience the process without there being anything at stake. While showing their favorite color, they see that they cannot actually see their neighbors’ choices without making it very obvious (at least not in the classical lecture theatre setting that we are in, but even in other settings it is difficult). Hence their peers cannot actually see their own choice, either, without again making it very obvious.

During the first classes, voting usually looks as pictured above: very close to the chest, held with both hands, shielded from the neighbors.

Still there is probably going to be some resistance about committing to one answer because, after all, the instructor will still see it. But in my experience this can be overcome when the reasons for choosing the method are made sufficiently clear – that it benefits them to commit to one answer, because making thought processes explicit helps their learning. That it helps me, because I get a better feel of whether everybody understood a concept or only just the two vocal students, and whether I need to go into more detail with a concept or not. That it is a great basis for discussions.

After a couple of classes, voting cards are not even needed any more (although it can’t hurt to hand them out – it feels like less pressure if you could fall back on holding something up rather than speaking in public); discussion starts without having to be initiated through a voting process and subsequent questions for clarification. Also, if they choose to still vote, students get much more daring in the way they hold up the cards – they stop caring about whether their peers can see what they voted for. So all in all a great technique to engage students.

## How to pose questions for voting card concept tests (post 2/3)

Different ways of posing questions for concept tests are presented here.

Concept tests using voting cards have been presented in this post. Here, I want to talk about different types of questions that one could imagine using for this method.

1) Classical multiple choice

In the classical multiple choice version, four different answers are given for each question, only one of which is correct. This is the tried and tested method that is often pretty boring.

However, even this kind of question can lead to good discussions, for example when it is introducing a new concept rather than just testing an old one. In this case, we had talked about different kinds of plate boundaries during the lecture, but not about the frame of reference in which the movement of plates is described. So what seemed to be a really confusing question at first was used to initiate a discussion that went into a lot more depth than either the textbook or the lecture, simply because students kept asking questions.

2) More than one correct answer

A twist on the classical multiple choice is a question for which more than one correct answer is given, without explicitly mentioning that fact in the question. In a way, this is tricking the students a bit, because they are used to there being only one correct answer. For that reason they are used to not even reading all the answers once they have come across one that they know is correct. Giving several correct answers is a good way of initiating a discussion in class if different people chose different answers and are sure that their answers are correct. Students who have already gained some experience with the method often have the confidence to speak up during the “voting” and say they think that more than one answer is correct.

3) No correct answer

This is a bit mean, I know. But again, the point of doing these concept tests is not that the students name one correct answer, but that they have thought about a concept enough to be able to answer questions about the topic correctly – and sometimes that includes having the confidence to say that all answers are wrong. And it seems to be very satisfying to students when they can argue that none of the answers the instructor suggested were correct! Even better when they can propose a correct answer themselves.

4) Problems that aren’t well posed

This is my favorite type of question that usually leads to the best discussions. Not only do students have to figure out that the question isn’t well posed, but additionally we can now discuss which information is missing in order to answer the question. Then we can answer the questions for different sets of variables.

For example, for the question in the figure above, each of the answers could be correct during certain times of the year. During summer, the temperature near the surface is likely to be higher than that near the bottom of the lake (A). During winter, the opposite is likely the case (B). During short times of the year it is even possible that the temperature of the lake is homogeneous (C). And, since the density maximum of fresh water occurs at 4°C, the bottom temperature of a lake is often, but not inevitably, 4°C (D). If students can discuss this, chances are pretty high that they have understood the density maximum in freshwater and its influence on the temperature stratification in lakes.
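The density maximum near 4°C that this question hinges on can be illustrated numerically. The quadratic fit below is a rough, illustrative approximation (not an actual equation of state for water) – it only captures the shape of the curve, with density peaking near 4°C and falling off on either side:

```python
def freshwater_density(temp_c):
    """Very rough quadratic approximation of fresh water density (kg/m^3).

    Illustrative fit only, NOT a real equation of state: density peaks
    near 4 degC and decreases both towards 0 degC and towards warmer water.
    """
    return 999.97 - 0.0070 * (temp_c - 4.0) ** 2

# Scan 0..30 degC in half-degree steps and find the temperature of
# maximum density:
temps = [t * 0.5 for t in range(61)]  # 0.0, 0.5, ..., 30.0
t_max = max(temps, key=freshwater_density)
print(t_max)  # 4.0
```

Because the densest water sinks, this is exactly why lake-bottom water often sits near 4°C: water both colder and warmer than that is lighter and floats above it.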

5) Answers that are correct but don’t match the question

This is a tricky one. If the answers are correct in themselves but don’t match the question, it sometimes takes a lot of discussing until everybody agrees that it doesn’t matter how correct a statement is in itself; if it isn’t addressing the point in question, it is not a valid answer. This can now be used to find valid answers to the question, or valid questions to the provided answers, or both.

This is post no 2 in a series of 3. Post no 1 introduced the method to the readers of this blog, post no 3 is about how to introduce the methods to the students you are working with.

## A, B, C or D?

Voting cards. A low-tech concept test tool, enhancing student engagement and participation. (Post 1/3)

Voting cards are a tool that I learned about from Al Trujillo at the workshop “teaching oceanography” in San Francisco in 2013. Basically, voting cards are a low-tech clicker version: A sheet of paper is divided into four quarters, each quarter in a different color and marked with big letters A, B, C and D (pdf here). The sheet is folded such that only one quarter is visible at a time.

A question is posed and four answers are suggested. The students are now asked to vote by holding up the folded sheet close to their chest so that the instructor sees which of the answers they chose, whereas their peers don’t.