Category Archives: method

Taking ownership of your own mentoring

Have you ever had questions about your career development but didn’t know whom to ask? Or have you ever felt that you would probably profit from having a mentor, but didn’t know who that mentor could be? Or do you have a great mentor but wonder whether you might be relying too heavily on him or her? Then this post is for you!

(This post, and the article referenced at the bottom, are heavily inspired by the work of Kerry Ann Rockquemore, especially this post, and workshops she gave for the Earth Science Women’s Network.)

So. Let’s get started. Do you even know what your current mentoring needs are? In the image below we suggest different kinds of mentoring needs that you will probably all encounter throughout your career, hopefully not all at the same time.

It is really helpful to try and identify, for each of those fields, a person who might be able to help. If you fill out the blank spaces in the graphic below now, before you urgently need someone to fill a specific role, it will be very valuable once the time comes!

Mentoring_map_01

A “mentoring map” to help you identify your mentoring needs as well as who might be able to fill those needs.

If you aren’t quite sure what each of the fields above contains, the image below might give you ideas:

Mentoring_map_02

Mentoring map. What exactly are your mentoring needs?

And now that you know what your needs are, how do you actually identify possible mentors for each category? We give some ideas in the image below!

Mentoring_map_03

Mentoring map and where to find possible mentors for the different mentoring needs

Do you feel like you are taking unfair advantage of your mentors? Then maybe think about paying it forward. Be a sponsor to the student who stands out in your class and recommend her for a scholarship. Be the safe space your friend needs. Give substantial feedback on your office mate’s paper. Even if you feel you are nowhere near ready to “be someone’s mentor”, that is probably not true. Give back when the opportunity arises, and don’t feel bad about asking for the mentoring you need!

For more details, check out our article:

Glessmer, M.S., A. Adams, M.G. Hastings, R.T. Barnes, Taking ownership of your own mentoring: Lessons learned from participating in the Earth Science Women’s Network, published in The Mentoring Continuum: From Graduate School Through Tenure, Syracuse University Graduate School Press, ed. Glenn Wright, 2015.

Pdf of the chapter here.

P.S.: This text originally appeared on my website as a page. Due to upcoming restructuring of this website, I am reposting it as a blog post. This is the original version last modified on November 4th, 2015.

I might write things differently if I was writing them now, but I still like to keep my blog as archive of my thoughts.

Enabling backchannel communication between a lecturer and a large group

Using technology to enable active engagement with content in a large lecture.

In 2014, I presented the paper “Enabling backchannel communication between a lecturer and a large group” at the SEFI conference in Birmingham. That paper is based on work that I have done with two colleagues – the instructor of a large lecture, and the teaching assistant at the time.

Now if oceanographers hear something about “large lectures”, they typically envision a couple dozen students. In this case, it was a couple of hundred students in a lecture theatre that seats about 700.

The challenge

When sitting in on the class the year before, I noticed that there were a lot of questions that students were discussing around me that never made it to the instructor’s attention. This is not very surprising given the large number of students and that there were only two instructors in the room. But when talking about it afterwards, we decided that we wanted to find a way to channel student questions to make sure they reached the instructor. The “backchannel” was born.

We met up to discuss our options. It became clear very quickly that even though there are a lot of nice methods out there to invite feedback of the sort we wanted (for example through “muddiest point” feedback), this was not feasible with the number of students we were dealing with. So instead we decided to go for an online solution.

Twitter has been promoted for use in instruction for a while, and there are many other tools that enable backchannel communication. But we realized that we had very specific requirements that no existing tool met simultaneously:

  • anonymous communication, to keep the threshold as low as possible
  • no special hardware or software requirements
  • easy to use
  • communication from student to instructor, but not student to student
  • possibility of moderation

The solution

In the end, Patrick coded a “backchannel” tool that could do all that. On a webpage, students enter text in a text field. They click a button to submit the text, and a moderator then, in real time, decides whether to forward the text to the instructor. The instructor then gets the text on a screen and can decide whether and when to incorporate it in their teaching.

We’ve found that this works really well from an operational point of view. The instructor has been really happy with the quality of questions he has been getting, and sometimes students even send links that they think should be shared with the class.
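For readers curious about the mechanics: the moderated workflow can be sketched in a few lines. This is my own toy reconstruction, not the actual tool; the `Backchannel` class and its method names are invented for illustration.

```python
from collections import deque

class Backchannel:
    """Toy sketch of a moderated backchannel: students submit
    anonymously, a moderator approves or rejects in real time, and
    approved messages are queued for the instructor's screen."""

    def __init__(self):
        self.pending = deque()    # submitted, awaiting moderation
        self.approved = deque()   # cleared for the instructor

    def submit(self, text):
        # No user ID is stored, so submissions stay anonymous.
        self.pending.append(text)

    def moderate(self, forward):
        # The moderator decides for each message whether to forward it.
        while self.pending:
            message = self.pending.popleft()
            if forward(message):
                self.approved.append(message)

    def next_for_instructor(self):
        # The instructor pulls messages when it suits the lecture.
        return self.approved.popleft() if self.approved else None

channel = Backchannel()
channel.submit("Could you repeat the last derivation?")
channel.submit("lol")
channel.moderate(lambda msg: len(msg) > 5)  # trivial stand-in moderation rule
print(channel.next_for_instructor())  # → Could you repeat the last derivation?
```

The key design point is the separation of the two queues: nothing reaches the instructor’s screen without passing through the moderator first.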

Students seem to like it, too, even though they aren’t engaging with the tool as much as we had anticipated. There are a couple of reasons for that, which we discuss in our paper. Ultimately, we liked the tool enough to continue using it this year. The new semester has just started, so let’s see how it goes!

Thanks to my co-authors for a very interesting and enjoyable collaboration!

Enabling backchannel communication between a lecturer and a large group
M.S. Glessmer, M.-A. Pick and P. Göttsch
In Proceedings of the 42nd SEFI Conference. Birmingham, UK (2014)
http://www.sefi.be/conference-2014/0101.pdf

P.S.: This text originally appeared on my website as a page. Due to upcoming restructuring of this website, I am reposting it as a blog post. This is the original version last modified on November 4th, 2015.

I might write things differently if I was writing them now, but I still like to keep my blog as archive of my thoughts.

Outreach activity: How do we make climate predictions?

This text was written for GeoEd, the education column of EGU’s blog, and first appeared there on Nov 27th, 2015.

In my second year studying physical oceanography, I got a student job in an ocean modelling group. When I excitedly told my friends and family about said job, most of them did not have the slightest idea what I might be doing. Aside from the obvious and oh-so-funny “you are a model now?!”, another common reaction was “modelling – with clay?” and the picture in those people’s head was that of an ocean model resembling the landscape in a miniature train set, except under water. And while there are many groups seeking to understand the ocean by using simplified versions of the ocean or ocean regions, simplified geometries, selected forcings acting on it, etc – this is not the kind of model I was supposed to be working with.

Talking about climate models with the general public

Explaining to a lay audience what a climate model is, is a daunting task. We have all seen the images of a region divided into smaller and smaller squares as a visualization of the boxes that form the grid on which a set of differential equations is solved, yielding a solution for each box (see Figure 1). But do we really expect everybody who sees this to grasp how it might help us understand climate if they don’t have the background to know what a differential equation is, let alone how it has been discretised and programmed and is now being solved? In my experience it is very difficult to keep people interested and captivated using this approach and, unless they already have a pretty solid background, it is unlikely they will actively engage with the topic and ask clarifying questions.

Image01_cropped

Figure 1: Modelled sea surface temperature of the ocean off Mauritania, North-West Africa. Depending on the model resolution, smaller and smaller features in the sea surface temperature are resolved by the model. Still, even the most complex model is still nowhere near as complex as reality.
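For readers who do want a peek under the hood, the “boxes” idea can be illustrated with a toy one-dimensional diffusion model, where the temperature in each box is updated from its neighbours. This is only a cartoon of the principle, not a piece of any real ocean model, and the grid size and diffusivity are arbitrary.

```python
# Toy 1-D "ocean": temperature diffusing between grid boxes.
# A real model solves far more equations on far more boxes, but the
# principle -- update each box from its neighbours every time step -- is the same.

def diffuse(temps, kappa=0.1, steps=100):
    """Explicit finite-difference diffusion on a 1-D grid
    (dT/dt = kappa * d2T/dx2, grid spacing and time step set to 1)."""
    t = list(temps)
    for _ in range(steps):
        new = t[:]
        for i in range(1, len(t) - 1):
            # Each interior box exchanges heat with its two neighbours.
            new[i] = t[i] + kappa * (t[i + 1] - 2 * t[i] + t[i - 1])
        t = new
    return t

# A warm anomaly in the middle of a cold "ocean" smooths out over time.
initial = [0.0] * 5 + [10.0] + [0.0] * 5
final = diffuse(initial)
print(final)
```

Even this cartoon shows the essential trade-off mentioned in the figure caption: more boxes resolve finer features, but the model is always a discretised stand-in for the continuous reality.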

A new approach: Let them experience the process of building a model!

I therefore suggest we use a different approach. Instead of concentrating on explaining the mechanics of an ocean model, let us focus on letting people experience the idea behind it by using a “mystery tube” to represent the climate (or whatever process we want to model) and have the audience build their own “models”.

The mystery tube is all over the internet. I have not been able to find the original source, but let’s look at what it is:

Basically, we have a tube that is closed off at the top and at the bottom (See Figure 2). Four pieces of string come out of it. When you pull one out, another one gets pulled into the tube. So far, so good. But the pattern of which string gets pulled in when another one gets pulled out suggests that there is something more going on inside the tube than just two pieces of string going in on one side and coming out at the other. So, how do we figure out what is going on? Some of you may have already seen a possible solution to the problem. Others might find one as soon as they’ve gotten their hands on a mystery tube and pulled on the strings a couple of times. Others might need their own tube and pieces of string to play around with before they are reasonably confident that they have an idea of how the mystery tube works.

Image02

Figure 2: A very non-fancy mystery tube: A paper kitchen towel roll with two pieces of curly ribbon going through. But what goes on inside? Still a mystery!

If you were to use mystery tubes in outreach (or with your friends and family, or – always a hit – with your colleagues), it is in fact a good idea to have a couple of “blank” tubes and pieces of string ready and let everyone have a go at building their own mystery tube that reproduces the functionality of the original one. Ideally, as you will see below, you would have more than just the bare necessities ready and also offer flat washers, springs, paper clips or any other distracting material that might or might not be inside the mystery tube.

Why offer “distractor” materials? Because we are trying to understand how people come up with climate models, remember? The original mystery tube represents the process we want to model. We do not know for sure all the important components of that process, and therefore do not know what needs to be included in the model, either.

— SPOILER BELOW! If you want to solve the mystery tube mystery yourself, do not read on! —

Now, in the instructions on the internet, the two pieces of string are connected inside the tube by a ring through which they are both fed. When I first built my own mystery tube, I was too lazy to search for a ring to connect the pieces of string, so I just crossed the two threads over. After all, the ring wouldn’t be visible in the final product, and the function would remain the same anyway!

From empty cardboard kitchen towel rolls to climate models

Which brings me to the main point of this blog post, first made by my friend and fellow outreach enthusiast Dr. Kristin Richter (http://kristinrichter.info, currently University of Innsbruck, Austria), who is always my first stop when I want to bounce ideas for demonstrations or experiments off someone: This is exactly why modelling climate is so difficult! We can build a perfectly working mystery tube, but unless we cut open the original one, we will never know whether our solution is the same as the one in the original mystery tube, i.e. whether there is a ring inside, or a paper clip, or whether the two pieces of string are just crossed.

You might argue we could find out what is inside the original mystery tube by other means, for example by shaking it and listening for rattling, by weighing it, or by many other methods. Yet, can we ever be sure we know exactly what is inside? And more importantly, would we even think of shaking or weighing the mystery tube if we weren’t specifically looking for what connects the two pieces of string? And are we really sure we are reproducing the full functionality of the original mystery tube? Maybe the original ring has a blade on the inside, so after a certain number of experiments one of the strings will be cut? Or maybe there is something else inside that will happen eventually, but that we cannot yet predict because our mystery tube, while reproducing what we observed from the original tube, just does not include that element.

The same goes for climate models, of course. We can reproduce what we observe reasonably well. Assuming we know of all “parts” of the climate and how they work together, we can make a prediction. But the climate is a lot more complex than a mystery tube. Of course, climate models are based on physical principles and laws and not just best fits to observations. Yet, in many places decisions have to be made for or against including details, or for representing them by one parameterisation and not another.

Can we ever know for sure what the future will bring?

So does that mean we should give up on making models of the climate because, while we might be able to reproduce the status quo, prediction is impossible? Absolutely not! But we need to be aware of the possibility of feedback mechanisms that might become important once a threshold has been crossed, or of tipping points (like when a hypothetical blade inside the ring will have cut through one of the pieces of string). If we are aware that there might be more to the mystery tube than just the pattern of string movements we observed at the beginning of this post, we can watch out for signs of other components: listening intently to the noise the string makes when gliding through the tube, listening for rattling when we shake it, or monitoring the strings for wear that indicates a hidden sharp edge somewhere.

And the same obviously goes for climate. We need to monitor all observations and look closely at any deviation of the observations from our model. We need to come up with ideas for processes that might become important under different conditions and look out for signs that they might already be starting to occur. We need to be aware that processes we haven’t seen evidence for yet might still be important in a different parameter range.

Once we have gone through all this with our audience, I bet they have a better idea of what a modeller does – even though they still might not have a clue what that means for the average day at work. But typically, people find the mystery tube intriguing, and you should definitely be prepared to answer a lot of questions about what your model does, how you know whether it is right, what processes are included and what are not, and voilà! We are talking about how to make climate predictions.

P.S.: This text originally appeared on my website as a page. Due to upcoming restructuring of this website, I am reposting it as a blog post. This is the original version last modified on November 27th, 2015.

I might write things differently if I was writing them now, but I still like to keep my blog as archive of my thoughts.

Four steps to great hands-on outreach experiences

Part 1 and 2 of this post were first posted on the EGU’s blog on Jan 29, 2016, and Feb 29, 2016, respectively.

Part 1 gives four steps to outreach activities, part 2 uses an example to further illustrate those four steps.

Part 1: For the best hands-on outreach experiences, just provide opportunities for playing!


“For the best hands-on outreach experiences, just provide opportunities for playing!” I claim. “Seriously?” you wonder. We want to spark the public’s curiosity about geosciences, engage the public in thinking about topics as important as sea level rise or ocean acidification, and provide learning experiences that will enable them to take responsibility for difficult decisions. And you say we should just provide opportunities for them to play?

Yes. Hear me out. Playing does not necessarily equal mindlessly killing time. Kids learn a lot by playing, and even grown-ups do. But if you prefer, we can use the term “serious play” instead of just “play”. The term “serious play” makes it clear that we are talking about “improvising with the unanticipated in ways that create new value”, which is exactly what outreach should be doing: getting people intrigued and wanting to understand more about your topic.

So how would we go about creating outreach activities that give the public the opportunity to play, in order to lure them into being fascinated by our field of science? There are several steps I recommend we take.

  1. Identify the topic nearest and dearest to your heart

Even if your aim is to educate the public about climate change or some other big-picture topic, pick the one element that fascinates you most. If you are really fascinated by what you are showing, chances are that your excitement will carry over to your audience. Plus, once you have this really great activity, you will likely be asked to repeat it many times, so you had better pick one that you love!

Me, I am a physical oceanographer. I care about motion in the ocean: Why and how it happens. Consequently, all of my outreach activities have people playing with water. Sometimes at different temperatures, sometimes at different salinities, sometimes frozen, sometimes with wind, but always with water.

  2. Find an intriguing question to ask

Questions that intrigue me are, for example, “do ice cubes melt faster in fresh water or in salt water?”, “how differently will ice look when I freeze salt water instead of fresh water?” or “what happens if a stratification is stable in temperature and unstable in salt?”. Of course, all these questions are related to scientific questions that I find interesting, but even without knowledge of all the science around them, they are cool questions. And they all instantly spark follow-up questions like “what would happen if the ice cubes weren’t floating, but moored to the ground?”, “what if I used sugar instead of salt?”, “wait, does the food dye influence what happens here?”. And all of those questions can be investigated right then and there. As soon as someone asks a question, you hand them the materials and let them find the answer themselves. That is why we talk about hands-on outreach activities and not demonstrations: It is about actively involving everybody in the exploration and wonder of doing scientific experiments!

  3. Test with family, friends and colleagues

Many, if not all, of the outreach activities I am using and promoting have been tested on family, friends and colleagues first. You know that you have found an intriguing question when your friends sacrifice the last of the red wine they brought to a Norwegian mountain cabin to use as a stand-in for food dye in an experiment you just told them about, because they absolutely have to see it for themselves!

By the way, this is a good goal for any outreach activity: keep it easy enough to be recreated at a mountain cabin, in your aunt’s kitchen, at the beach, or anywhere someone who saw or heard about it wants to show their friends. People might occasionally have to get a little creative to replace some of the materials, but that’s part of the charm, and of the inquiry we want!

  4. Bring all the materials you need, and have fun!

And then, finally, Just Do It! Bring all your materials and start playing and enjoying yourself!

But now they can play with water and dye. That doesn’t mean they understand my research!

True, by focussing on a tiny aspect you won’t get to explain the whole climate system. But you will probably change the mindset of your audience, at least a little bit. Remember, you studied for many years to reach the understanding you have now; it is not realistic to expect to convey all of that in one single outreach occasion. But by showing how difficult it is to understand even one tiny aspect (and how much there is still to discover), people will be a lot more likely to inquire further in the future, they will ask better questions (of themselves or of others), and they will be more open to learning about your science. Your activity is only the very first step. It’s the hook that will get them to talk to you, to become interested in what you have to say, to ask questions. And you can totally have backup materials ready to talk in more depth about your topic!

But what if it all goes horribly wrong during my activity?

The good thing is that since you are approaching the whole hands-on outreach as “get them to play!” rather than “show them in detail how the climate system works”, there really isn’t a lot that can go wrong. Yes, you can mess up, and the experiment can fail to show what you wanted to show. But every time that has happened to me, I could “save” the situation by engaging the participants in discussing how things could work better, similar to what Céline describes. People will continue to think about what went wrong and how to fix it, and will likely be even more intrigued than if everything had worked out perfectly.

But what if I am just not creative enough to come up with new ideas?

First, I bet once you start playing, you will come up with new ideas! But then of course, we don’t need to always create outreach activities from scratch. There are many awesome resources around. EGU has its own large collection in the teacher’s corner. And of course, Google (or any websearch of your choice) will find a lot. And if you are interested in outreach activities for physical oceanography specifically, you could always check out my blog “Adventures in Oceanography and Teaching”. I’m sure you’ll find the one activity that you will want to try yourself on a rainy Sunday afternoon. You will want to show your friends when they come over to visit, and you’ll tell your colleagues about it. And there you are – you found your outreach activity!


Part 2: One example of how playing works in outreach activities!


In part 1, I talked about hands-on outreach in very general terms, and identified four steps to great outreach. Today, I want to talk about those four steps in more detail, using one of my favourite outreach activities as an example.

Step 1. Identify the topic nearest and dearest to your heart

Me, I am a physical oceanographer. I care about motion in the ocean: Why and how it happens. Consequently, all of my outreach activities have to do with water. Sometimes at different temperatures, sometimes at different salinities, sometimes frozen, sometimes with wind, sometimes with ships, but always with water.

Today, let’s concentrate on thermohaline circulation as the topic we want to get people interested in. That sounds like a lot, so let’s break it down: we want to know how ocean circulation is influenced by both heat and salt. To boil this down to one short activity, let’s take away the ocean (and with it all the complicating influences of Earth’s rotation or the topography of ocean basins) and only look at what heat and salt do to water in a tank. In fact, let us focus on different temperatures first. The easiest way to do this is to introduce water of one temperature into a volume of water at a different temperature; that way we don’t have to deal with the heating or cooling processes themselves.

Introducing water can mean pouring it into the larger tank, which will lead to some kind of stratification (provided your temperatures are different enough). In order to see the stratification, it helps to have food dye in the water you are introducing (always put the dye in the smaller volume of water; it makes the contrast much easier to see!). To make things even more interesting, it might be nice to show two cases simultaneously: pouring hot water and cold water into a lukewarm tank. And since we see that the hot water forms a layer on top of the lukewarm water and the cold water sinks to the bottom, wouldn’t it be much more fun to introduce them both somewhere at medium height and see what happens?
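Why does the hot water end up on top and the cold water at the bottom? A back-of-the-envelope density estimate makes it plausible. The sketch below uses a simplified linear equation of state; the expansion coefficients are rough textbook values, not tuned to any particular experiment.

```python
def density(T, S, rho0=1000.0, T0=10.0, S0=0.0, alpha=2e-4, beta=8e-4):
    """Simplified linear equation of state for water (kg/m^3).
    alpha: thermal expansion (1/K), beta: haline contraction (1/(g/kg)).
    Warmer water is lighter; saltier water is denser."""
    return rho0 * (1 - alpha * (T - T0) + beta * (S - S0))

lukewarm = density(T=20, S=0)   # the tank water
hot = density(T=50, S=0)        # the red bottle
cold = density(T=5, S=0)        # the blue bottle

# Hot water is lighter than the tank water, cold water is denser,
# so the red water rises and the blue water sinks:
print(hot < lukewarm < cold)  # → True
```

The same function also shows why salt matters for the later experiments: adding salinity (S > 0) increases the density, which is the other half of the “thermohaline” story.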

2_Slide1

Two bottles, one filled with hot water (dyed red) and one filled with cold water (dyed blue) in a larger container of lukewarm water.

Step 2. Find an intriguing question to ask

Depending on who you want to reach as your main audience, you might need to ask different questions. For some audiences, the focus clearly needs to be on your activity’s connection to climate. For others, the question can be a simple “Wow, that looks weird. Can you figure out what is going on here?”. Depending on the context of my activity, I could for example ask:

  • Why is the bottle with the red water “pouring up”? The audience I might ask this question are, for example, kids in a school setting whom I want to get excited about science in general.
    2_Slide2
  • How can I fill the green cup with hot water without touching it? The audience here could be the general public at a science fair, and if someone manages to fill the green cup, they win a sticker. This question definitely makes people want to give it a try!
    2_Slide3
  • What can these fingers tell us about how water mixes in the ocean? This question is for an audience that already knows a lot about the ocean and the physical processes in it, for example university students, or a very interested general public.
    2_Slide4
  • In the subtropical gyres you have a strong salinity stratification. How can nutrients get to the surface ocean? This question is closely related to the one before, but here the element of play isn’t as prominent. So this would be for an audience that already knows a lot about ocean physics and biogeochemistry, like university students or even colleagues at scientific conferences.
    2_Slide5
  • What drives global ocean currents? This is again a question you might ask the general public, since on the one hand not a lot of knowledge about ocean physics is required, and on the other hand it is very easy to see the connection between your activity and the ocean.

    2_Slide6

    Map modified after free-world-maps.com

Step 3. Test with family, friends and colleagues

This step is important for several reasons.

First, you want to work out (most of) the kinks in the activity before using it in front of a large audience. This includes

  • knowing what kind of materials you actually need to run it (For example, I tend to forget that I not only need large containers of water that are prepared at the right temperatures and salinities for several repeats of the experiment, but that in order to set up the experiment for repeats, I need somewhere to get rid of the water from previous experiments),
  • seeing people get really excited about the activity (which is a good memory to calm you down when you get nervous about doing the activity in public for the first time), or, if they aren’t, a good opportunity to tweak the activity a little.

Step 4. Bring all the materials you need, and have fun!

And there you are – ready to do your outreach activity! For your big day, this is what I would recommend:

  • It sounds lame, but you should have a good packing list that includes not only the things you need to run the activity, but also containers to store everything in on your way home, when everything is wet and covered in food dye.
  • If you are about to play with a lot of food dye or other staining substances, consider not wearing your favourite pair of white jeans. Consider also whether your scarf will be constantly hanging in your water tank getting wet, and whether your hair might get caught somewhere.
  • Bring a friend to do the activity with you. It’s more fun, and it really takes away a lot of stressors if there are two people there (Run out of water? No worries, one of you can run and fetch more water while the other talks to people who still want to know what is going on. Question you have no idea how to answer? She will know, or you can look it up together later. Need the loo? How great is it that you don’t have to pack all your stuff and take it with you? ;-))
  • Have someone you know is interested in your activity show up early to look at it and talk to you about it. Nothing makes it easier for other people to approach and join the conversation than someone who is already there and obviously excited. (You can also have the friend mentioned above play this role until things get going.)
  • Bring “backup materials”. Even if your activity is only vaguely related to your research, bring a current poster of your research (maybe not the A0 version, but A3 or A4) and anything you typically show when talking about your research (maps? figures? instrumentation?). When you get talking to people, chances are you will end up talking about how your activity relates to bigger research questions, and you will want to be able to show them.
  • And bring a different kind of “backup materials”: pictures and/or movies of your experiment to show what it should have looked like, in case the freezer that was supposed to turn your ice cube tray full of water into ice cubes overnight turns out to be a cold room.
  • Take pictures. This one is super important, and I always forget about it in the heat of the moment. You will constantly need that picture of you and a bunch of kids looking at your activity for grant proposals or end-of-year reports!
  • Last but not least: Have fun and take this as a great opportunity to play! Discover features in your activity that you have never noticed before, and, together with your audience, “improvise with the unanticipated in ways that create new value” – I guarantee that it will happen!

Do you have stories of your outreach to share? Any experiments we should all know about? I’d love to hear from you, please leave a comment below!

P.S.: This text originally appeared on my website as a page. Due to upcoming restructuring of this website, I am reposting it as a blog post. This is the original version last modified on February 1st, 2016.

I might write things differently if I was writing them now, but I still like to keep my blog as archive of my thoughts.

Experiment: Demystifying the Coriolis force

Mirjam S. Glessmer & Pierré D. de Wet

Abstract

Even though experiments – whether demonstrated to, or personally performed by, students – have been part of training in STEM for a long time, their effectiveness as an educational tool is sometimes questioned. For, despite students’ ability to produce correct answers to standard questions regarding these laboratory exercises, probing deeper often reveals a lack of conceptual understanding.

One way to help students make sense of experiments is to use them in combination with an elicit-confront-resolve approach. With this approach, before the experiment demonstrating a specific concept is run, students are asked to discuss the expected outcome in groups. In so doing, should (specific) misconceptions about the underlying concept be harbored, these are elicited. Incorrect student feedback (feedback illustrating that a misconception is present) is not corrected at this stage. As the demonstration plays out, a mismatch between observation and hypothesis confronts students with their misconceptions. Finally, repetition of the experiment, peer discussion, and discussion with the instructor lead to resolution of the misunderstandings.

Here, we apply the elicit-confront-resolve approach to a standard demonstration in introductory dynamics, namely the interplay of a rotating frame of reference, movement of particles observed from outside that frame of reference and the resulting fictitious forces. The efficacy of the elicit-confront-resolve approach for this purpose is discussed. Additionally, recommendations are given on how to modify instruction to further aid students in interpreting and understanding their observations.

Key words

Coordinate system, frame of reference, fictitious force, hands-on experiment, elicit-confront-resolve

Introduction

In many STEM disciplines, demonstrations and hands-on experimentation have been part of the curriculum for a long time. However, whether students actually learn from watching demonstrations and conducting lab experiments, and how their learning can best be supported by the instructor, is under dispute (Hart et al., 2000). There are many reasons why students might fail to learn from demonstrations (Roth et al., 1997). For example, separating the signal to be observed from the inevitable noise can be difficult, and inferences drawn from other demonstrations might hinder interpretation of a specific experiment. Sometimes students even “remember” witnessing experimental outcomes that never occurred (Milner-Bolotin, Kotlicki, and Rieger, 2007).

Even if students’ and instructors’ observations were the same, this does not guarantee congruent conceptual understanding, and conceptual dissimilarity may persist unless specifically treated. However, helping students overcome deeply rooted notions is not simply a matter of telling them which mistakes to avoid. Often they are unaware of the discrepancy between the instructors’ words and their own thoughts (Milner-Bolotin, Kotlicki, and Rieger, 2007).

One way to address misconceptions is by using an elicit-confront-resolve approach (McDermott, 1991). Posner et al. (1982) suggested that dissatisfaction with existing conceptions, which in this method is purposefully created in the confront step, is necessary for students to make major changes in their concepts. As shown by Kornell et al. (2009), this approach enhances learning by confronting students with their lack of an answer to a posed question. Similarly, Muller et al. (2007) find that learning from watching science videos is improved if those videos present and discuss common misconceptions, rather than just presenting material textbook-style.

In this article we look at how an elicit-confront-resolve approach can further student engagement and learning. This is done by using a typical introductory demonstration in geophysical fluid dynamics, namely the effect of rotation on the movement of a ball as seen from within and from outside the rotating system. The motivation for the choice of experiment is twofold: the rising popularity of rotating tables in undergraduate oceanography instruction (Mackin et al., 2012), and the difficulties students display in anticipating the movement of an object on a rotating body when they themselves are not part of the rotating system.

 

The Coriolis force as an example of the instructional method

On a rotating Earth, all large-scale motion is subject to the influence of the fictitious Coriolis force, and without a solid understanding of the Coriolis force it is impossible to understand the movement of ocean currents or weather systems. Furthermore, the Coriolis force forms an important part of classical oceanographic theories, such as the Ekman spiral, inertial oscillations, topographic steering and geostrophic currents. A thorough understanding of the concept of fictitious forces and observations in rotating vs. non-rotating systems is thus essential in order to gain a deeper understanding of these systems. Therefore, most introductory books on oceanography, or more generally geophysical fluid dynamics, present the concept in some form or other (cf. e.g. Cushman-Roisin (1994), Gill (1982), Pinet (2009), Pond and Pickard (1983), Talley et al. (2011), Tomczak and Godfrey (2003), Trujillo and Thurman (2013)). Yet, temporal and spatial frames of reference have been described as thresholds to student understanding (Baillie et al., 2012).

The frame of reference is the chosen set of coordinate axes relative to which the position and movement of an object is described. The choice of axes is arbitrary and usually made so as to simplify the descriptive equations of the object under consideration. Any object can thus be described in relation to different frames of reference. When describing objects moving on the rotating Earth, the most commonly used frame of reference is fixed on the Earth (co-rotating), so that the motion of the object is described relative to the rotating Earth. Alternatively, the motion of the same object could be described in an inertial frame of reference outside of the rotating Earth. Even though the movement of the object is independent of the frame of reference used to describe it, this independence is not immediately apparent. Objects moving on the rotating Earth seemingly experience a deflecting force when viewed from the co-rotating reference frame. Comparing the expressions for the movement of a body on the rotating Earth in the inertial versus the rotating coordinate system shows that the rotating reference frame requires additional terms to correctly describe the motion. One of these terms, introduced to convert the equations of motion between the inertial and rotating frames, is the so-called Coriolis term (Coriolis, 1835).
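The conversion between the two frames can be sketched numerically. The following Python snippet is our own illustration (it is not part of the original lab materials): it transforms a straight-line trajectory from the inertial frame into a counter-clockwise rotating frame and checks that the path appears curved there. All numerical values are arbitrary illustrative choices.

```python
import math

def to_rotating_frame(x, y, t, omega):
    """Transform inertial-frame coordinates (x, y) at time t into a
    frame rotating counter-clockwise at angular velocity omega."""
    c, s = math.cos(omega * t), math.sin(omega * t)
    # Rotating the axes by +omega*t is equivalent to rotating the
    # point by -omega*t.
    return (c * x + s * y, -s * x + c * y)

# A ball moving in a straight line in the inertial frame: x = v*t, y = 0
v, omega = 0.5, 1.0          # m/s and rad/s (illustrative values)
times = [i * 0.1 for i in range(11)]
inertial = [(v * t, 0.0) for t in times]
rotating = [to_rotating_frame(x, y, t, omega) for (x, y), t in zip(inertial, times)]

# In the inertial frame, y stays zero; in the rotating frame it does not,
# i.e. the same motion looks deflected to a co-rotating observer.
assert all(abs(y) < 1e-12 for _, y in inertial)
assert any(abs(y) > 1e-3 for _, y in rotating)
```

This is exactly the situation in the demonstration below: the motion itself is unchanged, only its description depends on the frame.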

Ever since its first mathematical description in 1835 (Coriolis, 1835), this concept has most often been taught as a matter of coordinate transformation, rather than with a focus on its physical relevance (Persson, 1998). Students are furthermore taught that the Coriolis force is a “fictitious” force, resulting from the rotation of a system, and that its influence is not visible when observed from outside the rotating frame of reference. It is therefore often perceived as “a ‘mysterious’ force resulting from a series of ‘formal manipulations’” (Persson, 2010).

In many oceanography programs, the difficult task of helping students gain a deeper understanding of these systems is approached by presenting demonstrations, either in the form of videos or simulations (e.g. a ball being thrown on a merry-go-round, showing the movement both from a rotating and a non-rotating frame, Urbano & Houghton (2006)), or in the lab as demonstration, or as a hands-on experiment. While helpful in visualizing an otherwise abstract phenomenon, using a common rotating table introduces difficulties when comparing the observed motion to the motion on Earth. This is, among other factors, due to the table’s flat surface (Durran and Domonkos, 1996), the alignment of the (also fictitious) centrifugal force with the direction of movement of the ball (Persson, 2010), and the fact that a component of axial rotation is introduced to the moving object when launched. Hence, the Coriolis force is not isolated. Regardless of the drawbacks associated with the use of a (flat) rotating table to illustrate the Coriolis effect, we see value in using it to make the concept of fictitious forces more intuitive, and it is widely used to this effect.

During conventional instruction, students are exposed to simulations and after instruction, students are able to calculate the influence of the Coriolis term. Nevertheless, they have difficulty in anticipating the movement of an object on a rotating body when confronted with a real-life situation where they themselves are not part of the rotating system. When asked, students report that they are anticipating a deflection depending on the rotation direction and rate. Contextually triggered, these knowledge elements are invalidly applied to seemingly similar circumstances and lead to incorrect conclusions. Similar problems have been described for example in engineering education (Newcomer, 2010).

 

The Coriolis demonstration

A demonstration observing a body on a rotating table from within and from outside the rotating system was run as part of the practical experimentation component of the “Introduction to Oceanography” semester course. Students were in the second year of their Bachelors in meteorology and oceanography at the Geophysical Institute of the University of Bergen, Norway. Similar experiments are run at many universities as part of their oceanography or geophysical fluid dynamics instruction.

 

Materials:

  • Rotating table with a co-rotating video camera (See Figure 1. For simpler and less expensive setups, please refer to “Possible modifications of the activity”)
  • Screen where images from the camera can be displayed
  • Solid metal spheres
  • Ramp to launch the spheres from
  • Tape to mark positions on the floor

Figure 1A: View of the rotating table. Note the video camera on the scaffolding above the table and the red x (marking the catcher’s position) on the floor in front of the table, diametrically across from where, that very instant, the ball is launched on a ramp. B: Sketch of the rotating table, the mounted (co-rotating) camera, the ramp and the ball on the table. C: Student tracing the curved trajectory of the metal ball on a transparency. On the screen, the experiment is shown as filmed by the co-rotating camera, hence in the rotating frame of reference.

 

 

Time needed:

About 45 minutes to one hour per student group. The groups should be sufficiently small so as to ensure active participation of every student. In our small lab space, five has proven to be the upper limit on the number of students per group.

 

Student task:

In the demonstration, a metal ball is launched from a ramp on a rotating table (Figure 1A,B). Students simultaneously observe the motion from two vantage points: where they are standing in the room, i.e. outside of the rotating system of the table; and, on a screen that displays the table, as captured by a co-rotating camera mounted above it. They are subsequently asked to:

  • trace the trajectory seen on the screen onto a transparency (Figure 1C);
  • measure the radius of this drawn trajectory; and
  • compare the trajectory’s radius to the theorized value.

The latter is calculated from the measured rotation rate of the table and the linear velocity of the ball, determined by launching the ball along a straight line on the floor.
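For readers who want to reproduce the comparison, the calculation can be sketched as follows. The manuscript does not state the formula used, so the snippet below assumes the common inertial-circle approximation r = v / (2Ω), with Ω = 2π / T derived from the table’s rotation period T; treat it as an illustrative sketch rather than the course’s actual procedure, and the numbers as made-up examples.

```python
import math

def expected_radius(v, rotation_period):
    """Approximate radius of the ball's curved path in the rotating
    frame: r = v / (2 * Omega), with Omega = 2*pi / T.
    (Inertial-circle approximation; assumed here, not stated in the text.)"""
    omega = 2.0 * math.pi / rotation_period   # rad/s
    return v / (2.0 * omega)

# Illustrative numbers: ball at 0.8 m/s, table turning once every 6 s
r_slow = expected_radius(0.8, 6.0)   # slower rotation -> larger radius
r_fast = expected_radius(0.8, 3.0)   # faster rotation -> smaller radius

# Consistent with the observation reported later in the text:
# radii decrease as the rotation rate increases.
assert r_fast < r_slow
```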

 

Instructional approach

In years prior to 2012, the course had been run along the conventional lines of instruction in an undergraduate physics lab: the students read the instructions, conduct the experiment and write a report.

In 2012, we decided to include an elicit-confront-resolve approach to help students realize and understand the seemingly conflicting observations made from inside versus outside of the rotating system (Figure 2). The three steps we employed are described in detail below.


Figure 2: Positions of the ramp and the ball as observed from above in the non-rotating (top) and rotating (bottom) case. Time progresses from left to right. In the top plots, the position in inert space is shown. From left to right, the current position of the ramp and ball are added with gradually darkening colors. In the bottom plots, the ramp stays in the same position, but the ball moves and the current position is always displayed with the darkest color.

  1. Elicit the lingering misconception

1.a The general function of the “elicit” step

The goal of this first step is to make students aware of their beliefs of what will happen in a given situation, no matter what those beliefs might be. By discussing what students anticipate observing under different physical conditions before the actual experiment is conducted, the students’ insights are put to the test. Sketching different scenarios (Fan, 2015; Ainsworth et al., 2011) and trying to answer questions before observing experiments are important steps in the learning process, since students are usually unaware of their premises and assumptions. These need to be explicated and verbalized before they can be tested, and either built on or, if necessary, overcome.

 

1.b What the “elicit” step means in the context of our experiment

Students have been taught in introductory lectures that in a counter-clockwise rotating system (i.e. in the Northern Hemisphere) a moving object will be deflected to the right. They are also aware that the extent to which the object is deflected depends on its velocity and the rotational speed of the reference frame.

A typical laboratory session would progress as follows: students are asked to observe the path of a ball being launched from the perimeter of the circular, not-yet rotating table by a student standing at a marked position next to the table, the “launch position”. The ball is observed to be rolling radially towards and over the center point of the table, dropping off the table diametrically opposite from the position from which it was launched. So far nothing surprising. A second student – the catcher – is asked to stand at the position where the ball dropped off the table’s edge so as to catch the ball in the non-rotating case. The position is also marked on the floor with insulation tape.

The students are now asked to predict the behavior of the ball once the table is put into slow rotation. At this point, students typically enquire about the direction of rotation and, when assured that “Northern Hemisphere” counter-clockwise rotation is being applied, their default prediction is that the ball will be deflected to the right. When asked whether the catcher should alter their position, the students commonly answer that the catcher should move some arbitrary angle, but typically less than 90 degrees, clockwise around the table. The question of the influence of an increase in the rotational rate of the table on the catcher’s placement is now posed. “Still further clockwise” is the usual answer. This then leads to the instructor asking whether a rotational speed exists at which the student launching the ball will also be able to catch it themselves. Ordinarily the students confirm that such a situation is indeed possible.

 

  2. Confronting the misconception

2.a The general function of the “confront” step

For those cases in which the “elicit” step brought to light assumptions or beliefs that are different from the instructor’s, the “confront” step serves to show the students the discrepancy between what they stated to be true, and what they observe to be true.

 

2.b What the “confront” step means in the context of our experiment

The students’ predictions are subsequently put to the test by starting with the simple, non-rotating case: the ball is launched and the nominated catcher, positioned diametrically across from the launch position, seizes the ball as it falls off the table’s surface right in front of them. As in the discussion beforehand, the table is then put into rotation at incrementally increasing rates, with the ball being launched from the same position for each of the different rotational speeds. It becomes clear that the catcher need not adjust their position, but can remain standing diametrically opposite to the student launching the ball – the point where the ball drops to the floor. Hence students realize that the movement of the ball relative to the non-rotating laboratory is unaffected by the table’s rotation rate.

This observation appears counterintuitive, since the camera, rotating with the system, shows the curved trajectories the students had expected; circles with radii decreasing as the rotation rate is increased. Furthermore, to add to their confusion, when observed from their positions around the rotating table, the path of the ball on the rotating table appears to show a deflection, too. This is due to the observer’s eye being fooled by focusing on features of the table, e.g. cross hairs drawn on the table’s surface or the bars of the camera scaffold, relative to which the ball does, indeed, follow a curved trajectory. To overcome this latter trickery of the mind, the instructor may ask the students to crouch, diametrically across from the launcher, so that their line of sight is aligned with the table’s surface, i.e. viewing the table edge-on at grazing incidence. From this vantage point the ball is observed to indeed be moving in a straight line towards the observer, irrespective of the rate of rotation of the table.

To further cement the concept, the table may again be set into rotation. The launcher and the catcher are now asked to pass the ball to one another by throwing it across the table without it physically making contact with the table’s surface. As expected, the ball moves in a straight line between the launcher and the catcher, who are both observing from an inertial frame of reference. However, when viewing the playback of the co-rotating camera, which represents the view from the rotating frame of reference, the trajectory is observed as curved.

 

  3. Resolving the misconception

3.a The general function of the “resolve” step

Misconceptions that were brought to light during the “elicit” step, and whose discrepancy with observations was made clear during the “confront” step, are finally corrected in the “resolve” step. While this sounds very easy, in practice it is anything but. The final step of the elicit-confront-resolve instructional approach thus presents the opportunity for the instructor to aid students in reflecting upon and reassessing previous knowledge, and for learning to take place.

 

3.b What the “resolve” step means in the context of our experiment

The instructor should by now be able to point out and dispel any remaining implicit assumptions, making it clear that the discrepant trajectories are undoubtedly the product of viewing the motion from different frames of reference. Despite the students’ observations and their participation in the experiment this is not a given, nor does it happen instantaneously. Oftentimes further, detailed discussion is required. Frequently students have to re-run the experiment themselves in different roles (i.e. as launcher as well as catcher) and explicitly state what they are noticing before they trust their observations.

 

Possible modifications of the activity:

We used the described activity to introduce the laboratory activity, after which the students had to carry out the exercise and write a report about it. Follow-up experiments typically involve rotating water tanks to visualize the effect of the Coriolis force on the large-scale circulation of the ocean or atmosphere, for example on vortices, fronts, ocean gyres, Ekman layers, Rossby waves, the general circulation and many other phenomena (see for example Marshall and Plumb (2007)).

Despite their popularity in geophysical fluid dynamics instruction at the authors’ current and previous institutions, rotating tables might not be readily available everywhere. Good instructions for building a rotating table can, for example, be found on the “weather in a tank” website, which also gives contact information for a supplier: http://paoc.mit.edu/labguide/apparatus.html. A less expensive setup can be created from an old record player or even a Lazy Susan. In many cases, setting the exact rotation rate is not as important as having a qualitative difference between “fast” and “slow” rotation, which is very easy to realize. In cases where a co-rotating camera is not available, the trajectory in the rotating system can be visualized by dipping the ball in dye or chalk dust (or by simply running a pen in a straight line across the rotating surface). The method described in this manuscript is easily adapted to such a setup.

Lastly we suggest using an elicit-confront-resolve approach even when the demonstration is not run on an actual rotating table. Even if the demonstration is only virtually conducted, for example using Urbano & Houghton (2006)’s Coriolis force simulation, the approach is beneficial to increasing conceptual understanding.

Discussion

The authors noticed in 2011 that most students participating in that year’s lab course still harbored misconceptions, despite having taken part in performing the experiment: misunderstandings remained as to what forces were acting on the ball and what the movement of the ball looked like in the different frames of reference. This led the authors to adopt the elicit-confront-resolve approach for instruction, as described above, in 2012.

We initially considered starting the lab session on the Coriolis force by throwing the ball diametrically across the rotating table. Students would then see on-screen the curved trajectory of a ball, which had never made physical contact with the table rotating beneath it. It was thought that initially considering the motion from the co-rotating camera’s view, and seeing it displayed as a curved trajectory when direct observation had shown it to be linear, might hasten the realization that it is the frame of reference that is to blame for the ball’s curved trajectory. However the speed of the ball makes it very difficult to observe its curved path on the screen in real time. Replaying the footage in slow motion helps in this regard. Yet, removing direct observation through recording and playback seemingly hampers acceptance of the occurrence as “real”. It was therefore decided that this method only be used to further illustrate the concept once students were familiar with the general (or standard) experimental setup.

In 2012, 7 groups of 5 students each conducted this experiment under the guidance of both authors together. The authors gained the impression that the new strategy of instruction enhanced the students’ understanding. In order to test this impression and the learning gain resulting from the experiment with the new methodology, in 2013 identical work sheets were administered before and after the experiment. These work sheets were developed by the authors as instructional materials to make sure that every student individually went through the elicit-confront-resolve process, even when, with future cohorts, this experiment might be run by other instructors (who might not be as familiar with the elicit-confront-resolve method) and with larger student groups (where individual conversations with every student might be less feasible for the instructor). The work sheets also turned out to be useful for quantifying what we had previously only noticed qualitatively: that a large part of the student population did indeed expect to see a deflection despite observing from an inertial frame of reference.

In total, 8 students took the course in 2013, and all agreed to let us talk about their learning process in the context of this article. One of those students did not check the before/after box on the work sheet. We therefore cannot distinguish the work done before and after the experiment, and will disregard this student’s responses in the following discussion. This student however answered correctly on one of the tests and incorrectly on the other.

In the first question, students were instructed to consider both a stationary table and a table rotating at two different rates. They were then asked to mark with an X, for each of the scenarios, the location where they thought the ball would contact the floor after dropping off the table’s surface. In the work sheet done before instruction, all 7 students predicted that the ball would hit the floor in different spots – diametrically across from the launch point for no rotation, and at increasing distances from that first point with increasing rotation rates of the table (Figure 3). This is the same misconception we noticed in earlier years and which we aimed to elicit with this question: students were applying correct knowledge (“In the Northern Hemisphere a moving body will be deflected to the right”) to situations where this knowledge was not applicable (when observing the rotating body and the moving particle upon it from an inertial frame of reference).


Figure 3A: Depiction of the typical wrong answer to the question of where the ball would land on the floor after rolling across a table rotating at different rates. B: Correct answer to the question in (A). C: Correct trajectories of balls rolling across a rotating table.

In a second question, students were asked to imagine the ball leaving a dye mark on the table as it rolls across it, and to draw the traces left on the table. Students were thus required to infer that this would be analogous to regarding the motion of the ball as observed from the co-rotating frame of reference. Five students drew the traces correctly and consistently with the direction of rotation they had assumed in the first question, while the remaining two did not attempt to answer this question.

After the experiment had been run repeatedly and discussed until the students signaled no further need for re-runs or discussion, the students were asked to redo the work sheet. This resulted in 6 students answering both questions correctly. The remaining student answered the second question correctly, but repeated the same incorrect answer to the first question that they gave in their earlier worksheet.

Seeing as the students had extensively discussed and participated in the experiment immediately prior to doing the work sheet for the second time, it is perhaps not surprising that the majority answered the questions correctly during the second iteration. In this regard it is important to note that our teaching approach was not planned as a scientific study, but rather developed naturally over the course of instruction. Had we set out to determine its longer-term efficacy, or its success in abetting conceptual understanding, we should ideally have tested the concept in a new context; doing so is advisable as a teaching practice anyway.

However, the students’ laboratory reports supply additional support for the claimed usefulness of our new approach. These reports had to be submitted within seven days of originally doing the experiment and accompanying work sheets. One of the questions in their laboratory manual explicitly addresses observing the motion from an inertial frame of reference as well as the influence of the table’s rotational period on such motion. This question was answered correctly by all 8 students. This is remarkable for two reasons: firstly, because in the previous year, without the elicit-confront-resolve instruction, this question was answered incorrectly by the vast majority of students; and secondly, because for this specific cohort, it is one of the few questions that all students answered correctly in their laboratory reports.

Seven students is certainly too small a sample to claim any statistical significance for these results, and this discussion only scratches the surface of what, and how, students understand about frames of reference. However, there is preliminary indication that a) students do indeed harbor the misconception we suspected, and b) an elicit-confront-resolve approach helped resolve the misunderstanding.

Conclusions

In the suggested instructional strategy, students are required to explicitly state their expectations about what the outcome of an experiment will be, even though their presuppositions are likely to be wrong. The verbalizing of their assumptions aids in making them aware of what they implicitly hold to be true. This is a prerequisite for further discussion and enables confrontation and resolution of potential misconceptions.

This elicit-confront-resolve approach has implications beyond instruction on the Coriolis force or frames of reference. Being able to correctly calculate solutions to textbook problems does not necessarily imply a correct understanding of a concept. Generally speaking, when investigating the roots of student misconceptions, the problem is often located elsewhere than initially suspected. The instructor’s awareness of this goes a long way towards better understanding and better supporting students’ learning.

We would also like to point out that gaining (the required) insight from a seemingly simple experiment, such as the one discussed in this paper, might not be nearly as straightforward or obvious for the students as anticipated by the instructor. Again, probing for conceptual understanding rather than the ability to calculate a correct answer proved critical in understanding where the difficulties stemmed from, and only detailed discussions with several students could reveal the scope of those difficulties. We would encourage every instructor not to take at face value the level of difficulty their predecessors claim an experiment to have!

Acknowledgements

The authors are grateful for the students’ consent to present their worksheet responses in this article.

Supplementary materials

Movies of the experiment can be seen here:

Rotating case: https://vimeo.com/59891323

Non-rotating case: https://vimeo.com/59891020

References

Ainsworth, S., Prain, V., & Tytler, R. (2011). Drawing to Learn in Science. Science, 333 (6046), 1096–1097. DOI: 10.1126/science.1204153

 

Baillie, C., MacNish, C., Tavner, A., Trevelyan, J., Royle, G., Hesterman, D., Leggoe, J., Guzzomi, A., Oldham, C., Hardin, M., Henry, J., Scott, N., and Doherty, J. 2012. Engineering Thresholds: an approach to curriculum renewal. Integrated Engineering Foundation Threshold Concept Inventory 2012. The University of Western Australia, < http://www.ecm.uwa.edu.au/__data/assets/pdf_file/0018/2161107/Foundation-Engineering-Threshold-Concept-Inventory-120807.pdf>

 

Coriolis, G. G. 1835. Sur les équations du mouvement relatif des systèmes de corps. J. de l’Ecole royale polytechnique 15: 144–154.

 

Cushman-Roisin, B. 1994. Introduction to Geophysical Fluid Dynamics. Prentice-Hall, Englewood Cliffs, NJ.

 

Durran, D. R. and Domonkos, S. K. 1996. An apparatus for demonstrating the inertial oscillation, BAMS, Vol 77, No 3

 

Fan, J. (2015). Drawing to learn: How producing graphical representations enhances scientific thinking. Translational Issues in Psychological Science, 1 (2), 170-181 DOI: 10.1037/tps0000037

 

Gill, A. E. 1982. Atmosphere-ocean dynamics (Vol. 30). Academic Press.

 

Kornell, N., Jensen Hays, M., and Bjork, R.A. 2009. Unsuccessful Retrieval Attempts Enhance Subsequent Learning, Journal of Experimental Psychology: Learning, Memory, and Cognition, Vol. 35, No. 4, 989–998

 

Hart, C., Mulhall, P., Berry, A., Loughran, J., and Gunstone, R. 2000. What is the purpose of this experiment? Or can students learn something from doing experiments?, Journal of Research in Science Teaching, 37 (7), p 655–675

 

Mackin, K.J., Cook-Smith, N., Illari, L., Marshall, J., and Sadler, P. 2012. The Effectiveness of Rotating Tank Experiments in Teaching Undergraduate Courses in Atmospheres, Oceans, and Climate Sciences, Journal of Geoscience Education, 67–82

 

Marshall, J. and Plumb, R.A. 2007. Atmosphere, Ocean and Climate Dynamics, 1st Edition, Academic Press

 

McDermott, L. C. 1991. Millikan Lecture 1990: What we teach and what is learned – closing the gap, Am. J. Phys. 59 (4)

 

Milner-Bolotin, M., Kotlicki A., Rieger G. 2007. Can students learn from lecture demonstrations? The role and place of Interactive Lecture Experiments in large introductory science courses. The Journal of College Science Teaching, Jan-Feb, p.45-49.

 

Muller, D.A., Bewes, J., Sharma, M.D. and Reimann P. 2007. Saying the wrong thing: improving learning with multimedia by including misconceptions, Journal of Computer Assisted Learning (2008), 24, 144–155

 

Newcomer, J.L. 2010. Inconsistencies in Students’ Approaches to Solving Problems in Engineering Statics, 40th ASEE/IEEE Frontiers in Education Conference, October 27-30, 2010, Washington, DC

 

Persson, A. 1998. How do we understand the Coriolis force?, BAMS, Vol 79, No 7

 

Persson, A. 2010. Mathematics versus common sense: the problem of how to communicate dynamic meteorology, Meteorol. Appl. 17: 236–242

 

Pinet, P. R. 2009. Invitation to oceanography. Jones & Bartlett Learning.

 

Posner, G.J., Strike, K.A., Hewson, P.W. and Gertzog, W.A. 1982. Accommodation of a Scientific Conception: Toward a Theory of Conceptual Change. Science Education 66(2); 211-227

 

Pond, S. and G. L. Pickard 1983. Introductory dynamical oceanography. Gulf Professional Publishing.

 

Roth, W.-M., McRobbie, C.J., Lucas, K.B., and Boutonné, S. 1997. Why May Students Fail to Learn from Demonstrations? A Social Practice Perspective on Learning in Physics. Journal of Research in Science Teaching, 34(5), page 509–533

 

Talley, L. D., G. L. Pickard, W. J. Emery and J. H. Swift 2011. Descriptive physical oceanography: An introduction. Academic Press.

 

Tomczak, M., and Godfrey, J. S. 2003. Regional oceanography: an introduction. Daya Books.

 

Trujillo, A. P., and Thurman, H. V. 2013. Essentials of Oceanography, Prentice Hall; 11 edition (January 14, 2013)

 

Urbano, L.D., Houghton J.L., 2006. An interactive computer model for Coriolis demonstrations. Journal of Geoscience Education 54(1): 54-60

 

P.S.: This text originally appeared on my website as a page. Due to upcoming restructuring of this website, I am reposting it as a blog post. This is the original version last modified on January 24th, 2017.

I might write things differently if I were writing them now, but I still like to keep my blog as an archive of my thoughts.

A tool for planning online teaching units

Nicole Podleschny & Mirjam Glessmer, 2015

In our recent workshop on “supporting self-organized learning with online media”, Nicole Podleschny and I came up with a morphological box to help plan online teaching units. A morphological box is basically a list of criteria we thought might be relevant; for each criterion we suggest different values and leave plenty of space for participants’ own ideas. By providing a broad overview of the many parameters and possibilities, we hoped to move participants away from the prevailing assumption that “online learning” necessarily means multiple-choice e-assessment, and to get them to think more broadly about which options might be most appropriate for their goals.

The very important first step in planning any kind of teaching unit has to be — as always! — to think about what learning outcomes the instructor wants to achieve. Only when this is really clear can appropriate methods and tools be chosen!

Then we can have a look at the morphological box:

morphological_box

Morphological box for planning of online learning units (Podleschny & Glessmer, 2015)

Now we can go through the different criteria and have a look at which values make sense. Of course, many more options are possible than those we suggest here – please feel free to fill in whatever suits your needs best!

Sometimes it is really helpful to just be aware of different options. Even though you might not want to pick any of the options given in the morphological box, maybe just reading them and deciding against them will spark an idea of what actually works best for your case.

The morphological box can also be used to design different scenarios and discuss them against each other in order to figure out which criteria are more relevant to you than others.
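If you want to compare different scenarios systematically, enumerating them is straightforward. Here is a minimal sketch of that idea in Python; the criteria names and values are made-up placeholders, not the actual rows of our morphological box:

```python
from itertools import product

# Hypothetical criteria with example values; replace them with the rows
# and entries of your own morphological box.
criteria = {
    "group size": ["individual", "pairs", "small groups"],
    "feedback": ["automated", "peer", "instructor"],
    "medium": ["video", "text", "interactive simulation"],
}

# One scenario = one value picked for each criterion.
scenarios = [dict(zip(criteria, combo)) for combo in product(*criteria.values())]

print(len(scenarios))  # 27 candidate scenarios to compare and discuss
```

Even just scanning a handful of these combinations can reveal which criteria actually matter for your teaching unit, which is exactly the discussion the box is meant to trigger.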

If you would like to give it a try, you can download our morphological box below.

Morphological box [pdf English | pdf German]

P.S.: This text originally appeared on my website as a page. Due to upcoming restructuring of this website, I am reposting it as a blog post. This is the original version last modified on December 26th, 2015.

I might write things differently if I were writing them now, but I still like to keep my blog as an archive of my thoughts.

Using art in your science teaching and outreach. The why and the how.

This post was first published in GeoEd, the “educational corner” of the EGU’s blog, in March 2016 (link here).

Sometimes we look for new ways to engage our students or the general public in discussions about our science. Today I would like to suggest we use art! Someone recently told me about her work on “STEAM”, which is STEM+Arts and apparently very much on the rise. While I had never heard of it before, and initially found the idea a bit weird and artificial, there are certainly many occasions where thinking about topics in a more comprehensive way than just through disciplinary lenses is of great benefit: it gives a fuller view of what is going on, and it can reach people in a different way, including people who might not necessarily be interested in either of the parts by itself.

There are many different kinds of art that we can use in STEM teaching and outreach, ranging from art that uses science as its central theme to art that just happens to depict something we have a scientific interest in. And while in this blog post “art” means visual art, you can think of it much more widely and include music, theatre, anything you can think of! Dream big!

Art that incorporates scientific data

One example of art that uses science as a central theme, and that is very well suited for our purposes, is the amazing art of Jill Pelto, who communicates scientific research through art. What that means is that she takes graphs of recent dramatic changes in the climate system, like sea level rise or melting glaciers, and uses them as part of her images. For example, a graph of global average temperature is integrated into the border between a burning forest and the flames leaping into the smoky sky, which you only notice when you look carefully. Similarly, the boundary between the school of clown fish and the forest of anemones moving in the waves traces the declining ocean pH that threatens this ecosystem (see figure below). Brian Kahn describes Jill Pelto’s paintings as a “Trojan horse for science to reach a public that doesn’t necessarily think about data points and models”.

And that is a great approach to using this art. But how else could we use art like this in teaching and outreach?

 

Jill+Pelto+Clown+Fish+Web+(1)

“Clown Fish” by Jill Pelto. Used with permission. Click on the image to go to Jill Pelto’s gallery and discover more amazing artwork!

Images like these I could imagine using in courses where students investigate a scientific topic in a project. Each group of students could be handed a different image and asked to figure out as much as possible about the topic and present it back to their peers. I would imagine that giving students a data set in such a visually appealing form provokes an emotional connection and response much more easily than presenting them with “just” the data. In the final exhibition, the art would work as a great eye-catcher to lure visitors into a topic.

I could also imagine using Jill Pelto’s art in a science outreach workshop. There, I would ask participating PhD students or scientists to take one time series (or any other visual representation of their data that shows the most important part of their story) and, inspired by the art they saw, integrate their data into an eye-catching display that tells their story for them.

Wow, this really makes me want to do this for my own research!

Art that visualizes scientific results

The best-known example of art that tells scientific stories is Greg Johnson’s “Climate change science 2013: Haiku”. A poster of all 20 illustrations is up in my office (Thanks, Joke and Torge!) and I can tell you – it is a great conversation piece! The haikus and illustrations provide just enough information to spark curiosity, so I often find myself discussing climate change with my (non-climate scientist) colleagues. Clearly, the haikus would also work as excellent conversation starters in outreach!

full_01_cover_text-563x421

Picture from http://www.sightline.org/2013/12/16/the-entire-ipcc-report-in-19-illustrated-haiku/, used with permission

In teaching, I would use Greg Johnson’s illustrated haikus to break the IPCC report’s summary for policy makers down into its chapters, and hand out one illustration per group. Depending on what kind of students I was teaching, I would ask them either to read the corresponding summaries, or to browse the chapter, or to read one of the original articles cited in that chapter. Or even ask them to find articles that might shed a different light on the (obviously oversimplified) message of the haikus. What kind of evidence would they want to see to support those messages, or to shoot them down? This kind of thinking is very good practice for their own research, where they always need to consider whether the conclusions they draw are the only possible ones.

Here, again, the art helps make very complex science easily approachable, and would again work as an awesome eye-catcher in an exhibition where student groups present their work to each other. (If you are worried about all the posters you are supposed to be printing, check out this post for a cheap and easy solution.)

Or the haikus could be used as inspiration when you ask your students to read articles and summarize them in a haiku plus drawing. This would be great practice in getting to the point, and also great practice for outreach. How cool would it be if your students had a piece of art and a short poem summarizing their theses?

For more inspiration along those lines, check out Greg Johnson’s blog.

Art that incidentally shows science we are interested in

Alternatively, we can look at art that doesn’t explicitly focus on science as its topic, but which can still be used to discuss science.

One example is given in the TED talk “the unexpected math behind Van Gogh’s `Starry Night´” by Natalya St. Clair, where the painting is deconstructed and put in the context of the development of mathematical theories for turbulence. I have linked the video below and it is totally worth watching!

[youtube PMerSm2ToFY]

The video could serve as a great first exposure to turbulence in a physics class and would make for a very interesting assignment in a flipped setting. It could also be watched in art class to help underline that art is a “serious” subject and not just a bit of splashing with paint (or whatever prejudices your audience might have).

Or you could ask your students to attempt a similar interpretation of a different picture. For example, when talking about different kinds of waves in your oceanography class, ask your students to browse a gallery of famous seaside paintings, online or “for real”, pick one painting and interpret the state of the sea, the shape of the clouds and the color of the light, to learn as much as possible about the weather conditions depicted in the painting. Always interesting, too: check the consistency of the wind direction from all the flags and sails and flying hair!

Alternatively, you could use a collection of pictures to talk about how knowledge in your field developed (for an example of how this could work for soil, see Laura Roberts-Artal’s blog post).

See – so many ways to include art in your science teaching and outreach to capture new audiences’ interest or just look at your topic from a different angle!

How would you use art in your teaching and outreach? Let us know in the comments below!

P.S.: This text originally appeared on my website as a page. Due to upcoming restructuring of this website, I am reposting it as a blog post. This is the original version last modified on February 16th, 2016.

I might write things differently if I were writing them now, but I still like to keep my blog as an archive of my thoughts.

Experiment: Ice cubes melting in fresh water and salt water

Explore how the melting of ice cubes floating in water is influenced by the salinity of the water. Important oceanographic concepts like density and density-driven currents are visualized and can be discussed on the basis of this experiment.

Context

Audience

This hands-on experiment is suited for many different audiences and can be used to achieve a wealth of different learning goals. Audiences range from first-graders to undergraduates in physical oceanography to the general public in outreach activities. Depending on the audience, the activity can be embedded in very different contexts. For children, it fits either into physics teaching, to motivate learning about concepts like density, or into learning about the climate system and ocean circulation. For college/university students, it can be used in physics teaching to give a different view on density; in oceanography/Earth science to talk about ocean circulation and the processes that are important there; to motivate the scientific process; or to practice writing lab reports. (You can be sure that students will at some point taste the water to make sure they didn’t accidentally swap the salt water and fresh water cups: a great teachable moment for a) never putting anything in your mouth in a laboratory setting, and b) always documenting exactly what you are doing, because things you are sure you will remember turn out not to be remembered that clearly after all.) For the general public, this is typically a stand-alone activity.

Skills and concepts that students must have mastered

It helps if the concept of density is known, but the experiment can also be used to introduce or deepen the understanding of the concept.

How the activity is situated in the course

I use this activity in different ways: a) as a simple in-class experiment that we use to discuss the scientific method, as well as what needs to be noted in lab journals and what makes a good lab report, or density-driven circulation; b) to engage non-majors or the general public in thinking about ocean circulation, what drives ocean currents, … in one-off presentations.

 

Goals

Content/concepts goals for this activity

Students learn about concepts that are important not only in physical oceanography, but in any physical or Earth science: density in general; the density of water in particular, depending on the water’s temperature and salinity; how differences in density can drive currents, both in the model and in the world ocean; how different processes acting at the same time can lead to unexpected results; and how to model large-scale processes in a simple experiment. After finishing the activity, students can formulate testable hypotheses, reason based on density how a flow field will develop, and compare the situations in the cups to the “real” ocean.

Higher order thinking skills goals for this activity

Students learn about and practice the use of the scientific method: formulation of hypotheses, testing, evaluating and reformulating.

Other skills goals for this activity

Students practice writing lab reports, making observations, working in groups.

 

Description and Teaching Materials

Materials

(per group of 2-4 students):

  • 1 clear plastic cup filled with room-temperature salt water (35 psu or higher, i.e. 7 or more teaspoons of table salt per liter of water), marked as salt water (optional)
  • 1 clear plastic cup filled with room-temperature fresh water, marked as fresh water (optional)
  • 2 ice cubes
  • liquid food dye either in drop bottle, with a pipette or with a straw as plunging syphon
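If you want to double-check the salt water recipe above, the arithmetic is simple. The sketch below assumes the rough kitchen approximations that 1 psu corresponds to about 1 g of salt per liter of water and that a level teaspoon holds about 5 g of table salt; neither value is lab-grade:

```python
import math

def salt_for_salinity(volume_liters, target_psu=35.0, grams_per_teaspoon=5.0):
    """Rough amount of table salt needed to reach a target salinity,
    treating 1 psu as ~1 g of salt per liter of water."""
    grams = target_psu * volume_liters
    teaspoons = math.ceil(grams / grams_per_teaspoon)
    return grams, teaspoons

print(salt_for_salinity(1.0))  # (35.0, 7): about 7 teaspoons per liter
```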

Description

Before the experiment is started, students are asked to predict which ice cube will melt faster: the one in salt water or the one in fresh water. Students discuss within their groups and commit to one hypothesis.
Students then place the ice cubes into the cups and start a stopwatch/note the time. Students observe one of the ice cubes melting faster than the other one. When it becomes obvious that one is indeed melting faster, a drop of food dye can be added on each of the ice cubes to color the melt water. Students take the time until each of the ice cubes has melted completely.

Discussion

The ice cube in the cup containing fresh water will melt faster, because the (fresh) melt water is colder than the room-temperature fresh water in the cup. Hence its density is higher and it sinks to the bottom of the cup, and is replaced by warmer water at the ice cube. In contrast, the cold, fresh melt water in the salt water cup is less dense than the salt water, hence it forms a layer on top of the salt water and doesn’t induce a circulation like the one in the fresh water cup. The circulation becomes clearly visible as soon as the food dye is added: while in the freshwater case the whole water column changes color, only a thin meltwater layer on top of the salt water is colored (for clarification, see the images in the presentation below).
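The density argument can be made semi-quantitative with a very simplified linear equation of state. The coefficients below are ballpark values chosen for illustration only (real water is nonlinear, and fresh water near its density maximum at 4°C behaves quite differently), but they reproduce the layering seen in the cups:

```python
def density(T, S, rho0=1000.0, alpha=2e-4, beta=0.78, T0=20.0):
    """Toy linear equation of state for water, in kg/m^3.
    T: temperature in deg C, S: salinity in psu.
    alpha: thermal expansion coefficient, beta: haline contraction coefficient."""
    return rho0 * (1.0 - alpha * (T - T0)) + beta * S

melt_water = density(T=0.0, S=0.0)     # cold, fresh melt water
fresh_cup = density(T=20.0, S=0.0)     # room-temperature fresh water
salty_cup = density(T=20.0, S=35.0)    # room-temperature salt water

print(melt_water > fresh_cup)  # True: melt water sinks, driving a circulation
print(melt_water < salty_cup)  # True: melt water floats, no overturning
```

For anything beyond a classroom illustration, a proper seawater equation of state (e.g. TEOS-10) should be used instead of this linear toy version.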

 

Teaching Notes and Tips

Students usually assume that the ice cube in salt water will melt faster than the one in fresh water, “because salt is used to de-ice streets in winter”. Have students explicitly state their hypothesis (“the one in salt water will melt faster!”), so when they measure the time it takes the ice cubes to melt, they realize that their experiment does not support their hypothesis and start discussing why that is the case. (Elicit the misconception, so it can be confronted and resolved!)

My experience with this experiment is that all groups behave very consistently:

  • At least 80% of your audience will be very sure that the ice cube in salt water will melt faster than the one in fresh water. The other 20% will give the correct hypothesis, but only because they expect a trick question, and they will most likely not be able to come up with an explanation.
  • You can be 100% sure that in at least one group, someone will say “oh wait, which was the salt water again?”, which hands you on a plate the opportunity to say “see — this is a great experiment to use when talking about why we need to write good documentation already while we are doing the experiment!”
  • You can also be 100% sure that in that group, someone will taste the water to make sure they know which cup contains the salt water. Which lets you say: “see — a perfect experiment to talk about lab safety! Never ever put things in your mouth in a lab!”
  • You can also be sure that people will come up with new experiments they want to try.
    • At EMSEA14, people asked what would happen if the ice cubes were held at the bottom of the beaker.
    • At a workshop on inquiry-based learning, people asked what the dye would do if there was no ice in the cups, just salt water and fresh water. A perfect opportunity to say “try it! Then you’ll know! And by the way — isn’t this experiment perfect for inspiring the spirit of research (“Forschergeist”, as we would say in German)?” This is what you see in the pictures in this blogpost.

It is always a good idea to have plenty of spare ice cubes and salt/fresh water at room temperature ready so people can run the experiment again if they decide to either focus on something they didn’t observe well enough the first time round, or try a modified experiment like the ones described above.

A reviewer of this activity asked how easily students overcome the idea that water in the cup has to have just one temperature. In my experience this is not an issue at all – students keep “pointing” and thereby touching the cups, and in the thin-walled plastic cups I typically use the temperature gradient between “cold” melt water and “warm” salt water is easily felt. The (careful!) touching of the cups can also be explicitly encouraged.

Different ways to use this experiment

This experiment can be used in many different ways depending on the audience you are working with.

  • Demonstration: If you want to show this experiment rather than have students conduct it themselves, using colored ice cubes is the way to go (see experiment here). The dye focuses the observer’s attention on the melt water and makes it much easier to observe the experiment from a distance, on a screen or via a projector. Dyeing the ice cubes makes understanding much easier, but it also diminishes the feeling of exploration a lot – there is no mystery involved any more. And remember: in order for demonstrations to improve learning, they need to be embedded in a larger didactical setting, including the forming of hypotheses before the experiment is run and a debriefing afterwards.
  • Structured activity: For an audience with little knowledge of physics, you might want to start with a very structured activity, much like the one described above. Students are handed (non-colored) ice cubes and cups with salt water and fresh water, and are asked to predict which of the ice cubes is going to melt faster. Students test their hypothesis, find the results of the experiment supporting it or not, and we discuss. This is how I usually use this experiment in class (see discussion here). The advantage of this approach is that students have clear instructions that they can easily follow. Depending on how observant the group is, instructions can be very detailed (“Start the stop watch when you put the ice cubes in the water. Write down the time when the first ice cube has melted completely, and which of the ice cubes it was. Write down the time when the second ice cube has melted completely. …”) or more open (“observe the ice cubes melting”).
  • Problem-solving activity: Depending on your goals with this experiment, you could also consider making it a problem-solving activity: You would hand out the materials and ask the students to design an experiment to figure out which of the cups contains fresh water and which salt water (no tasting, of course!). This is a very nice exercise and students learn a lot from designing the experiment themselves.
  • Open-ended investigation: In this case, students are handed the materials, knowing which cup contains fresh water and which salt water. But instead of being asked a specific question, they are told to use the materials to learn as much as they can about salt water, fresh water, temperature and density. As with the problem-solving exercise, this is a very time-intensive undertaking that does not seem feasible in the framework we are operating in. It is also hard to predict what kind of experiments the students will come up with, and whether they will learn what you want them to learn. On the other hand, students typically learn much more because they are free to explore and not bound by a specific instruction from you, so maybe give it a try?
  • Problem-based learning: This experiment is also very well suited in a Problem-Based Learning setting, both to work on the experiment itself or, as we did, to have instructors experience how problem-based learning works so they can use it in their own teaching later. Find a suggested case and a description of our experiences with it here.
  • Inquiry-based learning: Similarly as with Problem-Based Learning, this experiment can be used to let future instructors experience the method of inquiry-based learning from a student perspective. For my audience, people teaching in STEM, this is a nice case since it is close enough to their topics so they can easily make the transfer from this case to their own teaching, yet obscure enough that they really are learners in the situation.

Pro tip: If you are not quite sure how well your students will cope with this experiment, prepare ice cubes dyed with food coloring and use them in a demonstration if students need more help seeing what is going on, or even let students work with colored ice cubes right from the start. If the ice cubes, and hence the melt water, are dyed right away, it becomes a lot easier to observe and deduce what is happening. Feel free to bring the photos or time-lapse movie below as a backup, too!

dyed_ice_cubes_01

Dyed ice cubes about to be put into fresh water (left) and salt water (right)

dyed_ice_cubes_02

When the ice cubes start melting, it becomes very clear that they do so in different manners. In the left cup, the cold meltwater from the ice cube is denser than the lukewarm water in the cup. Hence it sinks to the bottom of the beaker and the water surrounding the ice cube is replaced by warmer water. On the right side, the lukewarm salt water is denser than the cold melt water, hence the cold meltwater floats on top, surrounding the ice cube which therefore melts more slowly than the one in the other cup.

dyed_ice_cubes_03

The ice cube in the fresh water cup (left) is almost completely gone and the water column is fairly mixed with melt water having sunk to the bottom of the beaker. The ice cube in the salt water cup (right) is still a lot bigger and a clear stratification is visible with the dyed meltwater on top of the salt water.

And here is a time-lapse movie of the experiment.

Another way to look at the experiment: With a thermal imaging camera!

screen-shot-2017-06-11-at-17-12-29

Cold (dark purple) ice cubes held by warm (white-ish) fingers over room-temperature (orange) cups with water

screen-shot-2017-06-11-at-17-12-55

After a while, both cups show very different temperature distributions. The left one is still room temperature(-ish) on top and very cold at the bottom. The other one is very cold on top and warmer below.

screen-shot-2017-06-11-at-17-13-20

When you look in from the top, you see that in the left cup the ice has completely melted (and the melt water sunk to the bottom), whereas in the right cup there is still ice floating on top.

Assessment

Depending on the audience I use this experiment with, the learning goals are very different. Therefore, no single assessment strategy can be used for all applications. Below, I give examples of possible ways to assess specific learning goals:

– Students apply the scientific process correctly: Look at how hypotheses are stated (“salt melts ice” is not a testable hypothesis, “similar-sized ice cubes will melt faster in salt water than in fresh water of the same temperature” is).

– Students are able to determine what kind of density-driven circulation will develop: Suggest modifications to the experiment (e.g. ice cubes are made from salt water, or ice cubes are held at the bottom of the cups while melting) and ask students to predict what the developing circulation will look like.

– Students can make the transfer from the flow field in the cup to the general ocean circulation: Let students compare the situation in the cup with different oceanic regions (the high Arctic, the Nordic Seas, …) and argue for which of those regions displays a similar circulation or what the differences are (in terms of salinity, temperature, and their influence on density).

In general, while students run the experiment, I walk around and listen to discussions or ask questions if students aren’t already discussing. Talking to students it becomes clear very quickly whether they understand the concept or not. Asking them to draw “what is happening in the cup” is a very useful indicator of how much they understand what is going on. If they draw something close to what is shown on slide 28 of the attached slide show, they have grasped the main points.

 

Equipment

Don’t worry, it is totally feasible to bring all the equipment you need with you to run the experiment anywhere you want. This is what we brought to EMSEA14 to run the workshop three times with 40 participants each:

EMSEA14_list

What we brought to EMSEA14 to run workshops on the ice cubes melting in fresh and salt water experiment

In one big grocery bag:

  • 4 ice cube trays
  • 4 ice cube bags (backup)
  • 2 thermos flasks (to store ice cubes)
  • 1 insulating carrier bag (left)
  • 4 empty 1.5l water bottles to mix & store salt water in
  • 1 tea spoon for measuring salt
  • 500g table salt
  • 21 clear plastic cups for experiments
  • 10 clear plastic cups to hand out ice cubes in
  • 11 straws (as pipettes)
  • 1 flask of food dye
  • 11 little cups with lids to hand out food dye in
  • nerves of steel (not shown :-))

And if you are my friend, you might also get the “ice cube special” — a pink bucket with all you will ever need to run the experiment! Below is what the ice cube experiment kit looks like that I made for Marisa, with labels and everything…

IMG_4202

An “ice cube experiment” kit that I made for a friend. Want one, too?

References and Resources

This activity has been discussed before, for example here:

I have also written about it a lot on my blog, see posts tagged “melting ice cubes experiment“.

P.S.: This text originally appeared on my website as a page. Due to upcoming restructuring of this website, I am reposting it as a blog post. This is the original version last modified on November 4th, 2015.

I might write things differently if I were writing them now, but I still like to keep my blog as an archive of my thoughts.

Asking students to take pictures to help them connect theory to the reality of their everyday lives

— This post was written for “Teaching in the Academy” in Israel, where it was published in Hebrew! Link here. —

Many times, students fail to see the real-life relevance of what they are supposed to be learning at university. But there is an easy way to help them make the connection: ask them to take pictures on their smartphones of everything related to the course content that they see outside of class, write a short sentence about what they took a picture of and why it is interesting, and submit it on an electronic platform to share with you and their peers. And what just happened? You made students think about your topic on their own time!

Does it work?

Does it work? Yes! Obviously there might be some reluctance to overcome at first, and it is helpful to either model the behaviour you want to see yourself, or have a teaching assistant show the students what kind of pictures and texts you are looking for.

Do I have to use a specific platform?

Do I have to use a specific platform? No! I first heard about this method when Dr. Margaret Rubega introduced the #birdclass hashtag on Twitter for her ornithology class. But I have since seen it implemented in a “measuring and automation technology” class that already used a Facebook group for informal interactions (see here), and in a second class using the university’s conventional content management system. All that is required is that students can post pictures and other students can see them.

Do you have examples?

One example from my own teaching in physical oceanography: hydraulic jumps (see figure below). The topic of hydraulic jumps is often taught only theoretically, in a way that makes it hard for students to realize that they can actually observe them all the time in their real lives, for example when washing the dishes, cleaning the deck or taking a walk near a creek. But when students are asked to take pictures of hydraulic jumps, they start looking for them, and noticing them. And even if it only takes 30 seconds to take and post a picture (and most likely they spent more time thinking about it!), that’s 30 extra seconds a student thought about your content that they would otherwise have spent thinking only about doing the dishes or cleaning the deck or the car.
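If you want to connect the pictures back to the theory in class, the key quantity is the Froude number: the flow upstream of a jump is supercritical (Fr > 1), and the Bélanger relation gives the deeper, subcritical depth downstream. A small sketch; the example speed and depth are invented, roughly kitchen-sink scale:

```python
import math

G = 9.81  # gravitational acceleration in m/s^2

def froude(speed, depth):
    """Froude number: flow speed relative to the shallow-water wave speed."""
    return speed / math.sqrt(G * depth)

def downstream_depth(depth, fr):
    """Depth after a hydraulic jump, from the Belanger equation."""
    return depth / 2.0 * (math.sqrt(1.0 + 8.0 * fr**2) - 1.0)

v1, h1 = 1.0, 0.003        # thin, fast sheet spreading from the tap jet
fr1 = froude(v1, h1)       # > 1: supercritical, so a jump can occur
h2 = downstream_depth(h1, fr1)
print(fr1 > 1.0, h2 > h1)  # True True: the flow thickens across the jump
```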

hydraulic_jumps

Collection of many images depicting hydraulic jumps found in all kinds of environments of daily life

And even if you do this with one single topic and not every single topic in your class, once students start looking at the world through the kind of glasses that let them spot the hydraulic jumps, they are going to start spotting theoretical oceanography topics everywhere. They will have learned to actually observe the kind of content you care about in class, but in their own world, making your class a lot more relevant to them.

An additional benefit is that you, as the instructor, can also use the pictures in class as examples that students can relate to. I would recommend picking one or two pictures occasionally, and discussing for a minute or two why they are good examples of the topic and what is interesting about them. You can do this as introduction to that day’s topic or as a random anecdote to engage students. But acknowledging the students’ pictures and expanding on their thoughts is really useful to keep them engaged in the topic and make them excited to submit more and better pictures (hence to find better examples in their lives, which means to think more about your course’s topic).

Does this work for subjects outside of STEM, too?

Yes! In a language class, for example, you could ask people to submit pictures of something “typically English [or whatever language you are teaching]”. You can then use the pictures to talk about cultural features or prejudices. This could also be done in a social science context. In history, you might ask for examples of how a specific historical period influences life today. In the end, it is not about students finding exact equivalents – it is about them trying to relate their everyday lives to the topics taught in class, and the method presented in this article is just one way to help you accomplish that.

P.S.: This text originally appeared on my website as a page. Due to upcoming restructuring of this website, I am reposting it as a blog post. This is the original version last modified on October 1st, 2016.

I might write things differently if I were writing them now, but I still like to keep my blog as an archive of my thoughts.

How to make your science meaningful and accessible to any audience

Are you hesitant to do outreach because you don’t know how to convey your message to an audience that isn’t as fascinated by your field as you are and doesn’t have at least some background knowledge? Then here is a tool that will help you make your science meaningful and accessible to any audience!

First: There is a need for science communication, and we all know it. The obvious reason is that these days pretty much all funding agencies require some form of science outreach or dissemination. Other reasons for wanting to do science communication are that taxpayers fund a lot of the basic research going on and therefore have a right to know what they are paying for, and that the knowledge we create mostly stays locked up in scientific journals or gets presented at scientific conferences, but does not reach relevant audiences by itself.

And then you have a mountain of information in your head that you have accumulated over years or decades by studying and doing research on your topic. How do you find the message the audience should hear? What is critical? What really matters? And who is your relevant audience? Journalists, policy makers, citizens? School children? Anyone else?

There is a great tool that can help you with all of those questions, developed by COMPASS (and they have successfully trained thousands of scientists!): The #COMPASSMessageBox. It helps you break down your message by giving you step-by-step instructions and guidelines on how to do it:

First, by dividing your overall message into different parts:

  • Who is your audience?
  • What is the overarching topic you are working on?
  • Why should your audience care? “So what?”
  • What is the problem you are addressing?
  • What solutions are you providing?
  • What would the benefits be if this problem were addressed?

In addition, you are given a couple of guidelines (and the scientific reasons behind those):

  • “The public” doesn’t have your background knowledge, so boil your message down to a maximum of five new facts!
  • More knowledge doesn’t change attitudes, so don’t just lecture your audience, listen to them and interact!
  • We have all been trained to communicate to a scientific audience, using specific norms. The public, however, is used to and interested in a different kind of communication than the scientific community, so adapt the way you structure your information!
  • Last but not least: no jargon! Don’t “waste” one or more of your five facts on introducing jargon!

So here we are, scientists! Make funding agencies happy! Become visible as experts! Gain recognition! Contribute to the democratisation of science! But also: Enjoy interacting with new people who will get excited about your science even though it is something they maybe thought they would never be interested in! Feel a new sense of purpose! And have fun being creative and coming up with new and different opportunities for communication! :-)

P.S.: Below you see one example of the #COMPASSMessageBox, filled with the stuff I wanted to write about in this blog post. Give it a try, it’s a really useful tool!

message_box