
Even though students in the active classroom learn more, they feel like they learn less

If you’ve been trying to actively engage students in your classes, I am sure you’ve felt at least some level of resistance. Even though we know from the literature (e.g. Freeman et al., 2014) that active learning increases student performance, it’s sometimes difficult to convince students that we are asking them to do all those activities for their own good.

But I recently came across an article that I think might be really good for convincing students of the benefits of active learning: Deslauriers et al. (2019) are “measuring actual learning versus feeling of learning in response to being actively engaged in the classroom” in different physics classes. They compare active learning (which they base on best practices in the given subject) with passive instruction (where lectures are given by experienced instructors who have a track record of great student evaluations). Apart from that, both groups were treated equally, and students were randomly assigned to one group or the other.

Figure from Deslauriers et al. (2019), showing a comparison of performance on the test of learning and feeling of learning responses between students taught with a traditional lecture (passive) and students taught actively for the statics class

As expected, the active case led to more learning. But interestingly, despite objectively learning more in the active case, students felt that they had learned less than the students in the passive group (which is another example confirming my conviction that student evaluations are really not a good measure of the quality of instruction), and they said they would choose passive learning if given the choice. One reason might be that students interpret the increased effort that is required in active learning as a sign that they aren’t doing well. This might have negative effects on their motivation as well as their engagement with the material.

So how can we convince students to engage in active learning despite their reluctance? Deslauriers et al. (2019) give a couple of recommendations:

  • Instructors should, early on in the semester, explicitly explain the value of active learning to students, and explicitly point out that increased cognitive effort means that more learning is taking place
  • Instructors should also have students take some kind of assessment early on, so students get feedback on their actual learning rather than relying only on their perception
  • Throughout the semester, instructors should use research-based strategies for their teaching
  • Instructors should regularly remind students to work hard and point out the value of that
  • Lastly, instructors should ask for frequent student feedback throughout the course (my favourite method here) and respond to the points that come up

I think that showing students data like the figure above might be a really good way to get them to consider that their perceived learning is actually not a good indicator of their actual learning, and to convince them that putting in the extra effort that comes with active learning helps them learn even though it might not feel like it. I’ve always explicitly talked to students about why I am choosing certain methods, and why I might continue using them even when they told me they didn’t like them. And I feel that that has always worked pretty well. Have you tried that? What are your experiences?

Deslauriers, L., McCarty, L. S., Miller, K., Callaghan, K., & Kestin, G. (2019). Measuring actual learning versus feeling of learning in response to being actively engaged in the classroom. Proceedings of the National Academy of Sciences, 116(39), 19251–19257. https://doi.org/10.1073/pnas.1821936116

Teaching field courses in a virtual setting

For many people it has been (and still is!) a huge hassle to quickly figure out ways to teach field courses in a COVID-19 world, and I can relate so much! But I’m also getting more and more excited about the possibilities that open up when we think about fieldwork in a new way. And as I’ve been researching and teaching workshops for university teaching staff on how to transition field courses into a socially-distanced world, I have seen many exciting examples. In this blogpost, I want to share what I think is important to consider when transitioning field courses online, and, in the second half of the post, some really amazing ways I’ve seen it done.

Most importantly: Don’t despair, and don’t undermine whatever you end up doing!

Yes, we’d all prefer to be outside for our field courses, and not stuck in our home offices, looking at our students’ faces in tiny moving stamps on a video call (at best) or talking into the wide, quiet void (at worst). But there are many ways to bring fieldwork to life even in socially-distanced settings, and even small “interventions” might have a large effect.

There are a couple of things we need to keep in mind:

Students might actually learn better in an unconventional setting

While we like to think that field courses are taught a certain way because they have been optimized for specific learning outcomes, that might not actually always be the case. In many cases, they just follow a tradition without anyone actually questioning it (and I’ll talk a little about why that is bad further down). And there are studies that show that sometimes virtual learning environments work better than traditional ones: Finkelstein et al. (2005) showed for a direct-current circuit laboratory that students who used simulated equipment outperformed students who went through a conventional lab course, both on a conceptual survey of the domain and in the coordinated tasks of assembling a real circuit and describing how it worked. So why shouldn’t similar things also be true for virtual field courses?

Virtual science is real science, too

Honestly, how many scientists do we know who are in the field every day, or even just most of the time? Very, very few. Most science these days happens virtually, whether data is acquired remotely, scientists are using datasets that other people measured, or scientists are working with numerical models. Virtual science is real science, too. Therefore, even though it is not the only kind of science, maybe it’s helpful to convey to students that while they are missing out on a fun experience (and certainly on some learning outcomes that we wish they had), they are still able to do real science.

(On that note: Kitchen Oceanography is also science! Check out this post for proof…)

Don’t accidentally undermine your virtual field work

That said, while I think it’s important to be honest about what is lost — the travel to an exciting destination, the experience of being on a research ship, the smell of a certain weather pattern, the feeling of different temperatures and humidities than at home — we need to be super careful to not undermine whatever we end up teaching virtually. It’s maybe not our first choice to do it this way, and we might not have spent as much time preparing it as we would have liked, but constantly telling students what they are missing out on is not going to increase their motivation in a time that is already taxing on everybody.

What are field courses?

When I’m speaking about field courses here, what I envision are the kinds of field courses I am familiar with in STEM education: excursions where biologists investigate an ecosystem, sea practicals where oceanographers spend time on a research ship, trips where engineering students look at structures for coastal protection in situ — basically outdoor teaching.

Following the classification by Fedesco et al. (2020), those would all fall into one of the following categories:

  • “collecting primary data/visiting primary sources”, where students enter an authentic, new-to-them research setting in order to do open-ended investigations on data that they generate while in the field, and where learning outcomes partly depend on the results of that data (I would argue that many learning outcomes don’t). Students are creating new knowledge and are actively participating in authentic research processes;
  • “guided discovery of a site”, where the instructor is familiar with the site and plans activities that help students discover things, leading to pre-defined learning outcomes, because students are working with skills and concepts that they learned earlier in the course and apply them to a setting that is known in advance; or maybe
  • “backstage access”, where students visit a site that people usually don’t have access to, for example a wave power plant (or, when I was teaching the intro to oceanography a looong time ago back in Bergen, a company that makes oceanographic instrumentation, thanks Ailin!).

Learning outcomes in field courses

While field courses might have very specific, subject- and location-specific content, there are many learning outcomes that are common to most field courses, e.g.

  • social development
  • observation and perception skills
  • giving meaning to learning
  • providing first-hand experience
  • stimulating interest and motivation

(Compare Larsen et al., 2017, and others)

I think it is super helpful (always, but especially in this case) to look closely at learning outcomes, and to see how interconnected they really are. When I did this for the courses I am currently involved in, it turned out that surprisingly many of the learning outcomes can very easily be achieved virtually. Anything to do with planning experiments, analyzing data, or learning concepts can be disconnected from practicing observational skills or teamwork. And once they are disconnected, they can be practiced in different exercises which don’t have to rely on the same method of instruction. This makes it much easier to, for example, practice some parts in online discussions, while other parts require students to be outside observing something themselves. The more things become modular in your mind, the easier it is to implement them.

What motivates students in field courses

When we think about field courses, we usually remember (and envision) them as extremely motivating because typically they are the occasions where students get super excited and want to dig deep and really understand the material. But why is that?

One explanation can be found in the self-determination theory of Deci & Ryan, which describes three basic psychological needs that must be fulfilled for people to feel intrinsic motivation: autonomy, competence, and relatedness.

Autonomy in the context of a field course means that students typically get to decide more when they are out and about doing fieldwork than when they are passively sitting in a lecture, just consuming whatever someone else decided to talk about. They might or might not get to decide what kind of questions they work on, but even if they don’t they are a lot more free in how they structure their work, how they interact with peers during that time, …

Interacting with peers is an important component for the second basic psychological need: Relatedness. In field courses, students and instructors typically spend informal time together: sitting in a bus, waiting for a boat, during the actual fieldwork. This provides opportunities for conversations that might otherwise not happen, to relate to peers and instructors on a more personal level, to also experience instructors as role models.

Lastly, field courses help students feel competent in a way they usually don’t get to in normal university settings. They work long days, potentially under challenging physical conditions, on the kinds of questions that feel more authentic than the exercises they typically do. So this might be one of the few times where they feel competent in the identity they are trying to develop: as a professional in their chosen field.

Barriers to fieldwork

But all the benefits of fieldwork come at a price (Giles et al., 2020). And those costs are not to be underestimated, especially because the barriers to fieldwork are especially felt by disabled students and those from racial and ethnic minorities, all of whom are critically underrepresented in the geosciences anyway.

Barriers include for example

  • the financial burden of travel / equipment / functional clothing
  • the emotional burden of dealing with daunting practical aspects of being outdoors (toilet breaks, periods)
  • the physical burden of accessibility issues (the physically challenging aspects of fieldwork that are satisfying and fun for some can on the other hand completely exclude others)
  • the logistical and financial burden (and emotional!) of finding a replacement for caring responsibilities
  • the mental burden of dealing with previous or expected harassment and inappropriate behavior

In the light of all these burdens, there is an urgent need to consider what can be done to make traditional field courses more accessible! And I think having to reinvent so many things now is a great opportunity to make sure those barriers are taken down.

Things to consider when filming for virtual field courses

Virtual field courses often seem to mean “videos of the instructor talking”, whether in their office or in the field. When filming instructional videos, the most important points for me to consider are the viewers’ attention spans, and what might keep a viewer engaged.

As for the attention span, there are many different studies that find that the shorter, the better. Of course it always depends on the video and the material and lots of other things, but the best advice would be to really think about whether anything needs to be longer than 15 minutes in one go (unless it is extremely well produced).

In order to keep viewers engaged, it’s really important not to keep students only in the role of “viewers”, but to engage them more actively. But for the periods where they are “just” watching, it seems to be helpful to have the instructor visible and to make them relatable as an authentic person. Especially having more than one instructor who interact with one another makes videos more engaging and also provides more potential role models for students.

A list of best practices for creating engagement in educational videos is given in Choe et al. (2019); my take-aways from it are here.

How to motivate students in virtual field courses

Haha, you were hoping for an easy answer here? I think keeping in mind the three basic psychological needs of students that I described above in the framework of self-determination theory (autonomy, competence and relatedness) is extremely important. The better we can find ways to give students opportunities to feel any and all of those, the more motivated they’ll be.

Good-practice examples of virtual field courses

(This section was first called “best-practice”, but then I noticed that I am showing quite a lot of my own work and decided I’d rather take it down a notch ;-))

There are many possible categorizations for the examples I’m showing below, but I went for the continuum from “fully virtual” on one end to “fully synchronous outside” on the other.

Fully virtual

If you are doing a fully virtual field course, no matter whether it is video-based or text-based, it’s really helpful to integrate activities that aren’t related to listening or reading, for example:

Working with pictures of real examples

Providing students with a picture of a field site, or some example of a process, or some instrumentation that they’ve just learnt about, and asking them to annotate the picture is a quick and easy activity that also helps you gauge the students’ level of understanding. This works well if you just want students to do something other than listen to you for 15 minutes.

Working with simulations

It’s fascinating how many really nice virtual representations exist online on all kinds of topics once one starts looking!

I was very impressed with this virtual arboretum I came across recently. If you were teaching about plants, this might be a neat tool, for example when you want students to practice drawing plant features.

Investigating a compilation of media

At the recent #FieldWorkFix conference, we were shown this platform for a virtual site assessment which I found super impressive: It’s basically “only” 360° pictures, movies and audio files that are located on a map, so students can do a virtual walk through a park that they would otherwise have visited. But the way this is done, by for example also including a picture of the parking spot and visitors center, makes it feel very real and relatable, and the other pictures, movies and audio files of the park make it possible to do the real assessment.

Another example that I find extremely inspiring is not of a whole site, but a study guide on ID-ing different kinds of rocks. There is a large visual bank of rocks, each combined with the data that students need to make an ID, for example a scale so one can estimate the real size of the rocks, responses to different acids that give clues about the chemical composition, etc. It seems incredibly comprehensive and like a lot of fun!

Investigating real data

There are of course also many amazing datasets compiled for different regions, for example Svalbox.no for Svalbard, where students can use GIS systems to access many different kinds of data in a geo-referenced frame. Combined with, for example, Google Earth, this can be used for free exploration of many different questions; a minimal sketch of what the first steps of such an exploration could look like in code follows below.
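To make this concrete, here is a minimal sketch (with a hypothetical file name, since the actual Svalbox data come with their own formats and tools) of how students could start exploring geo-referenced data in Python with the geopandas library:

```python
# Minimal sketch, assuming a hypothetical GeoJSON file of sampling stations.
# geopandas reads most common vector formats (shapefile, GeoJSON, ...).
import geopandas as gpd
import matplotlib.pyplot as plt

stations = gpd.read_file("svalbard_stations.geojson")  # hypothetical file name

print(stations.crs)     # which coordinate reference system the data are in
print(stations.head())  # attribute table, including the geometry column

# quick map of the stations for a first orientation
stations.plot(marker="o", markersize=10)
plt.show()
```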

Creating the features you want to investigate

Last but not least, if you want students to do some practical work at home in a virtual course, there is always kitchen oceanography, which in this context means hands-on activities that can be done solely with materials that students typically have at home already. It can mean investigating ocean currents in plastic cups with water, ice and black tea (for 24 easy ideas check out my advent calendar), or it can mean using bread or chocolate bars to simulate an investigation into how rocks behave under pressure. Or if you wanted to get fancy, you could even send out materials (e.g. sand samples in small ziplock bags to get a feel for different grain sizes). Doing small hands-on stuff at home can be a great way to change up long days of sitting in front of a computer…

With “remotely controlled kitchen oceanography” we’ve shown how small, hands-on experiments that students do at home can be combined with experiments with more complicated setups that are streamed from my kitchen. We were all in a video conference and could therefore see each others’ experiments while being able to look really closely at our own. Doing something similar with an instructor in the field should be easy enough (if the network and weather cooperate).

Virtual with “outdoor” aspects

As much fun as kitchen oceanography breaks are, sometimes it might be even better to get students out the door with a purpose.

Observe something related to your field right outside your door

I’ve long been a fan of local fieldwork, i.e. sending students out to discover something related to the course’s topic right outside their door. For examples see for example my post on hydraulic jumps that are everywhere, on #BergenWaveWatching, or on #MoreThanWeeds.

But how to implement it in a virtual field course?

One way to take the pressure off students when doing local fieldwork tasks was shown to us at the #FieldWorkFix conference, in a super best-practice example that I got to experience myself during a fairly intensive virtual conference day: during the one-hour lunch break, we not only had to eat lunch, but were asked to go outside and follow the wandering cards linked here. Those are cards that give you instructions for your short walk: “follow something yellow”, “sit for 2 minutes and observe things around you”, “take a right turn”, that kind of thing (I, of course, didn’t follow the instructions because I wanted to see some water during my lunch break). We were also instructed to take pictures of something related to our field course, upload them to a website, and write a short description (which I did).

And it was a great experience: within this one hour, I did manage to eat lunch, go outside, take a picture, upload it, and add a description. This let me get some exercise and oxygen, gave me a purpose for my walk, and also proved how easy and fast these kinds of tasks can be if you don’t feel that you need to go to The Best wave watching spot or see the most exciting plant, but instead just have to find anything related to the course. And it was great to see all the different pictures of participants coming together! This is a way of introducing local excursions that I will definitely be using in the future, to give students that feeling of competence, but also a glimpse of one of the typical feelings of fieldwork: that time is precious and every minute and every observation counts, but also that a lot can be gained in a really short time!

Outdoor asynchronous

If one of the learning outcomes is to practice observation and classification skills, working with citizen science apps like iNaturalist or the German Naturgucker is great. Both are part of citizen science projects where everybody can upload pictures and other observations (e.g. audio files) that are then classified either by that person directly or through discussions on the platform. Here students contribute to “real science” by collecting data that is relevant for a larger purpose, and they interact with specialists and thus get feedback and feel part of a bigger community. I don’t know anything like that for my own topics, but in biology those are great tools.

One tool that I really want to use in asynchronous outdoor teaching myself is geocaches. Geocaching is a treasure hunt: small “treasures” (often tiny plastic boxes) are hidden and can be found using an app that gives clues where to look. Geocaches can also be virtual, and are already used for educational purposes, for example as “EarthCaches”. This special form of geocache has been developed by the Geological Society of America, and the goal is to bring people to geologically interesting sites and teach them something related to that site. Wouldn’t it be awesome to do something like that for your class?

Geocaches are peer-reviewed before they appear on the app, so a lower-threshold version of the same idea could be QR codes that you hide in the area you want your students to investigate, and have the QR codes link to websites that you can easily adapt with the seasons, or update from year to year, or have full and easy control over (see the sketch below for how easily such codes can be generated). Of course you might need to check that the QR codes are still there before you run the class the next year, but this is fairly low-key if you are working close to home. (Close to home being an important caveat: in fully virtual semesters, students might actually not be where you are. Please consider ways to accommodate them!)
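Generating the QR codes themselves is the easy part; here is a minimal sketch using the Python qrcode library (the site names and URLs are made up for illustration):

```python
# Minimal sketch: one QR code per (hypothetical) course page.
# Requires the qrcode package: pip install qrcode[pil]
import qrcode

# Hypothetical site names and course URLs; swap in your own.
sites = {
    "pond": "https://example.org/course/pond",
    "quarry": "https://example.org/course/quarry",
}

for name, url in sites.items():
    img = qrcode.make(url)      # build the QR code image for this URL
    img.save(f"qr_{name}.png")  # print, laminate, and hide at the site
```

Because the codes only encode a URL, what is behind them can be changed at any time without touching the codes in the field.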

Outdoor synchronous

In the last workshop I ran on virtual field courses, a participant told us about a tour guide system his institute had just bought in order to be able to do in-person excursions. The devil is in the detail, of course (how do you make sure all students can see while still maintaining the necessary distance from each other?), but that sounded like a great idea.

Ideas for assessment

Depending on what is being done during the virtual field course, traditional forms of assessment might still work, or maybe they need to be adapted. What you could consider is including activities in your assessment that are motivating in themselves, for example asking students to write Instagram or blog posts (and check out this blog post for a grading rubric for Instagram posts!).

In my experience, writing for a different audience than just one overwhelmed instructor is very motivating for students, both because they can use it to show their friends and family what they are doing all day long, and because social media provides the potential for super positive feedback (check out Robert’s tweet about one of my kitchen oceanography experiments, which just received its 330th “like” today!). An assignment like that addresses all three basic psychological needs that help foster intrinsic motivation: feeling autonomous, competent and related. So why not give it a shot?

What is your experience with virtual field courses? Do you have best practice examples to add to this? Please share!

References

Choe, R. C., Scuric, Z., Eshkol, E., Cruser, S., Arndt, A., Cox, R., Toma, S. P., Shapiro, C., Levis-Fitzgerald, M., Barnes, G., & Crosbie, H. (2019). Student satisfaction and learning outcomes in asynchronous online lecture videos. CBE—Life Sciences Education, 18(4). https://doi.org/10.1187/cbe.18-08-0171

Fedesco, H. N., Cavin, D., & Henares, R. (2020). Field-based learning in higher education: Exploring the benefits and possibilities. Journal of the Scholarship of Teaching and Learning, 20(1), 65–84. https://doi.org/10.14434/josotl.v20i1.24877

Finkelstein, N. D., Adams, W. K., Keller, C. J., Kohl, P. B., Perkins, K. K., Podolefsky, N. S., Reid, S., & LeMaster, R. (2005). When learning about the real world is better done virtually: A study of substituting computer simulations for laboratory equipment. Physical Review Special Topics – Physics Education Research, 1, 010103.

Giles, S., Jackson, C., & Stephen, N. (2020). Barriers to fieldwork in undergraduate geoscience degrees. Nature Reviews Earth & Environment, 1, 77–78. https://doi.org/10.1038/s43017-020-0022-5

Larsen, C., Walsh, C., Almond, N., & Myers, C. (2017). The “real value” of field trips in the early weeks of higher education: the student perspective. Educational Studies, 43(1), 110-121.

Ryan, R. M., & Deci, E. L. (2017). Self-determination theory: Basic psychological needs in motivation, development, and wellness. New York: Guilford

Asking questions that aim at specific levels of the modified Bloom’s taxonomy

I’m currently preparing a couple of workshops on higher education topics, and of course it is always important to talk about learning outcomes. I had a faint memory of having developed some materials (when still working at ZLL, together with one of my all-time favourite colleagues, Timo Lüth) to help instructors work with the modified Bloom’s taxonomy (Anderson & Krathwohl, 2001), and when I looked them up, I realized I had never blogged about them. But since I was surprised at how helpful I still find the materials, here we go! :-)

The idea is that instructors are often told to ask specific types of questions (usually “concept” questions), but that it is really difficult to know what that means and how to do it.

So we developed a decision tree that gives an overview of all the different kinds of questions. The decision tree can support you in

  • constructing questions that provoke specific cognitive processes in your students,
  • checking what exactly you are asking your students to do when posing existing questions, and
  • modifying existing questions to better match your purpose.

The nitty-gritty details and the theoretical foundation are written up in Glessmer & Lüth (2016), unfortunately in German. But check out the decision trees below, I think they work pretty well on their own! We have four different versions of the decision tree, which guide you through both the cognitive and the knowledge dimension until you reach the sweet spot you were aiming for. Have fun!

Here is one example, links to the others below.

Downloads:

  • Abstract decision tree (most helpful for getting familiar with the general concept) [pdf English | pdf German]
  • Decision tree with example questions (most helpful for constructing, or classifying, or changing questions) [pdf English | pdf German]
  • Decision tree with example multiple-choice questions (most helpful as inspiration when working with multiple-choice questions) [pdf English | pdf German]
  • Comparison of our decision tree with “conventional” types of questions (if you want to find out what a “concept question” really is when classified in the Bloom taxonomy) [pdf English | pdf German]

Any comments, feedback, suggestions? Please do get in touch!

Glessmer, M. S., & Lüth, T. (2016). Lernzieltaxonomische Klassifizierung und gezielte Gestaltung von Fragen. Zeitschrift für Hochschulentwicklung, 11 (5) doi: 10.3217/zfhe-11-05/12

#TeachingTuesday: Student feedback and how to interpret it in order to improve teaching

Student feedback has become a fixture in higher education. But even though it is important to hear student voices when evaluating teaching and thinking of ways to improve it, students aren’t perfect judges of what type of teaching leads to the most learning, so their feedback should not be taken on board without critical reflection. In fact, there are many studies that investigate specific biases that show up in student evaluations of teaching. So in order to use student feedback to improve teaching (both on the individual level, when we consider changing aspects of our classes based on student feedback, and on the institutional level, when evaluating teachers for personnel decisions), we need to be aware of the biases that student evaluations of teaching come with.

While student satisfaction may contribute to teaching effectiveness, it is not itself teaching effectiveness. Students may be satisfied or dissatisfied with courses for reasons unrelated to learning outcomes – and not in the instructor’s control (e.g., the instructor’s gender).
Boring et al. (2016)

What student evaluations of teaching tell us

In the following, I am not presenting a coherent theory (and if you know of one, please point me to it!); these are snippets of the current literature on student evaluations of teaching, many of which I found referenced in this annotated literature review on student evaluations of teaching by Eva (2018). The aim of my blogpost is not to provide a comprehensive literature review, but rather to point out that there is a huge body of literature out there that teachers and higher ed administrators should know exists, and that they can draw upon when in doubt (and ideally even when not in doubt ;-)).

6-second videos are enough to predict teacher evaluations

This is quite scary, so I thought it made sense to start out with this study. Ambady and Rosenthal (1993) found that silent videos shorter than 30 seconds, in some cases as short as 6 seconds, significantly predicted global end-of-semester student evaluations of teachers. These are videos that do not even include a sound track. Let this sink in…

Student responses to questions of “effectiveness” do not measure teaching effectiveness

And let’s get this out of the way right away: When students are asked to judge teaching effectiveness, that answer does not measure actual teaching effectiveness.

Stark and Freishtat (2014) give “an evaluation of course evaluations”. They conclude that student evaluations of teaching, though providing valuable information about students’ experiences, do not measure teaching effectiveness. Instead, ratings are even negatively associated with direct measures of teaching effectiveness and are influenced by the gender, ethnicity and attractiveness of the instructor.

Uttl et al. (2017) conducted a meta-analysis of faculty’s teaching effectiveness and found that “student evaluation of teaching ratings and student learning are not related”. They state that “institutions focused on student learning and career success may want to abandon [student evaluation of teaching] ratings as a measure of faculty’s teaching effectiveness”.

Students have their own ideas of what constitutes good teaching

Nasser-Abu Alhija (2017) showed that out of five dimensions of teaching (goals to be achieved, long-term student development, teaching methods and characteristics, relationships with students, and assessment), students viewed the assessment dimension as most important and the long-term student development dimension as least important. To students, the grades that instructors assigned and the methods they used to do this were the main aspects in judging good teaching and good instructors. Which is fair enough — after all, good grades help students in the short term — but that’s also not what we usually think of when we think of “good teaching”.

Students learn less from teachers they rate highly

Kornell and Hausman (2016) review recent studies and report that when learning is measured at the end of the respective course, the “best” teachers, i.e. the ones whose students felt that they had learned the most, got the highest ratings (which is congruent with Nasser-Abu Alhija (2017)’s findings of what students value in teaching). But when learning was measured in later courses, i.e. when meaningful deep learning was considered, other teachers seem to have been more effective. Introducing desirable difficulties is thus good for learning, but bad for student ratings.

Appearances can be deceiving

Carpenter et al. (2013) compared a fluent video (instructor standing upright, maintaining eye contact, speaking fluidly without notes) and a disfluent video (instructor slumping, looking away, speaking haltingly with notes). They found that even though the amount of learning that took place when students watched either of the videos wasn’t influenced by the lecturer’s fluency or lack thereof, the disfluent lecturer was rated lower than the fluent lecturer.

The authors note that “Although fluency did not significantly affect test performance in the present study, it is possible that fluent presentations usually accompany high-quality content. Furthermore, disfluent presentations might indirectly impair learning by encouraging mind wandering, reduced class attendance, and a decrease in the perceived importance of the topic.”

Students expect more support from their female professors

When students rate teachers’ effectiveness, they do so based on their assumption of how effective a teacher should be, and it turns out that they have different expectations depending on the gender of their teachers. El-Alayi et al. (2018) found that “female professors experience more work demands and special favour requests, particularly from academically entitled students”. This was true both when male and female faculty reported on their experiences, and when students were asked what their expectations of fictional male and female teachers were.

Student teaching evaluations punish female teachers

Boring (2017) found that even when learning outcomes were the same for students in courses taught by male and female teachers, female teachers received worse ratings than male teachers. This got even worse when teachers didn’t act in accordance with the stereotypes associated with their gender.

MacNell et al. (2015) found that believing that an instructor was female (in a study of online teaching where male and female names were sometimes assigned according to the actual gender of the teacher and sometimes not) was sufficient for students to rate that person lower than an instructor who was believed (correctly or not) to be male.

White male students challenge women of color’s authority, teaching competency, and scholarly expertise, as well as offering subtle and not so subtle threats to their persons and their careers

This title was drawn from the abstract of Pittman (2010)’s article that I unfortunately didn’t have access to, but thought an important enough point to include anyway.

There are many more studies on race, and especially on women of color, in teaching contexts, all of which show that they are facing a really unfair uphill battle.

Students will punish a perceived accent

Rubin and Smith (1990) investigated “effects of accent, ethnicity, and lecture topic on undergraduates’ perceptions of nonnative English-speaking teaching assistants” in North America and found that 40% of undergraduates avoid classes taught by nonnative English-speaking teaching assistants, even though the actual accentedness of the teaching assistants did not influence student learning outcomes. Nevertheless, students judged teaching assistants they perceived as speaking with a strong accent to be poorer teachers.

Similarly, Sanchez and Khan (2016) found that “presence of an instructor accent […] does not impact learning, but does cause learners to rate the instructor as less effective”.

Students will rate minorities differently

Ewing et al. (2003) report that lecturers who were identified as gay or lesbian received lower teaching ratings than other lecturers with undisclosed sexual orientation when they, according to other measures, were performing very well. Poor teaching performance was, however, rated more positively, possibly to avoid discriminating against openly gay or lesbian lecturers.

Students will punish age

Stonebraker and Stone (2015) find that “age does affect teaching effectiveness, at least as perceived by students. Age has a negative impact on student ratings of faculty members that is robust across genders, groups of academic disciplines and types of institutions”. Apparently, when it comes to students, from your mid-40s on you aren’t an effective teacher any more (unless you are still “hot” and “easy”).

Student evaluations are sensitive to students’ gender and grade expectations

Boring et al. (2016) find that “[student evaluations of teaching] are more sensitive to students’ gender bias and grade expectations than they are to teaching effectiveness”.

What can we learn from student evaluations then?

Pay attention to student comments but understand their limitations. Students typically are not well situated to evaluate pedagogy.
Stark and Freishtat (2014)

Does all of the above mean that student evaluations are biased in so many ways that we can’t actually learn anything from them? I do think that there are things that should not be done on the basis of student evaluations (e.g. ranking teacher performance), and I do think that most of the time, student evaluations of teaching should be taken with a pinch of salt. But there are still ways in which the information gathered is useful.

Even though student satisfaction is not the same as teaching effectiveness, it might still be desirable to know how satisfied students are with specific aspects of a course. And especially open formats like for example the “continue, start, stop” method are great for gaining a new perspective on the classes we teach and potentially gaining fresh ideas of how to change things up.

Also, tracking one’s own evaluations over time is helpful, since — apart from aging — other changes are hopefully intentional and can thus tell us something about our own development, at least assuming that different student cohorts evaluate teaching performance in a similar way. Getting student feedback at a later date might be helpful, too; sometimes students only realize later which teachers they learnt the most from, or which methods were actually helpful rather than just annoying.

A measure that doesn’t come directly from student evaluations of teaching, but that I find very important to track, is student success in later courses. This is especially valuable when success isn’t measured as a single grade, but when instructors come together and discuss how students are doing in tasks that build on previous courses. Having a well-designed curriculum and a very good idea of which ideas translate from one class to the next is obviously very important for this.

It is also important to keep in mind that, as Stark and Freishtat (2014) point out, statistical methods are only valid if there are enough responses to actually do statistics on. So don’t take a few horrible comments to heart while ignoring the whole bunch of people who are gushing about how awesome your teaching is! (See the little calculation below for just how noisy small samples are.)
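As a tiny back-of-the-envelope illustration (my own, not from the paper): the uncertainty of a mean rating shrinks only with the square root of the number of responses, so a mean based on a handful of evaluations is extremely noisy:

```python
# Rough illustration: uncertainty (~2 standard errors) of a mean rating
# as a function of the number of responses. The standard deviation of 1.0
# on a 5-point scale is an assumption for illustration only.
import math

sd = 1.0  # assumed spread of individual ratings
for n in (5, 25, 100):
    se = sd / math.sqrt(n)
    print(f"n = {n:3d}: mean rating uncertain by roughly +/- {2 * se:.2f} points")

# n =   5: roughly +/- 0.89 points (on a 5-point scale!)
# n =  25: roughly +/- 0.40 points
# n = 100: roughly +/- 0.20 points
```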

P.S.: If you are an administrator or on an evaluation committee and would like to use student evaluations of teaching, the article by Linse (2017) might be helpful. They give specific advice on how to use student evaluations both in decision making as well as when talking to the teachers whose evaluations ended up on your desk.

Literature:

Ambady, N., & Rosenthal, R. (1993). Half a minute: Predicting teacher evaluations from thin slices of nonverbal behavior and physical attractiveness. Journal of Personality and Social Psychology, 64(3), 431–441. https://doi.org/10.1037/0022-3514.64.3.431

Boring, A. (2017). Gender biases in student evaluations of teachers. Journal of Public Economics, 145(13), 27–41. https://doi.org/10.1016/j.jpubeco.2016.11.006

Boring, A., Ottoboni, K., & Stark, P. B. (2016). Student evaluations of teaching (mostly) do not measure teaching effectiveness. ScienceOpen Research, 1–36. https://doi.org/10.14293/S2199-1006.1.SOR-EDU.AETBZC.v1

Carpenter, S. K., Wilford, M. M., Kornell, N., & Mullaney, K. M. (2013). Appearances can be deceiving: Instructor fluency increases perceptions of learning without increasing actual learning. Psychonomic Bulletin & Review, 20(6), 1350–1356. https://doi.org/10.3758/s13423-013-0442-z

El-Alayi, A., Hansen-Brown, A. A., & Ceynar, M. (2018). Dancing backward in high heels: Female professors experience more work demands and special favour requests, particularly from academically entitled students. Sex Roles. https://doi.org/10.1007/s11199-017-0872-6

Eva, N. (2018), Annotated literature review: student evaluations of teaching (SET), https://hdl.handle.net/10133/5089

Ewing, V. L., Stukas, A. A. J., & Sheehan, E. P. (2003). Student prejudice against gay male and lesbian lecturers. Journal of Social Psychology, 143(5), 569–579. http://web.csulb.edu/~djorgens/ewing.pdf

Kornell, N. & Hausman, H. (2016). Do the Best Teachers Get the Best Ratings? Front. Psychol. 7:570. https://doi.org/10.3389/fpsyg.2016.00570

Linse, A. R. (2017). Interpreting and using student ratings data: Guidance for faculty serving as administrators and on evaluation committees. Studies in Educational Evaluation, 54, 94- 106. https://doi.org/10.1016/j.stueduc.2016.12.004

MacNell, L., Driscoll, A., & Hunt, A. N. (2015). What’s in a name: Exposing gender bias in student ratings of teaching. Innovative Higher Education, 40(4), 291– 303. https://doi.org/10.1007/s10755-014-9313-4

Nasser-Abu Alhija, F. (2017). Teaching in higher education: Good teaching through students’ lens. Studies in Educational Evaluation, 54, 4-12. https://doi.org/10.1016/j.stueduc.2016.10.006

Pittman, C. T. (2010). Race and Gender Oppression in the Classroom: The Experiences of Women Faculty of Color with White Male Students. Teaching Sociology, 38(3), 183–196. https://doi.org/10.1177/0092055X10370120

Rubin, D. L., & Smith, K. A. (1990). Effects of accent, ethnicity, and lecture topic on undergraduates’ perceptions of nonnative English-speaking teaching assistants. International Journal of Intercultural Relations, 14, 337–353. https://doi.org/10.1016/0147-1767(90)90019-S

Sanchez, C. A., & Khan, S. (2016). Instructor accents in online education and their effect on learning and attitudes. Journal of Computer Assisted Learning, 32, 494–502. https://doi.org/10.1111/jcal.12149

Stark, P. B., & Freishtat, R. (2014). An Evaluation of Course Evaluations. ScienceOpen, 1–26. https://doi.org/10.14293/S2199-1006.1.SOR-EDU.AOFRQA.v1

Stonebraker, R. J., & Stone, G. S. (2015). Too old to teach? The effect of age on college and university professors. Research in Higher Education, 56(8), 793–812. https://doi.org/10.1007/s11162-015-9374-y

Uttl, B., White, C. A., & Gonzalez, D. W. (2017). Meta-analysis of faculty’s teaching effectiveness: Student evaluation of teaching ratings and student learning are not related. Studies in Educational Evaluation, 54, 22-42. http://dx.doi.org/10.1016/j.stueduc.2016.08.007

#TeachingTuesday: Some things I read about making good lecture videos

Just imagine you had written an article on “Student Satisfaction and Learning Outcomes in Asynchronous Online Lecture Videos”, like Choe et al. (2019) did. What excellent timing to inform teaching decisions all around the world!

Choe et al. compare eight different video styles (all of which can be watched as supplementary material to the article, which is really helpful!): six to replace “normal lectures” and two that complement them. They investigate the influence of video style both on how much students learn from each style and on how students feel watching them.

The “normal lecture” videos were different combinations of the lecturer and information on slides/blackboards/tablets/…: a “classic classroom” where the lecturer is filmed in front of a blackboard and a screen, a “weatherman” style in front of a green screen on which the lecture slides are later imposed, a “learning glass” where the lecturer is seen writing on a board, a “pen tablet” where the lecturer can draw on the slides, a “talking head” where the lecturer is superimposed on the slides in a little window, and “slides on/off” where the video switches between showing slides or the lecturer.

And the good news: turns out that the style you choose for your recorded video lecture doesn’t really affect student learning outcomes very much. Choe et al. did, however, deduce strengths and weaknesses of each of the lecture formats, and from that come up with a list of best practices for student engagement, which I find very helpful. Therein, they give tips for the different stages of video production, related to the roles (lecturer and director of the video) and the content covered in the videos, and these are really down-to-earth, practical tips like “cooler temperatures improve speaker comfort”. And of course all the things like “not too much text on slides” and “readable font” are mentioned, too; always a good reminder!

One thing they point out that wasn’t so clear to me before is that it’s important that the lecturer is visible and that they maintain eye contact with the camera. Of course that adds a layer of difficulty to recording lectures — and a lot of awkward feelings and extra work in terms of what to wear and actually having to shower and stuff — but in the big scheme of things, if it creates a better user experience, maybe it’s not such a big sacrifice. Going forward, I’ll definitely keep that in mind!

Especially making the distinction between the roles of “lecturer” and “director” was a really helpful way for me to think about making videos, even though I am playing both roles myself. It reminds me of how many considerations (should) go into a video besides “just” giving the lecture! If you look at the picture above, you’ll see that I’ve started sketching out what I want to be able to show in a future video, and what that means for how many cameras I need, where to place them, and how to orient them (portrait or landscape). When I made the (German) instructions for kitchen oceanography, I filmed myself in portrait mode, thinking of posting them to my Instagram stories, but then ended up editing a landscape video, for which I then needed to fill all the awkward space around the portrait movie. It would have been helpful to think about it in these terms before!

Choe et al. even include a “best practice” video in their supplementary material, which I find super helpful. Because even though in some cases it might be feasible to professionally produce lectures in a studio, that’s not what I (or most people frantically producing video lectures these days) have access to. So seeing something that is professionally produced, but that doesn’t (seem to) require incredibly complicated technology or fancy editing, is reassuring. In fact, even though the lecturer appears to have been filmed in front of a green screen, I think in the end it’s not too different from what I did in the (German) instructions for kitchen oceanography mentioned above: a lecturer on one side, the slides (in a portrait format) on the other.

In addition to the six “lecture” videos, there was a “demo” video where the lecturer showed a simple demonstration, and an “interview” video, where the lecturer was answering questions that were shown on a screen (so no second person there). Those obviously can’t replace a traditional lecture, but can be very useful for specific learning outcomes!

The “demo” type video is the one I am currently most interested in, since that’s where I can best contribute my expertise in a niche where other people appreciate getting some input. Also, according to Choe et al., students found that type of video engaging, entertaining, and of high learning value. All the more reason for me to do a couple more demo videos over the next couple of days, I’m already on it!

References:

Choe, R. C., Scuric, Z., Eshkol, E., Cruser, S., Arndt, A., Cox, R., Toma, S. P., Shapiro, C., Levis-Fitzgerald, M., Barnes, G., & Crosbie, H. (2019). Student satisfaction and learning outcomes in asynchronous online lecture videos. CBE—Life Sciences Education, 18(4). https://doi.org/10.1187/cbe.18-08-0171

Anna is answering questions on our Nature article at #ShareEGU20

It feels like an enormously long time ago that our article on “ice front blocking of ocean heat transport to an Antarctic ice shelf” got published in Nature, but it was in fact only a little more than two months ago. Only right after, life changed so drastically that it feels as if it’s been decades since…

But anyway, here is your chance to ask any and all questions related to that article that you might have! At #shareEGU20, EGU’s “sharing geosciences online” event, anyone can log onto their system and ask the main author of the article, Anna Wåhlin, all they ever wanted to know! How cool is that?

Using campsites for scicomm

Last summer at the Science in Public conference in Manchester, I heard a talk by Anna Woolman on science communication at campsites. It stuck with me as a really good idea. Now I came across the recent article by Woolman (2020) on the study that I found so inspiring, so here are my thoughts on it for you!

Reaching non-specialist audiences and engaging them with science at an affordable seaside campsite

The idea behind the study is that while science days, science festivals and those kinds of events are great opportunities for the interested public to engage with cutting-edge research or other interesting science, the problem is that they will only ever engage the interested public. As long as people have to choose to specifically enter a space (whether physically or on the internet) where scicomm happens, doing so actually needs to be made a priority: a priority in how time and money are spent, and in competition with many other things that might be a lot more important to people. So how can people be reached without relying on them to make the effort to enter a scicomm space?

In this study, the scicomm topic was “insects as a sustainable food source”. The way they did it was a pop-up kitchen in the middle of a campsite, where they offered a menu made from insects as well as information and conversations on the topic. And here is what they recommend:

Affordable campsites

In the study, an affordable campsite near the seaside was chosen in order to reach audiences who might not make an active effort to engage with science otherwise. The assumption that those audiences are more likely to be found at affordable, local campsites than at high-end holiday resorts is grounded in the literature.

(Also, a campsite can provide infrastructure that will make your experience as scicommer a lot more enjoyable. Parking spots, toilets, food, all within easy reach…)

People have time

In the study, Woolman found that since people were on vacation and had time, engagement wasn’t just the sadly all-too-common “grab and go” of scicomm giveaways; instead, extended engagement (longer than 10 minutes) could easily take place. This is important because other scicomm activities in spaces where people just happen to be often take place in very busy locations like shopping malls or even train stations, where a lot of people pass through, but where engagement is made difficult because people are there for a specific purpose which they want to get done before going some place else. At a campsite, on the other hand, people have a lot of time on their hands and are often grateful for some kind of unexpected stimulation, or for the opportunity to have the kids kept busy for a couple of minutes.

School holidays or a weekend in November?

Depending on who your target audience is and what type of engagement you are going for, it might be a good idea to do your scicomm activity during the busy times. During the summer school holidays, for example, campsites are typically at their busiest, with all sorts of people. If you were to target families with school-aged kids, this would be the time to do your activity! But of course it’s also possible that your target audience is pensioners — then choosing a weekend or even a week day outside of the school holidays might be a better idea! It might not be as busy in total numbers, but the density of your target audience might be relatively higher.

So what now?

I really like the idea of doing pop-up scicomm at campsites. At my friend Sara‘s windsurfing school, this was happening when both she and other Kiel Science Outreach Campus (KiSOC; I was the project’s scientific coordinator at that time) PhD students did scicomm on their projects on the beach (in the picture you see a 3D movie on water striders being test-watched). Another project was related to sunscreen — very appropriate to do this on the beach! That experience made me want to do more scicomm, both specifically at that place and more generally in similar settings, and I’ve been thinking about it for two reasons.

#WaveWatching

As you know, my pet project is wave watching. And what better place to do it than on a beach? And that beach specifically is great because it offers a variety of features that influence a wave field (Check out a short wave watching movie from that beach here), plus I enjoy hanging out there (which I think is a really important factor when planning a scicomm activity — it needs to be enjoyable! If it’s not, that will show and put people off your science, no matter how awesome it might be).

I’ve been thinking about offering wave watching excursions there and actually had some scheduled this spring and summer, where I would meet up with people, walk to different spots on the beach, and explain what physics they can observe there. Well, there is always next year, or my wave watching Instagram @fascinocean_kiel :-)

GEO-Tag der Natur

I’m the programme manager of the German “GEO-Tag der Natur” festival on biodiversity. As part of my job, I’ve been thinking about engaging different audiences through new formats, and this seems like a great idea. For GEO-Tag der Natur, there are typically excursions into interesting biotopes, where experts on that type of biotope explain the animals and plants that can be found there. Usually we advertise excursions in spots that are especially interesting in terms of biodiversity, but even just a regular beach, a forest, or the nature around wherever a campsite is located is super interesting, and there is so much to discover anywhere! So using campsites as home bases for our excursions is definitely something that I want to try when it’s possible again. It’s also attractive for the campsites themselves to be able to offer these kinds of events, so it’s a win-win!

What are your thoughts on doing scicomm on a campsite? Let me know!

References:

Woolman, A. (2020) ‘Reaching non-specialist audiences and engaging them with science at an affordable seaside campsite’. Research for All, 4 (1): 6–15. DOI https://doi.org/10.18546/RFA.04.1.02

Our Nature article in 20 tweets

(Not true, there were 22 tweets, but apparently I can’t count! :-D)

For those of you who don’t follow my Twitter, here is what I posted over there the day our Nature paper got published:

Published online in @Nature today: “Ice front blocking of ocean heat transport to an Antarctic ice shelf” by @a_wahlin @nadsteiger @dareliuselin @telemargrete @meermini (Yes! That’s me!!! :-)) @ClnHz @ak_mazur et al. What is it all about? A thread. 1/x

And here is the link to our Nature article!

The Antarctic ice sheet has been losing mass recently. Ice sheets consist of the “grounded” parts that rest on land or sea floor, and the parts that float on the sea. If the floating parts get thinner, the grounded part “flows off” land much more easily (pic by @dareliuselin) 2/x

The floating parts of ice sheets, the ice shelves, break off & melt. But why are the ice shelves thinning? Mainly because of melting from below. We are thus concerned with what controls how much warm(-ish) water is transported across the Antarctic continental shelf towards the ice (Sketch: Kjersti Daae) 3/x

I’m writing “warm(-ish) water” because the water is only 1-2°C “warm”, but that’s still warmer than the freezing point. IF this warm(-ish) water comes into contact with ice, it will nibble away at it. But that’s a big IF, and one that we set out to investigate 4/x
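Blog addition, not part of the original thread: to put a rough number on “warmer than the freezing point”, note that the freezing point of seawater decreases with both salinity and pressure. A crude linearization of the standard seawater freezing-point formula (so treat these coefficients as approximate) is

$$ T_f(S, p) \approx -0.057\,S - 7.5\times 10^{-4}\,p $$

with $T_f$ in °C, salinity $S$ in g/kg, and pressure $p$ in dbar. For $S \approx 34.5$ g/kg this gives roughly -2.0°C at the surface and roughly -2.3°C at 500 dbar, about the depth of the ice shelf base. So water of +1°C arriving down there is more than 3°C above the local freezing point, which is plenty to melt ice.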

From existing data, it seemed that the shoreward heat flux is much larger than what would be needed to cause the observed melting. But that heat flux was measured not right where the melting is happening, but a lot further offshore 5/x

It’s difficult to measure the heat flux right up to the ice shelf, because Antarctica isn’t the friendliest of environments for research ships, gliders, moorings, etc., especially in winter. Cool toys like floats or CTDs on seals give a lot of data, but not enough yet 5/x

But @a_wahlin, @dareliuselin & team put moorings closer to the ice shelf than ever before, the closest of the three only 700 m from the ice shelf front. There was absolutely no guarantee that the moorings would survive (Pic by @a_wahlin showing @dareliuselin) 6/x

Luckily, despite being threatened by storms, icebergs etc., the moorings recorded for two years, right next to the ice shelf, giving us better estimates of heat fluxes than were ever available before 7/x

While the moorings were out in Antarctica, we went to LEGI in Grenoble and worked on the Coriolis rotating platform, basically a 13-m diameter swimming pool on a merry-go-round. SO EXCITING! (Pic by Nadine Steiger) 8/x

It’s really an amazing experience to sit in an office above a swimming pool when both are rotating together. As long as it’s dark outside the tent that covers both, you don’t really notice movement. But when the light comes on it’s very easy to get dizzy! (Pic Samuel Viboud) 9/x

We were not playing on the merry-go-round for two months just for fun, though. Rotating the large water tank is necessary to correctly represent the influence of Earth’s rotation on ocean currents, which is crucial for this research question 10/x
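Blog addition, not part of the original thread: the textbook way to see why the tank must rotate is dynamic similarity. The relative importance of rotation is measured by the Rossby number

$$ Ro = \frac{U}{fL} $$

where $U$ is a typical current speed, $L$ a typical length scale, and $f$ the Coriolis parameter ($f = 2\Omega\sin\varphi$ in the ocean, $f = 2\Omega_{\text{tank}}$ on the platform, with $\Omega$ the rotation rate). Since $U$ and $L$ in the tank are much smaller than in the ocean, the tank has to spin fast enough to bring its Rossby number down to ocean-like (small) values. These symbols are the standard textbook ones, not quoted from the paper.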

In the rotating platform, we built a plastic “ice shelf” that was mounted at the end of a v-shaped plastic “canyon”. We could set up a current and then modify parameters to investigate their influence on the transport towards and underneath the ice shelf (Pic @a_wahlin) 11/x

If you are interested in reading a lot more about this (also about how parts of the team went for a swim in the rotating tank, and about how sick you can get when sitting on a merry-go-round all day every day for weeks), check out @dareliuselin’s blog 12/x

Link to Elin’s blog!

In a nutshell: We put particles in the water and lit them, layer by layer, with lasers. We took pictures of where the particles in each layer were, and with the “particle image velocimetry” (PIV) technique, which turns the displacement of particles between pictures into flow velocities, we got a 3D map of the currents over time 13/x
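Blog addition, not part of the original thread: in case you are wondering how pictures of particles turn into flow maps, here is a minimal Python sketch of the cross-correlation idea at the heart of PIV. The function names are my own invention, and this is emphatically a toy illustration, not the actual processing pipeline used at LEGI.

```python
# Toy sketch of the PIV idea: estimate how far particle patterns moved
# between two frames by cross-correlating small "interrogation windows".
import numpy as np

def piv_displacement(win_a, win_b):
    """Estimate the (dy, dx) shift of win_b relative to win_a
    via FFT-based circular cross-correlation."""
    fa = np.fft.fft2(win_a - win_a.mean())
    fb = np.fft.fft2(win_b - win_b.mean())
    corr = np.fft.fftshift(np.real(np.fft.ifft2(np.conj(fa) * fb)))
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    center = np.array(corr.shape) // 2
    return np.array(peak) - center  # displacement in pixels

def piv_field(frame_a, frame_b, win=32):
    """Chop both frames into win x win windows and estimate the
    displacement in each; returns an (ny, nx, 2) array."""
    ny, nx = frame_a.shape[0] // win, frame_a.shape[1] // win
    field = np.zeros((ny, nx, 2))
    for j in range(ny):
        for i in range(nx):
            box = np.s_[j * win:(j + 1) * win, i * win:(i + 1) * win]
            field[j, i] = piv_displacement(frame_a[box], frame_b[box])
    return field  # divide by dt, multiply by metres/pixel for velocity

# Toy test: random "particle" image, second frame shifted 3 px to the right
rng = np.random.default_rng(42)
frame_a = rng.random((128, 128))
frame_b = np.roll(frame_a, 3, axis=1)
print(piv_field(frame_a, frame_b).mean(axis=(0, 1)))  # approx. [0., 3.]
```

The real processing adds things like sub-pixel peak fitting, overlapping windows, and outlier filtering, and the laser-lit layers are what extend the 2D fields into a 3D picture.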

And what we found, both from the data from the moorings in Antarctica (which we were lucky enough to recover) and from the tank experiments at the rotating platform, was really interesting: ice front blocking of ocean heat transport to the Antarctic ice shelf 14/x

The ice shelf, at its most offshore part, still reaches down to 250-500 m. That means that the depth of the water column changes drastically at the front of the ice shelf. And that has important consequences for the depth-independent part of the current 15/x

The barotropic, i.e. depth-independent, part of the current is blocked by the step shape of the ice front (just as by the plastic front in the tank). Only the baroclinic (depth-varying) part can flow below the ice, but that part is much smaller 16/x
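Blog addition, not part of the original thread: for those who like equations, the standard textbook decomposition behind “barotropic” and “baroclinic” (not quoting the paper’s exact definitions) is

$$ \bar{u} = \frac{1}{H}\int_{-H}^{0} u(z)\,\mathrm{d}z, \qquad u'(z) = u(z) - \bar{u} $$

where $H$ is the water depth: the depth average $\bar{u}$ is the barotropic part, and the residual $u'(z)$ is the baroclinic part. Roughly speaking, the rotating flow tends to keep water columns at constant height, so the barotropic part cannot simply cross the sudden change in $H$ at the ice front, while the depth-varying part can still make it underneath.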

In the tank, we changed the shape of the ice front to confirm that it really is the large step that blocks the current. Other configurations lead to different flow patterns. But the large step shape is what the Getz Ice Shelf system looks like, and other systems, too 17/x

What that means is that looking at the density structure of the water column, and thus at the relative magnitude of the barotropic and baroclinic components of the current, is a better indicator of ice shelf melting than the heat transport onto the continental shelf 18/x

It also shows the importance of accurately representing the step at the ice shelf front in models in order to simulate the heat transport towards the ice as well as the melting of the ice shelves 19/x

TL;DR: Article published @Nature on ice front blocking of ocean heat transport to an Antarctic ice shelf, and I contributed to the exciting study and feel so honored to have been part of this amazing project with @a_wahlin, @dareliuselin, @clnhz et al. (Pic Samuel Viboud) 20/x

Playing in a 13-m-diameter pool on a merry-go-round results in Nature article

A long, long time ago (ok, in fall of 2017) I got the chance to join Elin Darelius and Anna Wåhlin’s team for a measuring campaign at the Coriolis platform in Grenoble for several weeks. I was there officially in an outreach-officer-like role: to write and tweet about the experiments, conduct “ask me anything” events, write guest posts for newsletters and websites, etc. A lot of my work from that time is documented on Elin’s blog, where I blogged almost daily during those periods. And we had so many amazing pictures to share (mostly green; that’s because of the lasers we used).

Turbulence in a rotating system is quasi-two-dimensional, so the whole water column rotates in this eddy, which we accidentally made when moving parts of the structure in the tank

But I was extremely lucky: Neither Elin nor Anna nor anyone else on the team saw me as “just the outreach person”, which is a role that outreach people sadly sometimes get pushed into. Instead, they knew me as an oceanographer, and that’s how I was integrated into the team: We discussed experiments all the way from the setup in an empty tank (below you see Elin with her “Antarctica”)

No matter how carefully you have planned your experiments, once you start actually conducting them, there is always something that doesn’t work quite the way you imagined. But since time in facilities like the Coriolis platform is limited, it is hugely important to think on your feet, come up with ideas quickly, and fix things. That is the part of science that I enjoy the most: being confronted with a problem “in the field” and having to fix it right then and there, using whatever limited equipment and information you have available.

Speaking of “limited information”: Sometimes you have to make educated guesses about what’s in the data you are currently collecting in order to decide how to proceed, without being able to know for sure. We took tons of pictures and videos, and obviously also observed by eye what was happening in the tank, but in the end the “real” data collection happened through images that we couldn’t analyse on the spot (and that’s what the research part between fall of 2017 and now was about: many, many hours of computing and analysing and discussing, rinse and repeat).

Grenoble was also an amazing experience just because of the sheer size of the Coriolis platform. Below you see the operations room, an office that is built above the tank and rotates with it. And let me tell you, being on a merry-go-round all day long isn’t for everybody!

I also really enjoy the hands-on work. Below you see me in waders in the 13-m-diameter rotating pool (while it’s rotating, of course), using a broom to sweep up the “neutrally buoyant” particles that we use to track the flow and that had settled on the topography overnight (so much for “neutrally buoyant”, but close enough). Sometimes it comes in handy to be an early bird and do this work before everybody else gets up, so the tank has the chance to settle back into solid-body rotation before experiments start for the day.
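For the curious: the reason the tank needs time to settle is Ekman spin-up. The textbook estimate of the spin-up timescale is

$$ \tau \sim \frac{H}{\sqrt{\nu\,\Omega}} $$

where $H$ is the water depth, $\nu$ the kinematic viscosity of water (about $10^{-6}\,\mathrm{m^2\,s^{-1}}$), and $\Omega$ the rotation rate. With made-up but plausible numbers, say $H \approx 1$ m and $\Omega \approx 0.1$ s$^{-1}$ (these are my assumptions, not the actual tank parameters), $\tau$ comes out on the order of an hour, which is why sweeping early in the morning and then leaving the tank alone works so well.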

Here you see the layer of particles in different stages of disturbance, and me having fun with it (it might not be obvious from the picture, but I’m standing in waist-deep water there)

But we weren’t just playing all day long for weeks, of course. There were times of intense discussion of preliminary results. Exciting times! And those discussions only intensified once all the data was in and could be analysed in more depth.

I loved being part of the whole process and contributing to this exciting publication now!

#SciCommSunday: The reason why I choose to post selfies on my #SciComm Social Media

“I don’t want my face on the internet!”, “My science should speak for itself, it shouldn’t matter who I am as a person!”, “I just don’t like what I look like in pictures!”, “People won’t perceive me as professional when I include selfies in my science communication work!”: There are many reasons for not posting selfies on the internet, and I sympathise with many of them. However, I have chosen (and continue to choose) to post the occasional selfie. Why is that?

The main goal I am trying to achieve with my scicomm Instagram @fascinocean_kiel is to show that exciting science (specifically ocean physics) can be discovered EVERYWHERE if you are open to seeing it. This means that I post, pretty much daily, pictures of water that I take on walks along any kind of river, lake, or ocean, but also in puddles, sinks, or tea cups.

#ThisIsWhatAScientistLooksLike

But in order to make my Insta relatable to other people, I find it important to put these pictures in the context of my life. Yes, I live on the Baltic Sea coast and therefore have the opportunity to see “the ocean” (well, kinda) on an almost daily basis, which is reflected in my Insta. But I commute to work in Hamburg (where I see Elbe river and the Port of Hamburg, which you also see quite a lot), and I travel a lot throughout Germany and beyond. Some days I’m on the train — on those days you’ll often see pictures of water taken from the train window. Or if I am giving workshops in locations with fancy taps, you will see those. My point is: You can discover oceanography everywhere. If you choose to look for it.

But then who does get this excited about this kind of stuff? Well, I do. And this is where #ThisIsWhatAScientistLooksLike comes in. I’m not wearing a lab coat, and I am not even observing this science as part of my job. I’m not even employed as a scientist any more, nor do I want to be. But I didn’t lose my identity as a scientist when I decided to stop pursuing an academic career. That was a huge fear I had when I was in the process of wanting out of academia: that I would be a failed scientist if I left, even if I left because I would rather be somewhere else. So for me, showing that I am still a scientist, even if that’s not my day job anymore, is my way of offering myself as the role model I wish I had had during that time, showing that leaving academia doesn’t make you any less of a scientist.

Of course, #ThisIsWhatAScientistLooksLike also includes other aspects, for example making women or other minorities in science more visible. Or showing that there is no one “correct” way of being a scientist. For example the clothes you wear or how much effort you put into looking put together are in no way correlated to how serious you are about your science. Contributing to spreading that message is a nice side effect for me.

But does posting selfies do anything to how people perceive scientists?

#ScientistsWhoSelfie

There is a 2019 study by Jarreau et al. that looked at exactly this. They compared different kinds of Instagram posts, some showing selfies of scientists, others showing only lab equipment or other pictures of the work. And they found that posting selfies does indeed have an impact on how scientists are perceived.

Scientists posting selfies (as opposed to those only posting “work stuff”) were perceived as significantly warmer. Appearing warm is definitely desirable in this context, as warmth is a component of trustworthiness, and as scientists we obviously want to both be, and be perceived as, trustworthy. In the study, that perception was created when selfies were used.

Another finding is that posting selfies does not result in scientists being perceived as less competent, for either male or female scientists. So much for the fear mentioned above that posting selfies will make you appear less serious about your work! Or does it? Note that the study of course does not guarantee that nobody will ever think less of you for posting selfies. There might be people you work with, or people who see your selfies online more generally, who think any number of weird things; in general, though, this does not appear to be the case. But you know your bosses, your community, your life best, so if this is a concern you have, you ultimately need to weigh the potential benefits of posting selfies against that risk. In my case, I have decided that I can totally live with what some people might think about me posting selfies, because I know that the people who matter to me don’t think less of me for it. Additionally, I have gotten a lot of feedback that people actually enjoy seeing the occasional selfie on my Insta, because it makes it more relatable.

As a woman, I also find it important to post selfies, because the study showed that this can contribute to science being perceived as less “exclusively male”. The common stereotype of what a scientist looks like is still, to this day, an old white male (in a lab coat and with messy hair). Of course there are plenty of those around, but there are so many brilliant and inspiring women out there, too, and I’d like to see that stereotype change.

All in all, the study’s results show that selfies can potentially help change attitudes towards scientists for the better. The study doesn’t explore the mechanisms through which this happens (it might depend on, for example, facial expressions, features of the background, or tons of other things), so it is by no means guaranteed to work for every selfie posted on the internet (and how many selfies do people need to see for the effect to kick in? What does the ratio to “science stuff only” pictures need to be? How long does the effect last?). In any case, to me this study is indication enough that my posting selfies might have all the intended consequences, and that’s reason enough for me to keep choosing to post them. And I encourage you to check out the study and consider posting selfies, too!

P.S.: This picture is clearly not a selfie, it was taken by my brilliant colleague Sebi Berens (www.sebiberensphoto.com / @sebiberensphoto). Thank you, Sebi!

Literature:

Jarreau PB, Cancellare IA, Carmichael BJ, Porter L, Toker D, Yammine SZ (2019) Using selfies to challenge public stereotypes of scientists. PLoS ONE 14(5): e0216625. https://doi.org/10.1371/journal.pone.0216625