
What does “sensemaking” really mean in the context of learning about science? (Reading Odden & Russ, 2019)

I read the article “Defining sensemaking: Bringing clarity to a fragmented theoretical construct” by Odden and Russ (2019), and there are two main things I loved about it: I realized that “sensemaking” is the name of an activity I immensely enjoy under certain conditions, and being able to put words to that activity made me really happy! And I found it super helpful that the differences between “sensemaking” and other concepts like “explaining” or “thinking” were pointed out, because that gave me an even clearer idea of what is meant by “sensemaking”.

What is sensemaking? The definition given in the Odden and Russ (2019) article is simple:

Sensemaking is a dynamic process of building or revising an explanation in order to “figure something out”—to ascertain the mechanism underlying a phenomenon in order to resolve a gap or inconsistency in one’s understanding.

Odden and Russ discuss that in the science education literature, sensemaking has previously been used to mean three different things that can all be reconciled under this definition, but that have mostly been discussed independently before:

  1. An approach to learning: Sensemaking can mean really wanting to figure something out by yourself — making sense of an intriguing problem by bringing together what you know, asking yourself questions, building and testing hypotheses, but not asking other people for the correct solution. This is my approach to escape games, for example — I hate using the help cards! I know that it should be possible to figure the puzzles out, so I want to do it myself! This approach is obviously desirable in science learners, since they are not just relying on memorizing responses or assembling surface-level knowledge. They really want to make sense out of something that did not make sense before.
  2. A cognitive process: In this sense, sensemaking is really about how students bring together pieces of previous knowledge and experiences, and new knowledge, and how they integrate them to form a new and bigger coherent structure, for example by using analogies or metaphors.
  3. A way of communicating: Sensemaking then is the collaborative effort to make sense by bringing together different opinions, or to construct an explanation and then critique it in order to make sure the arguments are watertight. This can happen both using technical terms and everyday language.

And now how is “sensemaking” different from other, seemingly similar terms? (Or, as the authors say, how can we differentiate sensemaking “from other ‘good’ things to do when learning science”?) This is my summary of the arguments from the article:

Thinking. Compared with sensemaking, thinking is a lot broader. One can do a lot of thinking without attempting to create any new sense. Thinking does not require the critical approach that is essential to sensemaking.

Learning. While sensemaking is a form of learning, there are a lot of other forms that don’t include sensemaking, for example memorization.

Explaining. Sensemaking requires the process of “making sense” of something that previously did not make sense; explaining does not necessarily require that. Depending on the context, explanations can sometimes be generated from previous knowledge without building any new relationships.

Argumentation. Argumentation is a much wider term than sensemaking — one can for example argue with the goal of persuading someone else rather than building a common understanding and making sense out of information.

Modeling. There is a great overlap between modeling and sensemaking, but sensemaking is typically more dynamic and short-term, whereas modeling is a more formal activity that can take place over days and weeks, sometimes with the purpose of communicating ideas.

I found reading this article enlightening because it gives me a language to talk about sensemaking, to articulate nuances, that I previously did not have. By reflecting on situations where I really enjoy sensemaking (another example is wave watching: I am trying to make sense of what I see by running through questions in my head. Can I observe what causes the waves? Is their behavior consistent with what I would expect given what I can observe about the topography? If not, what does that tell me about the topography in places where I can’t observe it?) and on others where I don’t (thinking of times in school when I did not see the point of trying to make sense of something [as in making all the individual pieces of previous knowledge and new information fit together coherently without conflict] and just needed to go through the motions to pass a test or something), I find it intriguing to think about why I sometimes engage in the process and enjoy it, and sometimes I don’t even try to engage.

How does it work for you: do you know under what conditions you engage in sensemaking, and under which you don’t?

Odden, T. O. B., & Russ, R. S. (2019). Defining sensemaking: Bringing clarity to a fragmented theoretical construct. Science Education, 103(1), 187-205.

Asking for the “nerd topic” when introducing workshop participants to each other to foster self-disclosure to create community

I am currently teaching a lot of workshops on higher education topics where participants (who previously didn’t know each other, or me) spend 1-1.5 days talking about topics that can feel emotional and intimate, and where I want to create an environment that is open and full of trust, and where connections form that last beyond the time of the workshop and help participants build a supportive network. So a big challenge for me is to make sure that participants quickly feel comfortable with each other and me.

As I am not a big fan of introductory games and that sort of thing, for a long time I just asked them to introduce themselves and mention the “one question they need answered at the end of the workshop to feel like their time was well invested” (a way to put a lot of pressure on the instructor! But I really like that question, and in any case, it’s better to know the answer than to be constantly guessing…).

For the last couple of workshops, I have added another question, and that is to ask participants to quickly introduce us to their “nerd topic”*, which we define as the topic that they like to spend their free time on, wish they could talk about at any time and with anyone, and that just makes them happy. For me, that’s obviously kitchen oceanography!

Introductions in my workshops usually work like this: I go first and introduce myself. I make sure not to talk about myself in more detail than I want them to talk about themselves, and not to include a lot of organizational info at this point, so that I am not building a hierarchy of me being the instructor who gets to talk all the time and them being the participants who only get to say a brief statement each when I call on them. I model the kind of introduction I am hoping for from them. Then I call on people in the order they appear on my zoom screen and they introduce themselves. (I hate the “everybody pass the word on to someone who hasn’t spoken yet!” thing, because as a participant, making sure I call on someone who really hasn’t spoken yet and don’t forget anyone takes up all my mental capacity and is hugely stressful. So when I am the workshop lead, I call on people myself and check off on a list who has spoken already.)

Including the “nerd topic” question has worked really well for me. Firstly, I LOVE talking about kitchen oceanography, and getting to talk about it (albeit really briefly) in the beginning of a workshop (when I am usually a little stressed and flustered) makes me happy and relaxes me. My excitement for kitchen oceanography shows in the way I speak, and I get positive feedback from participants right away. Even if kitchen oceanography isn’t necessarily their cup of tea, they can relate to the fascination I feel for a specific topic that not many other people care for.

And the same happens when, one after the other, the other participants introduce themselves. Nerd topics can be anything; in recent workshops, topics have ranged from children’s books to reading about social justice, from handcrafts to gardening, from cooking beetroots with spices to taste like chocolate to fermenting all kinds of foods, from TV series to computer games, from pets to children, from dance to making music. People might not come forward with their nerdiest nerd topics, or they might make them sound nerdier than they actually are (who knows?), but so far every nerd topic has been met with nods and smiles and positive reactions in the group, and it is very endearing to see people light up when they talk about their favorite things. Participants very quickly start referencing other people’s nerd topics and relating them to their own, and a feeling of shared interests (or at least shared nerdiness) and of community forms.

Since they fit so well with the content of my workshops, I like to come back to nerd topics throughout. When speaking about motivation, they are great for reflecting on our own motivation (what makes you want to spend your Saturday afternoons and a lot of money on this specific topic?). When speaking about the importance of showing enthusiasm in teaching, they are a perfect demonstration of how people’s expressions change from talking about their job title and affiliation to talking about their nerd topic. Also, practicing designing intriguing questions is easier when the subject is something you are really passionate about. Nerd topics are also great as examples to discuss the difference between personal and private — sharing personal information, showing personality, is a great way to connect with other people, but it does not mean that we need to share private information, too. And if participants are thinking about their USP when networking online, connecting their field of study with their nerd topic always adds an interesting, personal, unique touch.

Maybe “nerd topics” are especially useful for the kind of workshops I teach and not universally the best icebreaker question. In any case, for my purposes they work super well! But no matter the nature of the workshop: self-disclosure has been shown to lead to social validation and the formation of professional relationships, both in online professional communities (Kou & Gray, 2018) and in classrooms (Goldstein & Benassi, 1994) and other settings. Listening to others disclosing information about themselves makes people like the other party better. But there is some reciprocity in this: openness fosters openness, and as soon as the roles are reversed, the second person disclosing information can catch up on being liked, and the more is disclosed from both sides, the more the liking and other positive emotions like closeness and enjoyment grow (Sprecher et al., 2013). So maybe asking about participants’ “nerd topics” is a good icebreaker question for your classes, too?

*While I really like the longer form of the question, I’m actually not super happy with the term “nerd topic” itself. But I don’t have a good and less charged alternative. If you have any suggestions, I’d love to hear them!

Goldstein, G. S., & Benassi, V. A. (1994). The relation between teacher self-disclosure and student classroom participation. Teaching of psychology, 21(4), 212-217.

Kou, Y., & Gray, C. M. (2018). “What do you recommend a complete beginner like me to practice?”: Professional self-disclosure in an online community. Proceedings of the ACM on Human-Computer Interaction, 2(CSCW), 1-24.

Sprecher, S., Treger, S., & Wondra, J. D. (2013). Effects of self-disclosure role on liking, closeness, and other impressions in get-acquainted interactions. Journal of Social and Personal Relationships, 30(4), 497-514.

“Invisible learning” by David Franklin

Several things happened today.

  1. I had a lovely time reading in the hammock
  2. I tried to kill two birds with one stone (figuratively of course): writing a blog post about the book I read (which I really loved) and trying a new-to-me format of Instagram posts: a carousel, where one post slides into the next as you swipe (so imagine each of the images below as three square pictures that you slide through as you look at the post)

Turns out that even though I really like seeing posts in this format on other people’s Instagram, it’s way too much of a hassle for me to do it regularly :-D

Also a nightmare in terms of accessibility without proper alt-text, and for google-ability of the blog post. So I won’t be doing this again any time soon! But I’m still glad I tried!

And also: check out the book!

Invisible Learning: The magic behind Dan Levy’s legendary Harvard statistics course. David Franklin (2020)

Metaphors of learning (after Ivar Nordmo and the article by Sfard, 1998)

On Thursday, I attended a workshop by Ivar Nordmo, in which he talked about two metaphors of learning: “learning as acquisition” and “learning as participation”. He referred to an article by Sfard (1998), and here is my take-away from the combination of both.

When we talk about new (or new-to-us) concepts, we often describe them with words that have previously been used in other contexts. As we bring the words into a new domain, their meaning might change a little, but the first assumption will be that the new concept we describe by those old words is, indeed, described by those words carrying the same old, familiar meaning.

When concepts are described by metaphors that developed in a different context, or are commonly used in different contexts, an easy assumption is that all their properties are transferable between contexts. On the one hand, that makes it easy to quickly grasp new concepts; on the other hand, that easy assumption is most likely not entirely correct, which can lead us to misunderstand the new concept if we don’t examine our implicit assumptions. And usually we don’t stop to consider whether the words we borrowed from a different context are actually steering our thinking in the new context without us realizing that this might not be appropriate.

The way we think about learning, for example, depends on the language we use to conceptualize it, and there are two metaphors that lead to substantially different ways of understanding learning, with far-reaching consequences.

Learning as acquisition

Learning is commonly defined as “gaining knowledge”. Facts or concepts are building blocks of knowledge that we acquire, accumulate, and construct meaning from. We can test whether people possess knowledge or skills (we might even be able to assess someone’s potential based on their performance). Someone might have a wealth of knowledge. They might be providing teaching and knowledge to someone else, who is receiving instruction and might share it with others. We can transfer knowledge to different applications. We might be academically gifted. In all these cases, we gain possession of something.

We think of knowledge as something we possess: intellectual property rights clearly assign ownership to ideas, and stealing ideas is a serious offence. As with any other expression of wealth, knowledge is guarded and passed on from parents to children, or maybe shared as a special favor, making access difficult for those from less knowledge-affluent circles. It is perfectly fine to admit to wanting to accumulate knowledge just for the fun of it, without intending to use it for anything, the same way it is socially accepted to get rich without considering what that money could and maybe should be used for.

Learning as participation

Changing the language we use to talk about things might also change how we think about the things themselves.

An alternative metaphor to “learning as acquisition” is “learning as participation”. In that metaphor, learning is described as a process that happens in specific contexts and without a clear end point. The focus then is on communicating in the language that a community communicates in and on taking part in the community’s rituals, while simultaneously influencing the community’s language and rituals in a shared negotiation with the goal of building community.

When learning is about participation, it is not private property but a shared activity. This means that the status that, in the acquisition metaphor, comes with being knowledge-rich is now gone. Actions can be successful or failures, but that does not make the actors inherently smart or stupid. They can act one way in one context on a given day, and could act differently at any time.

While the participation metaphor brings up all the positive associations of a growth mindset on the individual level and equal access to learning in society, it is hard to imagine it without preserving parts of the acquisition metaphor. If knowledge is not something we possess within us, how can we even bring it from one situation into the next? How do individual learning biographies contribute to the shared activities? Can someone still be a teacher and someone else a learner?

I find considering these two metaphors really eye-opening as to how much the language we use shapes how we think about the world. I was aware of this, for example, in the debate on how to use gender-neutral language, but I had never applied it to learning before.

The recommendation by Sfard (1998) is not to choose one metaphor, but to carefully consider what is inadvertently implied by the language we use. Meaning transported by metaphors between domains might be buried so deeply that we are unaware of it, yet it can lead us to think wrongly about one domain, unknowingly assuming properties or causalities from a completely different domain, and to make sense in that second domain based on a faulty, assumed understanding. So awareness of the metaphors we use, and reflection on what they do to our thinking, is not only useful but necessary.

I don’t claim to have gotten far with these thoughts yet, but it was definitely eye-opening!

Sfard, A. (1998). On two metaphors for learning and the dangers of choosing just one. Educational researcher, 27(2), 4-13.

Student evaluations of teaching are biased, sexist, racist, prejudiced. My summary of Heffernan’s 2021 article

One of my pet peeves is student evaluations that are interpreted way beyond what they can actually tell us. It might be people not considering sample sizes when looking at statistics (“66.6% of students hated your class!”, “Yes, 2 out of 3 responses out of 20 students said something negative”), or not understanding that student responses to certain questions don’t tell us “objective truths” (“I learned much more from the instructor who let me just sit and listen rather than actively engaging me” (see here)). I blogged previously about a couple of articles on the subject of biases in student evaluations, which were then basically a collection of all the scary things I had read, but in no way a comprehensive overview. Therefore I was super excited when I came across a systematic review of the literature this morning. And let me tell you, looking at the literature systematically did not improve things!

In the article “Sexism, racism, prejudice, and bias: a literature review and synthesis of research surrounding student evaluations of courses and teaching” (2021), Troy Heffernan reports on a systematic analysis of the existing literature of the last 30 years represented in the major databases, published in peer-reviewed English-language journals or books, and containing relevant terms like “student evaluations” in the titles, abstracts or keywords. This resulted in 136 publications being included in the study, plus another 47 that were found in the references of those articles and deemed relevant.

The conclusion of the article is clear: student evaluations of teaching are biased depending on who the students are that are evaluating, on who the instructor is and the prejudices related to characteristics they display, on the actual course being evaluated, and on many more factors not related to the instructor or what is going on in their class. Student evaluations of teaching are therefore not a tool that should be used to determine teaching quality, or to base hiring or promotion decisions on. Additionally, those groups that are already disadvantaged in their evaluation results because of personal characteristics that students are biased against also receive abusive comments in student evaluations that are harmful to their mental health and wellbeing, which should be reason enough to change the system.

Here is a brief overview of what I consider the main points of the article:

It matters who the evaluating students are, what course you teach and what setting you are teaching in.

According to the studies compiled in the article, your course is evaluated differently depending on who the students are that are evaluating it. Female students evaluate on average 2% more positively than male students. The average evaluation improves by up to 6% when given by international students, older students, external students or students with better grades.

It also depends on what course you are teaching: STEM courses are on average evaluated less positively than courses in the social sciences and humanities. And comparing quantitative and qualitative subjects, it turns out that subjects that have a right or wrong answer are also evaluated less positively than courses where the grades are more subjective, e.g. using essays for assessment.

Additionally, student evaluations of teaching depend on even more factors beside course content and effectiveness, for example class size and general campus-related things like how clean the university is, whether there are good food options available to students, what the room setup is like, how easy to use course websites and admission processes are.

It matters who you are as a person

Many studies show that gender, ethnicity, sexual identity, and other factors have a large influence on student evaluations of teaching.

Women (or instructors wrongly perceived as female, for example because of a name or avatar) are rated more negatively than men and, no matter the factual basis, receive worse ratings on objective measures like turnaround time of essays. The way students react to their grades also depends on their instructor’s gender: when students get the grades they expected, male instructors get rewarded with better scores; when their expectations are not met, men get punished less than women. The bias is so strong that young (under 35 years old) women teaching in male-dominated subjects have been shown to receive ratings up to 37% lower.

These biases in student evaluations strengthen the position of an already privileged group: white, able-bodied men of a certain age (ca. 35-50 years old), who the students believe to be heterosexual and who are teaching in their (and their students’) first language, are evaluated a lot more favourably than anybody who does not meet one or several of these criteria.

Abuse disguised as “evaluation”

Sometimes evaluations are also used by students to express anger or frustration, and this can lead to abusive comments. Those comments are not distributed equally between all instructors, though: they are a lot more likely to be directed at women and other minorities, and they are cumulative. The more minority characteristics an instructor displays, the more abusive comments they will receive. This racist, sexist, ageist, homophobic abuse is obviously hurtful and harmful to an already disadvantaged population.

My 2 cents

Reading the article, I can’t say I was surprised by the findings — unfortunately my impression of the general literature landscape on the matter was only confirmed by this systematic analysis. However, I was positively surprised by the very direct way in which problematic aspects are called out in many places: “For example, women receive abusive comments, and academics of colour receive abusive comments, thus, a woman of colour is more likely to receive abuse because of her gender and her skin colour“. On the one hand this is really disheartening to read, because it becomes so tangible and real, especially since, in addition to being harmful to instructors’ mental health and well-being when they contain abuse, student evaluations are also still an important tool in determining people’s careers via hiring and promotion decisions. But on the other hand it really drives home the message and the call to action to change these practices, which I appreciate very much: “These practices not only harm the sector’s women and most underrepresented and vulnerable, it cannot be denied that [student evaluations of teaching] also actively contribute to further marginalising the groups universities declare to protect and value in their workforces.”

So let’s get going and change evaluation practices!


Heffernan, T. (2021). Sexism, racism, prejudice, and bias: a literature review and synthesis of research surrounding student evaluations of courses and teaching. Assessment & Evaluation in Higher Education, 1-11.

An overview of what we know about what works in university teaching (based on Schneider & Preckel, 2017)

I’ve been leading a lot of workshops and doing consulting on university teaching lately, and one request that comes up over and over again is “just tell me what works!”. Here I am presenting an article that is probably the best place to start.

The famous “visible learning” study by Hattie (2009) compiled pretty much all available articles on teaching and learning, for a broad range of instructional settings. Its main conclusion was that the focus should be on visible learning, meaning learning where learning goals are explicit, there is a lot of feedback happening between students and teachers throughout the interactions, and the learning process is an active and evolving endeavour, which both teachers and students reflect on and constantly try to improve.

However, what works at schools does not necessarily have to be the same that works at universities. Students are a highly select group of the general population, the ones that have been successful in the school system. For that group of people, is it still relevant what teaching methods are being used, or is the domain-specific expertise of the instructors combined with skilled students enough to enable learning?

The article “Variables associated with achievement in higher education: A systematic review of meta-analyses” by Schneider & Preckel (2017) systematically brings together what’s known about what works and what doesn’t in university teaching, and summarizes the main findings.

Below, I am presenting the headings of the “ten cornerstone findings” as quotes from the article, but I am providing my own interpretations and thoughts based on their findings.

1. “There is broad empirical evidence related to the question what makes higher education effective.”

Instructors might not always be aware of it, because literature on university teaching was theoretical for a long time (or they just don’t have the time to read enough to gain an overview of the existing literature), but these days there is a lot of empirical evidence of what makes university teaching effective!

There is a HUGE body of literature on studies investigating what works and what does not, but results always depend on the exact context of the study: who taught whom where, using what methods, on what topic, … Individual studies can answer what worked in a very specific context, but they don’t usually allow for generalizations.

To make results of studies more generally valid, scientists bring together all available studies on a particular teaching method or “type” of student or teacher in meta-studies. By comparing studies in different contexts, they can identify success factors of applying that specific method across contexts, thus making it easier to give more general recommendations of what methods to use, and how.

But if you aren’t just interested in how to use one method, but in what design principles you should be applying in general, you might want to look at systematic reviews of meta-studies. Systematic reviews of meta-studies bring together everything that has been published on a given topic and try to distill the essence from that. One such systematic review of meta-studies is the one I am presenting here, where the authors have compiled 38 meta-analyses (all available meta-analyses relevant to higher education) and thus provide “a broad overview and a general orientation of the variables associated with achievement in higher education”.

2. “Most teaching practices have positive effect sizes, but some have much larger effect sizes than others.”

A big challenge with investigations of teaching effectiveness is that most characteristics of teaching and of learners are related to achievement. So great care needs to be taken not to interpret the effect one measures, for example in a SoTL project, as the optimal effect, because some characteristics have much larger effects than others: “The real question is not whether an instructional method has an effect on achievement but whether it has a higher effect size than alternative approaches.”

This is really important to consider, especially for instructors who are (planning on) trying to measure how effective they or their methods are, or who are looking in the literature for hints on what might work for them — it’s not enough to check whether a method has a positive effect; we also need to consider whether even more effective alternatives exist.

3. “The effectivity of courses is strongly related to what teachers do.”

Great news! What we do as teachers does influence how much students learn! And oftentimes it is through really tiny things we do or don’t do, like asking open-ended questions instead of closed-ended ones, or writing keywords instead of full sentences on our slides or the blackboard (for more examples, see point 5).

And there are general things within our influence as teachers that positively contribute to student learning, for example showing enthusiasm about the content we are teaching, being available and helpful to students, and treating students with respect and friendliness. All these behaviours help create an atmosphere in which students feel comfortable to speak their minds and interact, both with their teacher and with each other.

But it is, of course, also about what methods we choose. For example, having students work in small groups is on average more effective than having them learn individually or as a whole group together. And small groups become most effective when students have clear responsibilities for tasks and when the group depends on all students’ input in order to solve the task. Cooperation and social interaction can only work when students are actively engaged, speak about their experiences, knowledge and ideas, and discuss and evaluate arguments. This is what makes it so successful for learning.

4. “The effectivity of teaching methods depends on how they are implemented.”

It would be nice if just by using certain methods we could increase teaching effectiveness, but unfortunately they also need to be implemented in the right way. Methods can work well or not so well, depending on how they are done. For example, asking questions is not enough; we should be asking open instead of closed questions. So it is not only about choosing the big methods, but also about tweaking the small moments to be conducive to learning (examples for how to do that under point 5).

Since microstructure (all the small details in teaching) is so important, it is not surprising that the more time teachers put into planning details of their courses, the higher student achievement becomes. Everything needs to be adapted to the context of each course: who the students are and what the content is. This is work!

5. “Teachers can improve the instructional quality of their courses by making a number of small changes.”

So now that we know that teachers can increase how much students learn in their classes, here is a list of what works (and many of those points are small and easy to implement!)

  • Class attendance is really important for student learning. Encourage students to attend classes regularly!
  • Make sure to create the culture of asking questions and engaging in discussion, for example by asking open-ended questions.
  • Be really clear about the learning goals, so you can plan better and students can work towards the correct goals, not to wrong ones that they accidentally assumed.
  • Help students see how what you teach is relevant to their lives, their goals, their dreams!
  • Give feedback often, and make sure it is focussed on the tasks at hand and given in a way that students can use it in order to improve.
  • Be friendly and respectful towards students (duh!).
  • Combine spoken words with visualizations or texts, but
    • When presenting slides, use only a few keywords, not half or full sentences
    • Don’t put details in a presentation that don’t need to be there, not for decoration or any other purpose; they only distract from what you really want to show
    • When you are showing a dynamic visualization (simulation or movie), give an oral rather than a written explanation with it, so the focus isn’t split between two things to look at. For static pictures, this isn’t as important.
  • Use concept maps! Let students construct them themselves to organize and discuss central ideas of the course. If you provide concept maps, make sure they don’t contain too many details.
  • Start each class with some form of “advance organizer” — give an overview of the topics you want to go through and the structure in which that will happen.

Even though all these points are small and easy to implement, their combined effect can be large!

6. “The combination of teacher-centered and student-centered instructional elements is more effective than either form of instruction alone.”

There was no meta-analysis directly comparing teacher-centered and student-centered teaching methods, but elements of both have high effects on student learning. The best solution is to use a combination of both, for example complementing teacher presentations by interactive elements, or having the teacher direct parts of student projects.

Social interaction is really important and maximally effective when teachers on the one hand take on the responsibility to explicitly prepare and guide activities and steer student interactions, while on the other hand giving students the space to think for themselves, choose their own paths and make their own experiences. This means that ideally we would integrate opportunities for interaction in more teacher-centered formats like lectures, as well as making sure that student-centered forms of learning (like small groups or project-based learning) are supervised and steered by the instructor.

7. “Educational technology is most effective when it complements classroom interaction.”

We didn’t have a lot of choice in the recent rise of online learning, but the good news is that it can be pretty much as effective as in-person learning in the classroom. Blended learning, i.e. combining online and in-class instruction, is even more effective, especially when it is used purposefully for visualizations and such.

Blended learning is not as successful as in-person learning when used mainly to support communication; compared to in-person settings, online communication limits social interaction (or at least it did before everybody got used to it during covid-19? Also, the article points out explicitly that instructional technologies are developing quickly and that only studies published before 2014 were included. Therefore MOOCs, clickers, social media and other newer technologies are not covered).

8. “Assessment practices are about as important as presentation practices.”

Despite constructive alignment being one of the buzzwords that is everywhere these days, the focus of most instructors is still on the presentation part of their courses, and not equally on assessment. But the results presented in the article indicate that “assessment practices are related to achievement about as strongly as presentation practices”!

But assessment does not only mean developing exam questions. It also means being explicit about learning goals and what it would look like if they were met. Learning outcomes are so important: they help the instructor plan the whole course or a single class, develop meaningful tests of learning, and then actually evaluate that learning in order to give feedback to students. Students, on the other hand, need guidance on what to focus on when reflecting on what they learned during past lessons, preparing for future lessons, and preparing for the exam.

Assessment also means giving formative feedback (feedback with the explicit and only purpose of helping students learn or teachers improve teaching, not giving a final evaluation after the fact) throughout the whole teaching process. 

Assessment also doesn’t only mean the final exam, it can also mean smaller exercises or tasks throughout the course. Testing frequently (more than two or three times per semester) helps students learn more. Requiring that students show they’ve learnt what they were supposed to learn before the instructor moves on to the next topic has a large influence on learning. And the frequent feedback that can be provided on that basis helps them learn even more.

And: assessment can also mean student peer assessment or student self-assessment, which on average agree fairly well with assessment by the instructor, but have the added benefit of making students explicitly think about learning outcomes and whether they have been achieved. Of course, this is only possible when learning outcomes are made explicit.

The assessment part is so important because students optimize where to spend their time based on what they perceive as important, which is often related to what they will need to be able to do in order to pass an exam. The explicit nature of the learning outcomes (and their alignment with the exam) is what students use to decide what to spend time and attention on.

9. “Intelligence and prior achievement are closely related to achievement in higher education.”

Even though we as instructors have a large influence on student achievement through all the means described above, there are also student characteristics that influence how well students can achieve. Intelligence and prior achievement are correlated with how well students will do at university (although both are not fixed characteristics that students are born with, but are shaped by how much education, and of what quality, students have received up to that point). If we want better students, we need better schools.

10. “Students’ strategies are more directly associated with achievement than students’ personality or personal context.”

Even though student backgrounds and personalities are important for student achievement, even more important are the strategies they use to learn, to prepare for exams, to set goals, and to regulate how much effort they put into which task. Successful strategies are frequent class attendance as well as a strategic approach to learning, meaning that instead of working hard non-stop, students allocate time and effort to those topics and problems that are most important. But what students do also matters on the small scale: note taking, for example, is a much more successful strategy when students are listening to a talk without slides. When slides are present, the back-and-forth between slides and notes seems to distract students from learning.

Training these strategies works best in class, rather than in separate extra courses with artificial problems.

So where do we go from here?

There you have it, that was my summary of the Schneider & Preckel (2017) systematic review of meta-analyses of what works in higher education. We know now of many things that work pretty much universally, but even though many of the small practices are easy to implement, it still doesn’t tell us what methods to use for our specific class and topic. So where do we go from here? Here are a couple of points to consider:

Look for examples in your discipline! What works in your discipline might be published in literature that was either not yet used in meta-studies, or published in a meta-study after 2014 (and thus did not get included in this study). So a quick literature search might be very useful! In addition to published scientific studies, there is a wealth of information available online of what instructors perceive to be best practice (for example SERC’s Teach the Earth collection, blogs like this one, tweets collected under hashtags like #FieldWorkFix, #HigherEd). And of course always talk to people teaching the same course at a different institution or who taught it previously at yours!

Look for examples close to home! What works and what doesn’t is also culture dependent. Try to find out what works in similar courses at your institution, or at a neighboring one with the same or a similar student body and similar learning outcomes.

And last but not least: Share your own experiences with colleagues! Via twitter, blogs, workshops, seminars. It’s always good to share experiences and discuss! And on that note — do you have any comments on this blog post? I’d love to hear from you! :)


Schneider, M., & Preckel, F. (2017). Variables associated with achievement in higher education: A systematic review of meta-analyses. Psychological bulletin, 143(6), 565.

Why students cheat (after Brimble, 2016)

Recently, one topic seemed to emerge a lot in conversations I’ve been having: Students cheating, or the fear thereof. Cheating is “easier” when exams are written online and we don’t have students directly under our noses, and to instructors it feels like cheating has increased a lot (and maybe it has!). We’ve discussed all kinds of ways to avoid cheating: Asking questions that have answers that cannot easily be googled (but caution — this tends to make things a lot more difficult than just asking for definitions!). Putting enough time pressure on students so they don’t have time to look up things they don’t know (NOT a fan of that!!!). Using many different exams in parallel where students get assigned exercises randomly so that they would at least have to make sure they are copying from someone trying to answer the same question. But one question that has been on my mind a lot is why do students cheat in the first place, and is there anything we can do as instructors to influence whether they will want to cheat?

I read the chapter “Why students cheat: an exploration of the motivators of student academic dishonesty in higher education” in the Handbook of Academic Integrity by Brimble (2016), and here are some of the points, all backed up by different studies (for references, check that chapter), that stood out to me:

Students are under enormous pressure to succeed academically, yet at the same time they are real people with lives, families, responsibilities, possibly jobs, and more. Whether it’s because of financial considerations, expectations of parents or peers, or other reasons: cheating might sometimes feel like the only way to survive and finish a course among competing priorities.

Since students are under such pressure to succeed, it is important to them that the playing field is level and others don’t get an unfair and undeserved advantage over them. If students feel like everybody else is cheating, they might feel like they have to cheat in order to keep up. And if the workload is so high, or the content so difficult, that they feel they cannot possibly manage any other way, cheating can feel like their only way out.

Students also feel that cheating is a “victimless crime”, so no harm done, really. Helping other students in particular, even if it in fact counts as cheating, isn’t perceived as doing anything wrong. And if courses feel irrelevant to their lives, or if students don’t have a relationship with the instructor, it does not feel like they are doing anything wrong by cheating.

In other cases, students might not even be aware that they are cheating, for example if they are new at university, studying in interdisciplinary programs where norms differ between programs, or in situations that are new to them (like open-book online exams, where it isn’t always clear what needs to be cited and what counts as common knowledge).

Students report that the actions of their role models in their academic field, their instructors, are super important in forming an idea of what is right and acceptable. If instructors don’t notice that students cheat, or worse, don’t react to it by reporting and punishing such behavior, this feels almost like encouragement to cheat more, both to the original cheater and to others who observe the situation. Students then rationalize cheating even when they know it’s wrong.

Cheating is also a repeat offense — and the more a student does it, the easier it gets.

So from reading all of that, what can we do as instructors to lower the motivation to cheat?

First: educate & involve

If students don’t know exactly what we define as cheating, they cannot be blamed if they accidentally cheat. It’s our job to help them understand what cheating means in our specific context. We can probably all be a little more explicit about what is acceptable and what is not, especially in situations where there is a grey area. Of course it’s not a fun topic, but we need to be explicit about rules and also what happens when rules aren’t adhered to.

Interestingly, apparently the more involved students are in campus culture, the more they want to protect the institution’s reputation and not cheat. So building a strong environment that includes e.g. regularly communicated honor codes that become part of the culture might be beneficial, as well as helping students identify with the course, the study program, the institution.

Second: prosecute & punish

It’s not enjoyable, but if we notice any cheating, we need to prosecute it and punish it, even though that might come at high costs to us in terms of time, conflict, admin. The literature seems to be really clear on this one: If we let things slide a little, they become acceptable.

Ideally we would know what the rules and procedures are like at our institutions if we see something that we feel is cheating, and who the people are that can support us in dealing with the situation. If not, maybe now is a good time to figure this out.

Third: engage & adapt

Cheating is more likely to occur when there are no, or only weak, instructor-student relationships. Additionally, if students don’t feel engaged in a course, if they don’t receive enough guidance by the instructor, or if a course feels irrelevant or like they aren’t learning anything anyway, students are more likely to cheat. Similarly if a course feels too difficult or too time-consuming, if the workload is too high, or if they feel treated unfairly.

So the lesson here is to build strong relationships and make courses both engaging and relevant to students. Making sure that the learning outcomes are relevant in the curriculum and for students’ professional development is, of course, always good advice, but especially so in the light of making students want to learn and not have them feel like they just need to tick a box (and then do it by cheating because it really doesn’t matter one way or the other). Explaining what they will be able to do once they meet the learning outcomes (both in terms of what doors the degree opens, and what they can practically do with the skills they learned) is another common — and now particularly useful — piece of advice. And then adjusting the level of difficulty and workload to something that is manageable for students — again, good advice in general and now in particular!

Of course, doing all those things is not a guarantee that students won’t cheat. But to me it feels like if I’ve paid attention to all this, I did what I could do, and that then it’s on them (which makes it easier to prosecute? Hopefully?).

What do you think? Any advice on how to deal with cheating, and especially how to prevent it?


Brimble, M. (2016). Why students cheat: an exploration of the motivators of student academic dishonesty in higher education. Handbook of academic integrity, 365.

Published in Oceanography: How to Teach Motivating and Hands-On Laboratory and Field Courses in a Virtual Setting

One of the few “behind the scenes” shots of me taking #WaveWatching pictures! See the super awesome current right at my feet? :-D

Similar to kitchen oceanography, I believe that wave watching is a great tool in education and outreach, especially during times when activities have to be socially distant. My article “How to Teach Motivating and Hands-On Laboratory and Field Courses in a Virtual Setting”, where I elaborate on both, just came out in Oceanography, the official magazine of The Oceanography Society. Check it out at https://tos.org/oceanography/article/how-to-teach-motivating-and-hands-onlaboratory-and-field-courses-in-a-virtual-setting!

Glessmer, M.S. 2020. How to teach motivating and hands-on laboratory and field courses in a virtual setting. Oceanography 33(4):130–132, https://doi.org/10.5670/oceanog.2020.417.

Mind-set interventions

Two years ago, I was really into daily writing in my bullet journal. I used it to plan out my day, week, month, and year, but also to set goals and reflect on how I was doing at achieving them. During that year I felt really efficient, accomplished, capable, and it definitely felt related to all that reflection and goal-setting going on. In 2019 I continued, but not with the same regularity, and this year I’m only on page 40 of my 2020 bullet journal. But as I felt frustrated about not moving towards a specific goal a little while ago (and, in fact, effectively moving away from it), I decided that it was time to bring out the bullet journal and write down what I wanted, and why. I instantly felt better and more motivated, and this reminded me of an article that I had wanted to blog about for a while now. Because even though I don’t know if bullet journaling is what really helps me stay on the track I want to be on, or if there are other mechanisms at play, there is good evidence that short, written exercises can help students transform their mindset and achieve more.

What is an academic mind-set, and why does it matter?

What is referred to as the “academic mind-set” is a collection of core beliefs about how capable someone is and how relevant the effort that person puts into something is to their bigger picture, both in an academic context. So, for example, students might believe that their intelligence and capabilities are static (“I am just too stupid for maths”), or alternatively that anything can be learned if you just put your mind to it and enough effort into learning it. Or students might believe that they are learning for the teacher or to achieve a certain grade, rather than because they are actually learning something that will improve their own lives (or those of others).

Obviously some of those beliefs are more conducive to learning than others, and therefore the idea of academic mind-set interventions is to change beliefs in order to help students become more successful in their academic lives, for example by helping them see that intelligence is not fixed but rather a matter of training, helping them develop a “growth mindset”. Or by recognizing that classes — no matter how boring — might be a useful tool to bring them closer to what really matters to them, which helps give them a sense of purpose. If those beliefs are successfully addressed through interventions, that can change how students react to challenges that come their way, because they interpret effort, for example, not as a sign of weakness but rather as a sign that learning is taking place. Ideally, this leads to a positive, self-reinforcing circle where they recognize more and more how true those beliefs are because they are becoming more successful academically. And whether this works on a large scale was tested in the article I want to tell you about:

Paunesku et al. (2015): Mind-set interventions are a scalable treatment for academic underachievement.

In the article, Paunesku et al. (2015) describe how academic-mind-set interventions can increase academic outcomes even when they are administered online and not specifically targeted to the students’ individual contexts. That way, those interventions become easily applicable everywhere and are not only available to students who are likely advantaged already, e.g. those where the parents and/or school invest extra time and money into their development.

In this case, high school students participated in two 45-minute sessions online (which is really not a lot of time in the big scheme of things!) and both interventions showed a positive impact. And, it turns out, students who received both interventions (in contrast to one intervention and one control treatment) did not show greater benefit than students who received just one intervention (so if we wanted to do an intervention with our class, we wouldn’t even need to commit to twice 45 minutes).

Growth-mind-set interventions

One of the 45-minute sessions was dedicated to a “growth-mind-set intervention”, designed to help students recognize that their intelligence can increase when they work hard on difficult tasks, and that the difficulty they are having is an opportunity for growth and not a sign that they are not good enough.

For this intervention, students read an article on how the brain can grow through hard work. Additionally, students did two writing exercises: summarizing the article in their own words, and then writing a letter to a student who felt not smart enough to do well, advising them on what they could do, based on the article the students had read.

Sense-of-purpose interventions

The second 45-minute session dealt with a “sense-of-purpose intervention”. This was done by first asking students to reflect briefly on their vision of a better world, and then helping them reflect on what meaningful goals beyond themselves they could, and would want to, contribute to if they learned a lot in school, and how schoolwork could help them get there. This intervention is designed to help students stay motivated during boring or frustrating times because they are working towards a bigger goal.

Intervening online, and should we try it, too?

The interventions discussed in the article were specifically designed to work well online: They targeted only one single core belief each, they took only very little time, and they could be done with standardized materials because they used common stories and science concepts, i.e. they did not require tailoring to the specific course or context. This makes them — or something similar — a viable tool in other instruction, too. Seeing that having two interventions didn’t yield larger gains than just having one, I would tend to do something along the lines of the second intervention: Have students describe their vision of an ideal world, and then write about how studying will let them contribute to making it become a reality.

Granted, this research was done on high school students and is more of a proof of concept than a blueprint that we can copy. But I still think that we could have our students do something similar. There is a lot of research showing that applying learning to students’ lives is a really important step in the learning process, and reflecting on how that learning contributes to their lives is one part of that. And if they grow their academic mind-set and are thus more successful even beyond the specific course we are teaching, how awesome would that be?

Even just thinking about writing about my vision for the world and how my learning of new things can open up ways I can contribute to making that vision become a reality makes me feel motivated and like the world is opening up to all these exciting new possibilities that I can’t wait to get started with. Can you feel it? I think it would be amazing to give this to our students!

Paunesku, D., Walton, G. M., Romero, C., Smith, E. N., Yeager, D. S., & Dweck, C. S. (2015). Mind-set interventions are a scalable treatment for academic underachievement. Psychological Science, 26(6), 784-793.

Even though students in the active classroom learn more, they feel like they learn less

If you’ve been trying to actively engage students in your classes, I am sure you’ve felt at least some level of resistance. Even though we know from literature (e.g. Freeman et al., 2014) that active learning increases student performance, it’s sometimes difficult to convince students that we are asking them to do all the activities for their own good.

But I recently came across an article that I think might be really good to help convince students of the benefits of active learning: Deslauriers et al. (2019) are “measuring actual learning versus feeling of learning in response to being actively engaged in the classroom” in different physics classes. They compare active learning (which they base on best practices in the given subject) and passive instruction (where lectures are given by experienced instructors who have a track record of great student evaluations). Apart from that, both groups were treated equally, and students were randomly assigned to one or the other group.

Figure from Deslauriers et al. (2019), showing a comparison of performance on the test of learning and feeling of learning responses between students taught with a traditional lecture (passive) and students taught actively for the statics class

As expected, the active case led to more learning. But interestingly, despite objectively learning more in the active case, students felt that they learned less than the students in the passive group (which is another example that confirms my conviction that student evaluations are really not a good measure of quality of instruction), and they said they would choose the passive learning case given the choice. One reason might be that students interpret the increased effort that is required in active learning as a sign that they aren’t doing as well. This might have negative effects on their motivation as well as engagement with the material.

So how can we convince students to engage in active learning despite their reluctance? Deslauriers et al. (2019) give a couple of recommendations:

  • Instructors should, early on in the semester, explicitly explain the value of active learning to students, and explicitly point out that increased cognitive effort means that more learning is taking place
  • Instructors should also have students take some kind of assessment early on, so students get feedback on their actual learning rather than relying only on their perception
  • Throughout the semester, instructors should use research-based strategies for their teaching
  • Instructors should regularly remind students to work hard and point out the value of that
  • Lastly, instructors should ask for frequent student feedback throughout the course (my favourite method here) and respond to the points that come up

I think that showing students data like those above might be a really good way to get them to consider that their perceived learning is actually not a good indicator of their actual learning, and to convince them that putting in the extra effort that comes with active learning helps them learn even though it might not feel like it. I’ve always explicitly talked to students about why I am choosing certain methods, and why I might continue doing that even when they told me they didn’t like it. And I feel that that has always worked pretty well. Have you tried that? What are your experiences?

Deslauriers, L., McCarty, L. S., Miller, K., Callaghan, K., & Kestin, G. (2019). Measuring actual learning versus feeling of learning in response to being actively engaged in the classroom. Proceedings of the National Academy of Sciences, 116(39), 19251-19257. DOI: 10.1073/pnas.1821936116