Tag Archives: literature

A tool to understand students’ previous experience and adapt your practical courses accordingly — by Kirsty Dunnett

Last week, I wrote about increasing inquiry in lab-based courses and mentioned that it was Kirsty who had inspired me to think about this in a new-to-me way. For several years, Kirsty has been working on developing practical work, and a central part of that has been finding out the types and amount of experience incoming students have with lab work. Knowing this is obviously crucial to adapt labs to what students do and don't know and avoid frustrations on all sides. And she has developed a nifty tool that helps to ask the right questions and then interpret the answers. Excitingly enough, she has agreed to share it (for the first time ever!) on my blog: this is something that will be so useful to so many people, and, in light of the disruption to pre-university education caused by Covid-19, the slow route of classical publication is not going to help the students who need help most.

Welcome, Kirsty! :)

A tool to understand students’ previous experience and adapt your practical courses accordingly

Kirsty Dunnett (2021)

Since March 2020, the Covid-19 pandemic has caused enormous disruption across the globe, including to education at all levels. University education in most places moved online, while the disruption to school students has been more variable, and school students may have missed entire weeks of educational provision without the opportunity to catch up.

From the point of view of practical work in the first year of university science programmes, this may mean that students starting in 2021 have a very different type of prior experience to students in previous years. Regardless of whether students will be in campus labs or performing activities at home, the change in their pre-university experience could lead to unforeseen problems if the tasks set are poorly aligned to what they are prepared for.

Over the past 6 years, I have been running a survey of new physics students at UCL, asking about their prior experience. It consists of one question with five statements about the types of practical activities students did as part of their pre-university studies. By knowing students better, it is possible to introduce appropriate – and appropriately advanced – practical work that is aligned to students when they arrive at university (Dunnett et al., 2020).

The question posed is: “What is your experience of laboratory work related to Physics?”, and the five types of experience are:
1) Designed, built and conducted own experiments
2) Conducted set practical activities with own method
3) Completed set practical activities with a set method
4) Took data while teacher demonstrated practical work
5) Analysed data provided
For each statement, students select one of three options: ‘Lots’, ‘Some’, ‘None’, which, for analysis, can be assigned numerical values of 2, 1, 0, respectively.

The data on its own can be sufficient for aligning practical provision to students (Dunnett et al., 2020).

More insight can be obtained when the five types of experience are grouped in two separate ways.

1) Whether the students would have been interacting with and manipulating the equipment directly. The first three statements are ‘Active practical work’, while the last two are ‘Passive work’ on the part of the student.

2) Whether the students have had decision making control over their work. The first two statements are where students have ‘Control’, while the last three statements are where students are given ‘Instructions’.

Using the values assigned to the levels of experience, four averages are calculated for each student: 'Active practical work', 'Passive work'; 'Control', 'Instructions'. The number of students with each pair of averages is counted. This splits the data set into one that considers 'Practical experience' (the first pair of averages) and one that considers 'Decision making experience' (the second pair of averages). (Two students with the same 'Practical experience' averages can have different 'Decision making experience' averages; it is convenient to record the number of times each pair of averages occurs in two separate files.)

To understand the distribution of the experience types, one can use each average as a co-ordinate – so each pair gives a point on a set of 2D axes – and draw a circle at that point, with its radius determined by the fraction of students in the group who had that pair of averages. Examples are given in the figure.

Prior experience of Physics practical work for students at UCL who had followed an A-level scheme of studies before coming to university. Circle radius corresponds to the fraction of responses with that pair of averages; most common pairs (largest circles, over 10% of students) are labelled with the percentages of students. The two years considered here are students who started in 2019 and in 2020. The Covid-19 pandemic did not cause disruption until March 2020, and students’ prior experience appears largely unaffected.
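If you would like to try this analysis with your own class, here is a minimal sketch (in Python, not Kirsty's original code) of how the scoring, grouping and plotting described above could be implemented. The example responses and the circle-size scaling factor are invented for illustration.

```python
# Minimal sketch of the analysis described above (not Kirsty's original code).
# Each row is one student's answers to statements 1-5, scored Lots=2, Some=1, None=0.
# The example data and the circle-size scaling are invented for illustration.
from collections import Counter

import matplotlib.pyplot as plt

responses = [
    (0, 1, 2, 2, 1),
    (0, 1, 2, 2, 1),
    (1, 2, 2, 1, 1),
    (0, 0, 2, 2, 2),
]

practical = Counter()        # (Active, Passive) average pairs
decision_making = Counter()  # (Control, Instructions) average pairs

for s1, s2, s3, s4, s5 in responses:
    active = (s1 + s2 + s3) / 3        # statements 1-3: student handles the equipment
    passive = (s4 + s5) / 2            # statements 4-5: demonstration / provided data
    control = (s1 + s2) / 2            # statements 1-2: student makes the decisions
    instructions = (s3 + s4 + s5) / 3  # statements 3-5: method is prescribed
    practical[(active, passive)] += 1
    decision_making[(control, instructions)] += 1

# Plot each pair of 'Practical experience' averages as a circle whose size
# reflects the fraction of students with that pair of averages.
n = len(responses)
xs, ys, sizes = zip(*[(x, y, 2000 * count / n) for (x, y), count in practical.items()])
fig, ax = plt.subplots()
ax.scatter(xs, ys, s=sizes, alpha=0.5)
ax.set_xlabel("Active practical work (average)")
ax.set_ylabel("Passive work (average)")
ax.set_xlim(-0.1, 2.1)
ax.set_ylim(-0.1, 2.1)
plt.show()
```

The same loop fills the 'Decision making experience' counter, which can be plotted on a second set of axes in exactly the same way.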

With over a year of significant disruption to education and limited catch-up opportunities, the effects of the pandemic on students starting in 2021 may be significant. This is a quick tool that can be used to identify where students are. By rephrasing the statements of the survey to consider what students are being asked to do in their introductory undergraduate practical work – and adding additional statements if necessary – it provides an immediate check of how students' prior experience lines up with what they will be asked to do in their university studies.

With a small amount of adjustment to the question and statements as relevant, it should be easy to adapt the survey to different disciplines.

At best, it may be possible to actively adjust the activities to students’ needs. At worst, instructors will be aware of where students’ prior experience may mean they are ill-prepared for a particular type of activity, and be able to provide additional support in session. In either case, the student experience and their learning opportunities at university can be improved through acknowledging and investigating the effects of the disruption caused to education by the Covid-19 pandemic.


K. Dunnett, M.K. Kristiansson, G. Eklund, H. Öström, A. Rydh, F. Hellberg (2020). “Transforming physics laboratory work from ‘cookbook’ to genuine inquiry”. https://arxiv.org/abs/2004.12831

Increasing inquiry in lab courses (inspired by @ks_dnnt and Buck et al., 2008)

My new Twitter friend Kirsty, my old GFI-friend Kjersti and I have been discussing teaching in laboratories. Kirsty recommended an article (well, she did recommend many, but one that I’ve read and since been thinking about) by Buck et al. (2008) on “Characterizing the level of inquiry in the undergraduate laboratory”.

In the article, they present a rubric that I found intriguing: It consists of six different phases of laboratory work, and then assigns five levels ranging from a "confirmation" experiment to "authentic inquiry", depending on whether or not instruction is given for the different phases. The "confirmation" level, for example, prescribes everything: The problem or question, the theoretical background, which procedures or experimental designs to use, how the results are to be analysed, how the results are to be communicated, and what the conclusions of the experiment should be. For an open inquiry, only the question and theory are provided, and for authentic inquiry, all choices are left to the student.

The rubric is intended as a tool to classify existing experiments rather than designing new ones or modifying existing ones, but because that's my favourite way to think things through, I tried plugging my favourite "melting ice cubes" experiment into the rubric. Had I thought about it a little longer before doing that, I might have noticed that I would only be copying fewer and fewer cells from the left going to the right, but even though it sounds like a silly thing to do in retrospect, it was actually still helpful to go through the exercise.

It also made me realize the implications of Kirsty's heads-up regarding the rubric: "it assumes independence at early stages cannot be provided without independence at later stages". Which is obviously a big limitation; one can think of many other ways to use experiments where things like how results are communicated, or even the conclusion, are provided, while earlier steps are left open for the student to decide. Also, providing guidance on how to analyse results without prescribing the experimental design might be really interesting! So while I was super excited at first to use this rubric to provide an overview of all the different ways labs can possibly be structured, it is clearly not comprehensive. And a better idea than making a comprehensive rubric would probably be to really think about why instruction for any of the phases should or should not be provided. A little less cook-book, a little more thought here, too! But still a helpful framework to spark thoughts and conversations.

Also, my way of going from one level to the next by simply withholding instruction and information is not the best way to go about it (even though I think it works ok in this case). As the "melting ice cubes" experiment shows unexpected results, it usually organically leads into open inquiry as people tend to start asking "what would happen if…?" questions, which I then encourage them to pursue (but this usually only happens in a second step, after they have already run the experiment "my way" first). This relates well to "secret objectives" (Bartlett and Dunnett, 2019), where a discrepancy appears between what students expect based on previous information and what they then observe in reality (for example in the "melting ice cube" case, students expect to observe one process and find out that another one dominates), and where many jumping-off points exist for further investigation, e.g. the condensation pattern on the cups, or the variation of parameters (what if the ice was forced to the bottom of the cup? what's the influence of the exact temperatures or the water depth, …?).

Introducing an element of surprise might generally be a good idea to spark interest and inquiry. Huber & Moore (2001) suggest using "discrepant events" (their example is dropping raisins into carbonated drinks, where they first sink to the bottom and then rise as gas bubbles attach to them, only to sink again when the bubbles break upon reaching the surface) to initiate discussions. They then suggest following up the observation of the discrepant event with a "can you think of a way to …?" question (i.e. make the raisin rise faster to the surface). The "can you think of a way to…?" question is followed by brainstorming of many different ideas. Later, students are asked "can you find a way to make it happen?", which means that they pick one of their ideas and design and conduct an experiment. Huber & Moore (2001) then suggest a last step, in which students are asked to produce a graphical representation of their results or some other product, and "defend" it to their peers.

In contrast to how I run my favourite “melting ice cubes” experiment when I am instructing it in real time, I am using a lot of confirmation experiences, for example in my advent calendar “24 days of #KitchenOceanography”. How could they be re-imagined to lead to more investigation and less cook-book-style confirmation, especially when presented on a blog or social media? Ha, you would like to know, wouldn’t you? I’ve started working on that, but it’s not December yet, you will have to wait a little! :)

I’m also quite intrigued by the “product” that students are asked to produce after their experimentation, and by what would make a good type of product to ask for. In the recent iEarth teaching conversations, Torgny has been speaking of “tangible traces of learning” (in quotation marks which makes me think there is definitely more behind that term than I realize, but so far my brief literature search has been unsuccessful). But maybe that’s why I like blogging so much, because it makes me read articles all the way to the end, think a little more deeply about them, and put the thought into semi-cohesive words, thus giving me tangible proof of learning (that I can even google later to remind me what I thought at some point)? Then, maybe everybody should be allowed to find their own kind of product to produce, depending on what works best for them. On the other hand, for the iEarth teaching conversations, I really like the format of one page of text, maximum, because I really have to focus and edit it (not so much space for rambling on as on my blog, but a substantially higher time investment… ;-)). Also I think giving some kind of guidance is helpful, both to avoid students getting spoilt for choice, and to make sure they focus their time and energy on things that are helping the learning outcomes. Cutting videos for example might be a great skill to develop, but it might not be the one you want to develop in your course. Or maybe you do, or maybe the motivational effects of letting them choose are more important, in which case that’s great, too! One thing that we’ve done recently is to ask students to write blog or social media posts instead of classical lab reports and that worked out really well and seems to have motivated them a lot (check out Johanna Knauf’s brilliant comic!!!).

Kirsty also mentioned a second point regarding the Buck et al. (2008) rubric to keep in mind: it is just about what is provided by the teacher, not about the students’ role in all this. That’s an easy trap to fall into, and one that I don’t have any smart ideas about right now. And I am looking forward to discussing more thoughts on this, Kirsty :)

In any case, the rubric made me think about inquiry in labs in a new way, and that’s always a good thing! :)


Bartlett, P. A. and K. Dunnett (2019). Secret objectives: promoting inquiry and tackling preconceptions in teaching laboratories. arXiv:1905.07267v1 [physics.ed-ph]

Buck, L. B., Bretz, S. L., & Towns, M. H. (2008). Characterizing the level of inquiry in the undergraduate laboratory. Journal of college science teaching, 38(1), 52-58.

Huber, R.A., and C.J. Moore. 2001. A model for extending hands-on science to be inquiry based. School Science and Mathematics 101 (1): 32–41.

“Wonder questions” and geoscience misconceptions.

Recently, as part of the CHESS/iEarth Summer School, Kikki Kleiven led a workshop on geoscience teaching. She gave a great overview of how to approach teaching and presented many engaging methods (like, for example, concept cartoons and role plays), but two things especially sparked my interest, so that I read up on them a little more: "wonder questions" and misconceptions in geosciences.

“Wonder questions”

The first topic that prompted a little literature search were “wonder questions”, and I found a recent article by Lindstrøm (2021) on the topic that describes the three ways in which “wonder questions” are a powerful pedagogical tool:

  1. they support and stimulate student learning: When students are asked to come up with "wonder questions", they need to consider what they just learned and how it fits (or doesn't fit) with what they already knew before. They need to think new thoughts and actively look for connections, both of which help them learn.
  2. they model scientists' behavior: Asking good questions is a skill that needs practice!
  3. they can be a powerful motivator for students and teachers alike: As a teacher, it's great to see what questions students come up with, and it helps tailor the teaching to what's really relevant to the students. Seeing their questions taken up in teaching, on the other hand, gives students agency and makes them feel heard.

Lindstrøm distinguishes four types of wonder questions that she typically encounters, and which are useful in different ways:

  • Questions where students rephrase a concept and want confirmation that they understood something correctly help them make sure they are on the right track, and also confirm to the teacher that they are. Those questions can also be used in future teaching to paraphrase the material in the students' own words.
  • Questions that are very close to course content and bring in real-world examples are great to make sure the examples used in (future) classes are actually relevant to students’ lives.
  • Questions that go beyond the course content are also useful to clarify what is going to be taught in this specific course and what other courses will build on it. They can also open up doors for future (student) research projects.
  • Questions that reveal misconceptions are great because we can only address misconceptions if we know about them in the first place.

Which brings us to the next topic Kikki inspired me to revisit:

Geoscience misconceptions

Kikki mentioned the article “A compilation and review of over 500 geoscience misconceptions” by Francek (2013). I’m familiar with misconceptions in physics (especially the ones related to hydrostatics and rotating systems & Coriolis force that I’ve worked with), and within iEarth there has been a lot of talk about how students don’t understand geological time (which I don’t have a good grasp of, either). But reading the “500” in the title was enough to make me want to check out the article to get an idea of what other misconceptions might be relevant for my own teaching. And it turns out there are plenty to choose from!

Many of the misconceptions that are particularly relevant for my own interests were originally collected by Kent Kirkby (2008) as “easier to address” misconceptions, for example on science, ocean systems, glaciers, climate:

  • “Upwelling occurs as deeper water layers warm and rise ([…] tied to students’ knowledge of how air masses are affected by temperature).”
  • “Upwelling occurs as deeper water layers lose their salinity and rise (students like symmetry!).”
  • “Glacial ice moves backwards during glacial ‘retreats’ (like everything that retreats in real life)”
  • “Glacial ice is stationary during times when front is neither advancing or retreating.”
  • “Earth’s climate is controlled primarily by the atmosphere circulation, rather than ocean circulation (real life experiences as a terrestrial animal, TV weather reports)”

Reading through that list is really interesting and a good reminder that there are a lot of things that we take for granted but that are really not as obvious as we might have come to believe over the years. And the misconceptions are only "easy to address" (and one way of addressing them is through "elicit, confront, resolve") when we are aware of them in the first place.

Francek, M. (2013). A compilation and review of over 500 geoscience misconceptions. International Journal of Science Education, 35(1), 31-64.

Lindstrøm, C. (2021). The pedagogical power of Wonder Questions. The Physics Teacher, 59(4), 275-277.

Why should students want to engage in something that changes their identity as well as their view of themselves in relation to friends and family?

Another iEarth Teaching Conversation with Kjersti Daae and Torgny Roxå, summarized by Mirjam Glessmer

“Transformative experiences” (Pugh et al., 2010) are those experiences that change the way a person looks at the world, so that they henceforth voluntarily engage in a new-to-them practice of sensemaking on this new topic, and perceive it as valuable. There are methods to facilitate transformative experiences for teaching purposes (Pugh et al., 2010), and discovering this felt like the theoretical framework I had been looking for for #WaveWatching just fell into my lap. But then Torgny asked the question in the title above. For many academics, seeing the world through new eyes, being asked questions they haven’t asked themselves before, discovering gaps in their argumentations, surrendering to a situation (Pugh 2011), engaging in sensemaking (Odden and Russ, 2019), being part of a community of practice (Wenger, 2011) is fun. Not in all contexts and on all topics, of course, but at least in many contexts. But can we assume it’s the same for students?

In order to feel that you want to take on a challenge in which you don’t know whether or not you’ll succeed, a crucial condition is that you believe that your intelligence and your skills can be developed (Dweck, 2015). A growth mindset can be cultivated by the kind of feedback we give students (Dweck, 2015). The scaffolding (Wood et al., 1976) we provide, and the opportunities for creating artefacts as tangible proof of learning* can support this. But how do we get students to engage in the first place?

One approach, the success of which I have anecdotal evidence for, could be to use surprising gimmicks like a DIY fortune teller or a paper clip to be shaped into a spinning top to raise intrigue, if not for the topic itself right away, then for something that will later be related to the topic, hoping that the engagement with the object can be transferred to the topic.

Another approach, which also aligns with my personal experience, might be to let students experience the relevance of a situation vicariously, infecting students with the teacher’s enthusiasm for a topic (Hodgson, 2005). However, Torgny raised the point that sometimes the (overly?) enthusiastic teacher themselves could become the subject of student fascination, thus diverting attention from the topic they wanted the students to engage with.

A third way might be to point out the alignment of tasks with the students' own goals & identities. Growth mindset interventions can increase domain-specific desire to learn (Burnette et al., 2020), identity interventions can increase the likelihood of engagement, for example by targeting physics identity (Wulff et al., 2018), and goal-setting interventions can improve academic performance (Morisano et al., 2010).

I want to relate these three ideas to feelings of competence, relatedness and autonomy, which are the three basic requirements for intrinsic motivation (Ryan & Deci, 2017), but I am sadly out of space. But I think that self-determination theory is a useful lens to keep in mind when developing teaching.

References:

  • Burnette, J. L., Hoyt, C. L., Russell, V. M., Lawson, B., Dweck, C. S., & Finkel, E. (2020). A growth mind-set intervention improves interest but not academic performance in the field of computer science. Social Psychological and Personality Science, 11(1), 107-116.
  • Dweck, C. (2015). Carol Dweck revisits the growth mindset. Education Week, 35(5), 20-24.
  • Hodgson, V. (2005). Lectures and the experience of relevance. In F. Marton, D. Hounsell, & N. Entwistle (Eds.), The experience of learning: Implications for teaching and studying in higher education (vol. 3, pp. 159–171). Edinburgh: University of Edinburgh, Centre for Teaching, Learning and Assessment.
  • Morisano, D., Hirsh, J. B., Peterson, J. B., Pihl, R. O., & Shore, B. M. (2010). Setting, elaborating, and reflecting on personal goals improves academic performance. Journal of Applied Psychology, 95(2), 255–264. https://doi.org/10.1037/a0018478
  • Odden, T. O. B., & Russ, R. S. (2019). Defining sensemaking: Bringing clarity to a fragmented theoretical construct. Science Education, 103(1), 187-205.
  • Pugh, K. J., Linnenbrink-Garcia, L., Koskey, K. L., Stewart, V. C., & Manzey, C. (2010). Teaching for transformative experiences and conceptual change: A case study and evaluation of a high school biology teacher's experience. Cognition and Instruction, 28(3), 273-316.
  • Pugh, K. J. (2011). Transformative experience: An integrative construct in the spirit of Deweyan pragmatism. Educational Psychologist, 46(2), 107-121.
  • Ryan, R. M., & Deci, E. L. (2017). Self-determination theory: Basic psychological needs in motivation, development, and wellness. New York: Guilford.
  • Wenger, E. (2011). Communities of practice: A brief introduction.
  • Wood, D., Bruner, J. S., & Ross, G. (1976). The role of tutoring in problem solving. Journal of Child Psychology and Psychiatry, 17(2), 89-100.
  • Wulff, P., Hazari, Z., Petersen, S., & Neumann, K. (2018). Engaging young women in physics: An intervention to support young women's physics identity development. Physical Review Physics Education Research, 14(2), 020113.

*Very nice example by Kjersti: Presenting students (or fathers-in-law) with a few simple ideas about rotating fluid dynamics enables them to combine the ideas to draw a schematic of the Hadley cell circulation. Which is a lot more engaging and satisfying than being presented with a schematic and someone talking you through it. If you are willing to surrender to the experience in the first place…

#WaveWatching as “transformative experience”? (Based on articles by Pugh et al. 2019, 2011, 2010)

I was reading an article on "active learning" by Lombardi et al. (2021), when the sentence "In undergraduate geoscience, Pugh et al. (2019) found that students who made observations of the world and recognized how they might be explained by concepts from their classes were more likely to stay in their major than those who do not report this experience" jumped out at me. Something about observing the world and connecting it to ideas from class was so intriguing that I had to go down that rabbit hole and see where this statement was coming from, and whether it might help me as a theoretical framework for thinking about #WaveWatching (which I've been thinking about a lot since the recent teaching conversation).

Going into that Pugh et al. (2019) article, I learned about a concept called "transformative experience", which I followed back to Pugh (2011): A transformative experience happens when students see the world with new eyes, because they start connecting concepts from class with their real everyday lives. There is a quote at the beginning of that article which reminds me very much of what people say about wave watching (except that in the quote the person talks about clouds): that once they've started seeing patterns because they understood that what they look at isn't chaotic but can be explained, they cannot go back to just looking at the beauty of it without questioning why it came to be that way. They now feel the urge to make sense of the patterns they see, every time they come across anything related to the topic.

These are described as the three characteristics of transformative experiences:

  • they are done voluntarily out of intrinsic motivation (meaning that the application of class concepts is not required by the teacher or some other authority),
  • they expand perception (when the world is now seen through the subject's lens and looks different than before), and
  • they have experiential value (meaning the person experiencing them perceives them as adding value to their lives).

And it turns out that facilitating such transformative experiences might well be what distinguishes schools with higher student retention from those with lower student retention in Pugh et al.’s 2019 study!

But how can we, as teachers, facilitate transformative experiences? Going another article further down the rabbit hole to Pugh et al. (2010), this is how!

The “Teaching for Transformative Experiences” model consists of three methods acting together:

  • framing content in a way that the "experiential value" becomes clear, meaning making an effort to explain the value that perceiving the world in such a way adds to our lives. This can be done by expressing the feelings it evokes or the usefulness it adds. For #WaveWatching, I talk about how much I enjoy the process, but also how making sense of an aspect of the world that first seemed chaotic is both satisfying and calming to me. But framing in terms of the value of the experience can also be done by using metaphors, for example about the tales that rocks, trees, or coastlines could tell. Similarly, when I speak about "kitchen oceanography", I hope that it raises curiosity about how we can learn about the ocean in a kitchen.
  • scaffolding how students look at the world by helping them change lenses step by step, i.e. “re-seeing”, for example by pointing out specific features, observing them together, talking through observations or providing opportunities to share and discuss observations (so pretty much my #WaveWatching process!).
  • modeling transformative experiences, i.e. sharing what and how we perceive our own transformative experiences, in order to show students that it’s both acceptable and desirable to see the world in a certain way, and communicate about it. I do this both in person as well as whenever I post about #WaveWatching online.

So it seems that I have been creating transformative experiences with #WaveWatching all this time without knowing it! Or at least that this framework works really well to describe the main features of #WaveWatching.

Obviously I have only just scratched the surface of the literature on transformative experiences, but I have a whole bunch of articles open on my desktop already, about case studies of facilitating transformative experiences in teaching. And I cannot wait to dig in and find out what I can learn from that research and apply it to improve #WaveWatching! :)

Lombardi, D., Shipley, T. F., & Astronomy Team, Biology Team, Chemistry Team, Engineering Team, Geography Team, Geoscience Team, and Physics Team. (2021). The curious construct of active learning. Psychological Science in the Public Interest, 22(1), 8-43.

Pugh, K. J., Phillips, M. M., Sexton, J. M., Bergstrom, C. M., & Riggs, E. M. (2019). A quantitative investigation of geoscience departmental factors associated with the recruitment and retention of female students. Journal of Geoscience Education, 67(3), 266-284.

Pugh, K. J. (2011). Transformative experience: An integrative construct in the spirit of Deweyan pragmatism. Educational Psychologist, 46(2), 107-121.

Pugh, K. J., Linnenbrink-Garcia, L., Koskey, K. L., Stewart, V. C., & Manzey, C. (2010). Teaching for transformative experiences and conceptual change: A case study and evaluation of a high school biology teacher’s experience. Cognition and Instruction, 28(3), 273-316.

What does “sensemaking” really mean in the context of learning about science? (Reading Odden & Russ, 2019)

I read the article “Defining sensemaking: Bringing clarity to a fragmented theoretical construct” by Odden and Russ (2019) and what I loved about the article are two main things: I realized that “sensemaking” is the name of an activity I immensely enjoy under certain conditions, and being able to put words to that activity made me really happy! And I found it super helpful that the differences between “sensemaking” and other concepts like “explaining” or “thinking” were pointed out, because that gave me an even clearer idea of what is meant by “sensemaking”.

What is sensemaking? The definition given in the Odden and Russ (2019) article is simple:

Sensemaking is a dynamic process of building or revising an explanation in order to “figure something out”—to ascertain the mechanism underlying a phenomenon in order to resolve a gap or inconsistency in one’s understanding.

Odden and Russ discuss that in the educational science literature, sensemaking has previously been used to mean three different things that can all be reconciled under this definition, but that have mostly been discussed independently before:

  1. An approach to learning: Sensemaking can mean really wanting to figure something out by yourself — making sense of an intriguing problem by bringing together what you know, asking yourself questions, building and testing hypotheses, but not asking other people for the correct solution. This is my approach to escape games, for example — I hate using the help cards! I know that it should be possible to figure the puzzles out, so I want to do it myself! This approach is obviously desirable in science learners, since they are not just relying on memorizing responses or assembling surface-level knowledge. They really want to make sense out of something that did not make sense before.
  2. A cognitive process: In this sense, sensemaking is really about how students bring together pieces of previous knowledge and experiences, and new knowledge, and how they integrate them to form a new and bigger coherent structure, for example by using analogies or metaphors.
  3. A way of communicating: Sensemaking then is the collaborative effort to make sense by bringing together different opinions or to construct an explanation, and then critiquing it in order to make sure the arguments are watertight. This can happen using both technical terms and everyday language.

And now how is "sensemaking" different from other, seemingly similar terms? (Or, as the authors say, how can we differentiate sensemaking "from other good things to do when learning science"?) This is my summary of the arguments from the article:

Thinking. Compared with sensemaking, thinking is a lot broader. One can do a lot of thinking without attempting to create any new sense. Thinking does not require the critical approach that is essential to sensemaking.

Learning. While sensemaking is a form of learning, there are a lot of other forms that don’t include sensemaking, for example memorization.

Explaining. Sensemaking requires the process of "making sense" of something that previously did not make sense; explaining does not necessarily require that. Depending on the context, explanations can sometimes be generated out of previous knowledge without building any new relationships.

Argumentation. Argumentation is a much wider term than sensemaking — one can for example argue with the goal of persuading someone else rather than building a common understanding and making sense out of information.

Modeling. There is a great overlap between modeling and sensemaking, but sensemaking is typically more dynamic and short-term, whereas modeling is a more formal activity that can take place over days and weeks, sometimes with the purpose of communicating ideas.

I found reading this article enlightening because it gives me a language to talk about sensemaking, to articulate nuances, that I previously did not have. By reflecting on situations where I really enjoy sensemaking (another example is wave watching: I am trying to make sense of what I see by running through questions in my head. Can I observe what causes the waves? Is their behavior consistent with what I would expect given what I can observe about the topography? If not, what does that tell me about the topography in places where I can't observe it?) and on others where I don't (thinking of times in school when I did not see the point of trying to make sense out of something [as in make all the individual pieces of previous knowledge and new information fit together coherently without conflict] and just needed to go through the motions of it to pass a test or something), I find it intriguing to think about why I sometimes engage in the process and enjoy it, and sometimes I don't even try to engage.

How does it work for you, do you know under what conditions you engage in sensemaking, and under which don’t you?

Odden, T. O. B., & Russ, R. S. (2019). Defining sensemaking: Bringing clarity to a fragmented theoretical construct. Science Education, 103(1), 187-205.

“Invisible learning” by David Franklin

Several things happened today.

  1. I had a lovely time reading in the hammock
  2. I tried to kill two birds with one stone (figuratively of course): writing a blog post about the book I read (which I really loved) and trying a new-to-me format of Instagram posts: a carousel, where one post slides into the next as you swipe (so imagine each of the images below as three square pictures that you slide through as you look at the post)

Turns out that even though I really like seeing posts in this format on other people’s Instagram, it’s way too much of a hassle for me to do it regularly :-D

Also a nightmare in terms of accessibility without proper alt-text, and for google-ability of the blog post. So I won’t be doing this again any time soon! But I’m still glad I tried!

And also: check out the book!

Invisible Learning: The magic behind Dan Levy’s legendary Harvard statistics course. David Franklin (2020)

Metaphors of learning (after Ivar Nordmo and the article by Sfard, 1998)

On Thursday, I attended a workshop by Ivar Nordmo, in which he talked about two metaphors of learning: “learning as acquisition” and “learning as participation”. He referred to an article by Sfard (1998), and here is my take-away from the combination of both.

When we talk about new (or new-to-us) concepts, we often describe them with words that have previously been used in other contexts. As we bring the words into a new domain, their meaning might change a little, but the first assumption will be that the new concept we describe by those old words is, indeed, described by those words carrying the same old, familiar meaning.

When concepts are described by metaphors that developed in a different context, or are commonly used in different contexts, an easy assumption is that all their properties are transferable between contexts. On the one hand, that makes it easy to quickly grasp new concepts; on the other hand, that easy assumption is most likely not entirely correct, which can lead us to misunderstand the new concept if we don't examine our implicit assumptions. And usually we don't stop to consider whether the words we are using, borrowed from a different context, are actually leading our thinking in the new context without us realizing that this might not be appropriate.

The way we think about learning, for example, depends on the language we use to conceptualize it, and there are two metaphors that lead to substantially different ways of understanding learning, with far-reaching consequences.

Learning as acquisition

Learning is commonly defined as "gaining knowledge". Facts or concepts are building blocks of knowledge that we acquire, accumulate, and construct meaning from. We can test whether people possess knowledge or skills (we might even be able to assess someone's potential based on their performance). Someone might have a wealth of knowledge. They might be providing teaching and knowledge to someone else, who is receiving instruction and might share it with others. We can transfer knowledge to different applications. We might be academically gifted. In all these cases, we gain possession of something.

We think of knowledge as something we possess, intellectual property rights clearly assign ownership to ideas, and stealing ideas is a serious offence. Like any other expression of wealth, knowledge is guarded and passed on from parents to children, or maybe shared as a special favor, making access difficult for those from less knowledge-affluent circles. It is perfectly fine to admit to wanting to accumulate knowledge just for the fun of it, without intending to use it for anything, same as it is socially accepted to get rich without considering what that money could and maybe should be used for.

Learning as participation

Changing the language we use to talk about things might also change how we think about the things themselves.

An alternative metaphor to "learning as acquisition" is "learning as participation". In that metaphor, learning is described as a process that happens in specific contexts and without a clear end point. The focus then is on communicating in the language that a community communicates in and taking part in the community's rituals, while simultaneously influencing the community's language and rituals in a shared negotiation with the goal of building community.

When learning is about participation, it is not private property but a shared activity. This means that the status that, in the acquisition metaphor, comes with being knowledge-rich is now gone. Actions can be successful or failures, but that does not make the actors inherently smart or stupid. They can act one way in one context on a given day, and could act differently at any time.

While the participation metaphor brings up all the positive associations of a growth mindset on the individual level and equal access to learning in society, it is hard to imagine it without preserving parts of the acquisition metaphor. If knowledge is not something we possess within us, how can we even bring it from one situation into the next? How do individual learning biographies contribute to the shared activities? Can someone still be a teacher and someone else a learner?

I find considering these two metaphors really eye-opening as to how much the language we use shapes how we think about the world. Which I was aware of for example in the debate on how to use gender-neutral language, but which I never applied to learning before.

The recommendation by Sfard (1998) is not to choose one metaphor, but to carefully consider what is inadvertently implied by the language we use. Meaning transported in metaphors between domains might be buried so deeply that we are unaware of it, yet it can lead us to think about one domain wrongly, unknowingly assuming properties or causalities from a completely different domain, and making sense in that second domain based on a faulty, assumed understanding. So awareness of the metaphors we use, and reflection on what that does to our thinking, is not only useful but necessary.

I don’t claim to have gotten far with these thoughts yet, but it was definitely eye-opening!

Sfard, A. (1998). On two metaphors for learning and the dangers of choosing just one. Educational researcher, 27(2), 4-13.

Student evaluations of teaching are biased, sexist, racist, prejudiced. My summary of Heffernan's 2021 article

One of my pet peeves is student evaluations that are interpreted way beyond what they can actually tell us. It might be people not considering sample sizes when looking at statistics ("66.6% of students hated your class!", "Yes, 2 out of 3 responses out of 20 students said something negative"), or not understanding that student responses to certain questions don't tell us "objective truths" ("I learned much more from the instructor who let me just sit and listen rather than actively engaging me" (see here)). I blogged previously about a couple of articles on the subject of biases in student evaluations, which were then basically a collection of all the scary things I had read, but in no way a comprehensive overview. Therefore I was super excited when I came across a systematic review of the literature this morning. And let me tell you, looking at the literature systematically did not improve things!

In the article “Sexism, racism, prejudice, and bias: a literature review and synthesis of research surrounding student evaluations of courses and teaching.” (2021), Troy Heffernan reports on a systematic analysis of the existing literature of the last 30 years represented in the major databases, published in peer-reviewed English journals or books, and containing relevant terms like “student evaluations” in their titles, abstracts or keywords. This resulted in 136 publications being included in the study, plus an initial 47 that were found in the references of the other articles and deemed relevant.

The conclusion of the article is clear: student evaluations of teaching are biased depending on who the students are that are doing the evaluating, on who the instructor is and the prejudices related to characteristics they display, on the actual course being evaluated, and on many more factors not related to the instructor or what is going on in their class. Student evaluations of teaching are therefore not a tool that should be used to determine teaching quality, or to base hiring or promotion decisions on. Additionally, the groups that are already disadvantaged in their evaluation results because of personal characteristics that students are biased against also receive abusive comments in student evaluations that are harmful to their mental health and wellbeing, which should be reason enough to change the system.

Here is a brief overview of what I consider the main points of the article:

It matters who the evaluating students are, what course you teach and what setting you are teaching in.

According to the studies compiled in the article, your course is evaluated differently depending on who the students are that are evaluating it. Female students evaluate on average 2% more positively than male students. The average evaluation improves by up to 6% when given by international students, older students, external students or students with better grades.

It also depends on what course you are teaching: STEM courses are on average evaluated less positively than courses in the social sciences and humanities. And comparing quantitative and qualitative subjects, it turns out that subjects that have a right or wrong answer are also evaluated less positively than courses where the grades are more subjective, e.g. using essays for assessment.

Additionally, student evaluations of teaching depend on even more factors beside course content and effectiveness, for example class size and general campus-related things like how clean the university is, whether there are good food options available to students, what the room setup is like, how easy to use course websites and admission processes are.

It matters who you are as a person

Many studies show that gender, ethnicity, sexual identity, and other factors have a large influence on student evaluations of teaching.

Women (or instructors wrongly perceived as female, for example because of a name or avatar) are rated more negatively than men and, no matter the factual basis, receive worse ratings on objective measures like the turnaround time of essays. The way students react to their grades also depends on their instructor's gender: when students get the grades they expected, male instructors get rewarded with better scores; when their expectations are not met, men get punished less than women. The bias is so strong that young (under 35 years old) women teaching in male-dominated subjects have been shown to receive ratings up to 37% lower.

These biases in student evaluations result in strengthening the position of an already privileged group: white, able-bodied men of a certain age (ca. 35-50 years old), who the students believe to be heterosexual and who are teaching in their (and their students') first language, get evaluated a lot more favourably than anybody who does not meet one or several of these criteria.

Abuse disguised as “evaluation”

Sometimes evaluations are also used by students to express anger or frustration, and this can lead to abusive comments. Those comments are not distributed equally among all instructors, though: they are a lot more likely to be directed at women and other minorities, and they are cumulative. The more minority characteristics an instructor shows, the more abusive comments they will receive. This racist, sexist, ageist, homophobic abuse is obviously hurtful and harmful to an already disadvantaged population.

My 2 cents

Reading the article, I can't say I was surprised by the findings — unfortunately my impression of the general literature landscape on the matter was only confirmed by this systematic analysis. However, I was positively surprised by the very direct way in which problematic aspects are called out in many places: "For example, women receive abusive comments, and academics of colour receive abusive comments, thus, a woman of colour is more likely to receive abuse because of her gender and her skin colour". On the one hand, this is really disheartening to read because it becomes so tangible and real, especially since student evaluations are not only harmful to instructors' mental health and well-being when they contain abuse, but are also still an important tool in determining people's careers via hiring and promotion decisions. But on the other hand it really drives home the message and call to action to change these practices, which I appreciate very much: "These practices not only harm the sector's women and most underrepresented and vulnerable, it cannot be denied that [student evaluations of teaching] also actively contribute to further marginalising the groups universities declare to protect and value in their workforces."

So let’s get going and change evaluation practices!


Heffernan, T. (2021). Sexism, racism, prejudice, and bias: a literature review and synthesis of research surrounding student evaluations of courses and teaching. Assessment & Evaluation in Higher Education, 1-11.

An overview of what we know about what works in university teaching (based on Schneider & Preckel, 2017)

I’ve been leading a lot of workshops and doing consulting on university teaching lately, and one request that comes up over and over again is “just tell me what works!”. Here I am presenting an article that is probably the best place to start.

The famous "visible learning" study by Hattie (2009) compiled pretty much all available articles on teaching and learning, for a broad range of instructional settings. Its main conclusion was that the focus should be on visible learning, which means learning where learning goals are explicit, there is a lot of feedback happening between students and teachers throughout the interactions, and the learning process is an active and evolving endeavour, which both teachers and students reflect on and constantly try to improve.

However, what works at schools does not necessarily have to be the same that works at universities. Students are a highly select group of the general population, the ones that have been successful in the school system. For that group of people, is it still relevant what teaching methods are being used, or is the domain-specific expertise of the instructors combined with skilled students enough to enable learning?

The article "Variables associated with achievement in higher education: A systematic review of meta-analyses" by Schneider & Preckel (2017) systematically brings together what's known about what works and what doesn't work in university teaching.

Below, I am presenting the headings of the “ten cornerstone findings” as quotes from the article, but I am providing my own interpretations and thoughts based on their findings.

1. “There is broad empirical evidence related to the question what makes higher education effective.”

Instructors might not always be aware of it, because the literature on university teaching has been theoretical for a long time (or they just don't have the time to read enough to gain an overview of the existing literature), but these days there is a lot of empirical evidence of what makes university teaching effective!

There is a HUGE body of literature on studies investigating what works and what does not, but results always depend on the exact context of the study: who taught whom where, using what methods, on what topic, … Individual studies can answer what worked in a very specific context, but they don’t usually allow for generalizations.

To help make results of studies more generally valid, scientists bring together all available studies on a particular teaching method, "type" of student, or teacher in meta-studies. By comparing studies in different contexts, they can identify success factors of applying that specific method across different contexts, thus making it easier to give more general recommendations of what methods to use, and how.

But if you aren't just interested in how to use one method, but in what design principles you should be applying in general, you might want to look at systematic reviews of meta-studies. Systematic reviews of meta-studies bring together everything that has been published on a given topic and try to distill the essence from that. One such systematic review of meta-studies is the one I am presenting here, where the authors have compiled 38 meta-analyses (which were found to be all available meta-analyses relevant to higher education) and thus provide "a broad overview and a general orientation of the variables associated with achievement in higher education".

2. “Most teaching practices have positive effect sizes, but some have much larger effect sizes than others.”

A big challenge with investigations of teaching effectiveness is that most characteristics of teaching and of learners are related to achievement. So great care needs to be taken not to interpret the effect one measures, for example in a SoTL project, as the optimal effect, because some characteristics have much larger effects than others: "The real question is not whether an instructional method has an effect on achievement but whether it has a higher effect size than alternative approaches."

This is really important to consider, especially for instructors who are (planning on) trying to measure how effective they or their methods are, or who are looking in the literature for hints on what might work for them — it's not enough to just check whether a method has a positive effect; one also needs to consider whether even more effective alternatives might exist.

3. “The effectivity of courses is strongly related to what teachers do.”

Great news! What we do as teachers does influence how much students learn! And oftentimes it is through really tiny things we do or don't do, like asking open-ended questions instead of closed-ended ones, or writing keywords instead of full sentences on our slides or the blackboard (for more examples, see point 5).

And there are general things within our influence as teachers that positively contribute to student learning, for example showing enthusiasm about the content we are teaching, being available and helpful to students, and treating students in a respectful and friendly way. All these behaviours help create an atmosphere in which students feel comfortable speaking their minds and interacting, both with their teacher and with each other.

But it is, of course, also about what methods we choose. For example, choosing to have students work in small groups is on average more effective than having them learn either individually or as a whole group together. And small groups become most effective when students have clear responsibilities for tasks and when the group depends on all students' inputs in order to solve the task. Cooperation and social interaction can only work when students are actively engaged, speak about their experiences, knowledge and ideas, and discuss and evaluate arguments. This is what makes it so successful for learning.

4. “The effectivity of teaching methods depends on how they are implemented.”

It would be nice to know that just by using certain methods we could increase teaching effectiveness, but unfortunately the methods also need to be implemented in the right way. Methods can work better or not so well, depending on how they are done. For example, asking questions is not enough; we should be asking open instead of closed questions. So it is not only about choosing the big methods, but also about tweaking the small moments to be conducive to learning (examples for how to do that under point 5).

Since microstructure (all the small details in teaching) is so important, it is not surprising that the more time teachers put into planning details of their courses, the higher student achievement becomes. Everything needs to be adapted to the context of each course: who the students are and what the content is. This is work!

5. “Teachers can improve the instructional quality of their courses by making a number of small changes.”

So now that we know that teachers can increase how much students learn in their classes, here is a list of what works (and many of those points are small and easy to implement!):

  • Class attendance is really important for student learning. Encourage students to attend classes regularly!
  • Make sure to create the culture of asking questions and engaging in discussion, for example by asking open-ended questions.
  • Be really clear about the learning goals, so you can plan better and students can work towards the correct goals, not towards wrong ones that they accidentally assumed.
  • Help students see how what you teach is relevant to their lives, their goals, their dreams!
  • Give feedback often, and make sure it is focussed on the tasks at hand and given in a way that students can use it in order to improve.
  • Be friendly and respectful towards students (duh!),
  • Combine spoken words with visualizations or texts, but
    • When presenting slides, use only a few keywords, not half or full sentences
    • Don’t put details in a presentation that don’t need to be there, not even for decoration or any other purpose; they only distract from what you really want to show.
    • When you are showing a dynamic visualization (simulation or movie), give an oral rather than a written explanation with it, so the focus isn’t split between two things to look at. For static pictures, this isn’t as important.
  • Use concept maps! Let students construct them themselves to organize and discuss central ideas of the course. If you provide concept maps, make sure they don’t contain too many details.
  • Start each class with some form of “advance organizer”: give an overview of the topics you want to go through and the structure in which that will happen.

Even though all these points are small and easy to implement, their combined effect can be large!

6. “The combination of teacher-centered and student-centered instructional elements is more effective than either form of instruction alone.”

There was no meta-analysis directly comparing teacher-centered and student-centered teaching methods, but elements of both have strong effects on student learning. The best solution is to use a combination of both, for example complementing teacher presentations with interactive elements, or having the teacher direct parts of student projects.

Social interaction is really important, and it is most effective when teachers, on the one hand, take on the responsibility to explicitly prepare and guide activities and steer student interactions, while on the other hand they give students the space to think for themselves, choose their own paths and make their own experiences. This means that ideally we would integrate opportunities for interaction into more teacher-centered formats like lectures, as well as make sure that student-centered forms of learning (like small groups or project-based learning) are supervised and steered by the instructor.

7. “Educational technology is most effective when it complements classroom interaction.”

We didn’t have a lot of choice in the recent rise of online learning, but the good news is that it can be pretty much as effective as in-person learning in the classroom. Blended learning, i.e. combining online and in-class instruction, is even more effective, especially when the online part is used purposefully, for example for visualizations.

Blended learning is not as successful as in-person learning when the technology is used mainly to support communication: compared to meeting in person, online communication limits social interaction (or at least it did before everybody got used to it during Covid-19?). Also, the article points out explicitly that instructional technologies are developing quickly and that only studies published before 2014 were included, so MOOCs, clickers, social media and other newer technologies are not covered.

8. “Assessment practices are about as important as presentation practices.”

Despite constructive alignment being one of the buzzwords that is everywhere these days, the focus of most instructors is still on the presentation part of their courses, and not equally on assessment. But the results presented in the article indicate that “assessment practices are related to achievement about as strongly as presentation practices”!

But assessment does not only mean developing exam questions. It also means being explicit about the learning goals and about what it would look like if they were met. Learning outcomes are so important! The instructor needs them to plan the whole course or a single class, to develop meaningful tests of learning, and then to actually evaluate that learning in order to give feedback to students. Students, on the other hand, need guidance on what they should focus on when reflecting on what they learned in past lessons, when preparing for future lessons, and when preparing for the exam.

Assessment also means giving formative feedback (feedback with the explicit and only purpose of helping students learn or teachers improve teaching, not giving a final evaluation after the fact) throughout the whole teaching process. 

Assessment also doesn’t only mean the final exam; it can also mean smaller exercises or tasks throughout the course. Testing frequently (more than two or three times per semester) helps students learn more. Requiring that students show they’ve learnt what they were supposed to learn before the instructor moves on to the next topic has a large influence on learning. And the frequent feedback that can be provided on that basis helps them learn even more.

And: assessment can also mean student-peer assessment or student self-assessment, which on average agree fairly well with assessment by the instructor, but have the added benefit that students explicitly think about the learning outcomes and whether they have been achieved. Of course, this is only possible when learning outcomes are made explicit.

The assessment part is so important because students optimize where to spend their time based on what they perceive as important, which is often related to what they will need to be able to do in order to pass the exam. The explicit nature of the learning outcomes (and their alignment with the exam) is what students use to decide what to spend time and attention on.

9. “Intelligence and prior achievement are closely related to achievement in higher education.”

Even though we as instructors have a large influence on student achievement through all the means described above, there are also student characteristics that influence how well students can achieve. Intelligence and prior achievement are correlated with how well students will do at university (although both are not fixed characteristics that students are born with, but are shaped by the amount and quality of education they have received up to that point). If we want better students, we need better schools.

10. “Students’ strategies are more directly associated with achievement than students’ personality or personal context.”

Student backgrounds and personalities are important for student achievement, but even more important are the strategies students use to learn, prepare for exams, set goals, and regulate how much effort they put into which task. Successful strategies include frequent class attendance as well as a strategic approach to learning, meaning that instead of working hard non-stop, students allocate time and effort to those topics and problems that are most important. But what students do on the small scale matters, too: note taking, for example, is a much more successful strategy when students are listening to a talk without slides. When slides are present, the back-and-forth between slides and notes seems to distract students from learning.

Training these strategies works best within regular classes, rather than in separate extra courses with artificial problems.

So where do we go from here?

There you have it, that was my summary of the Schneider & Preckel (2017) systematic review of meta-analyses of what works in higher education. We now know of many things that work pretty much universally, but even though many of the small practices are easy to implement, that still doesn’t tell us which methods to use for our specific class and topic. So where do we go from here? Here are a couple of points to consider:

Look for examples in your discipline! What works in your discipline might be published in literature that was either not yet included in meta-studies, or that appeared in a meta-analysis published after 2014 (and thus did not make it into this review). So a quick literature search might be very useful! In addition to published scientific studies, there is a wealth of information available online about what instructors perceive to be best practice (for example SERC’s Teach the Earth collection, blogs like this one, or tweets collected under hashtags like #FieldWorkFix and #HigherEd). And of course, always talk to people teaching the same course at a different institution, or who taught it previously at yours!

Look for examples close to home! What works and what doesn’t is also culture-dependent. Try to find out what works in similar courses at your institution, or at a neighboring one with the same or a similar student body and similar learning outcomes.

And last but not least: share your own experiences with colleagues! Via Twitter, blogs, workshops, seminars. It’s always good to share experiences and discuss! And on that note: do you have any comments on this blog post? I’d love to hear from you! :)


Schneider, M., & Preckel, F. (2017). Variables associated with achievement in higher education: A systematic review of meta-analyses. Psychological Bulletin, 143(6), 565.