Tag Archives: Kirsty Dunnett

Guest post by Kirsty Dunnett: “Thinking about my positionality as a teacher and researcher in physics education and academic development”

Inspired by conversations with, and nudging by, Kirsty, I have recently thought a lot, and written (e.g. here), about my positionality in relation to my work. Below, I am posting her response to my blog post, in which she shares her reflections on her own positionality, and on why and how we need to be careful about demanding, expecting, or even just implying that we think people should share that kind of information. Definitely worth a read. Thank you so much for sharing, Kirsty!

Continue reading

Guest post by Kirsty Dunnett: The strength of evidence in (geosciences) education research: might a hierarchy do more harm than good?

Below, Kirsty discusses how focusing too much on the strength of evidence, and not valuing the developmental process itself enough, can potentially discourage efforts to improve teaching, and discourage teachers themselves. Definitely worth reading! :-)

Continue reading

Kirsty Dunnett’s addition to my post on “A conceptual framework for the teaching and learning of generic graduate attributes” (Barrie, 2007)

Haha, I ended my post this morning with “…but at that point I lost interest”, and apparently that’s a great call to action! Kirsty Dunnett, faithful guest blogger on my blog, volunteered to send me the summary of what I had missed! Thanks for sharing, Kirsty, the floor is yours:

Continue reading

“Supporting students in higher education: proposal for a theoretical framework” – Kirsty Dunnett summarizes De Ketele (2014)

Who are you travelling with? A guest post by Kirsty Dunnett.

A summary and some thoughts on:

Supporting students in higher education: proposal for a theoretical framework
By J.-M. De Ketele (Université de Louvain, Belgium)

Continue reading

The Curious Construct of Active Learning: A guest post by K. Dunnett (UiO) on Lombardi et al. (2021)

‘Active Learning’ is frequently used in relation to university teaching, especially in Science, Technology, Engineering and Mathematics (STEM) subjects, where expository lecturing is still a common means of instruction, particularly in theoretical courses. However, many different activities and types of activities can be assigned this label. This review article examines the educational research and development literature in seven subject areas (Astronomy, Biology, Chemistry, Engineering, Geography, Geosciences and Physics) to explore exactly what is meant by ‘active learning’, its core principles and defining characteristics.

Active Learning is often presented or described as a means of increasing student engagement in a teaching situation. ‘Student engagement’ is another poorly defined term, but is usually taken to involve four aspects: social-behavioural (participation in sessions and interactions with other students), cognitive (reflective thought), emotional, and agentic (taking responsibility). In this way, ‘Active Learning’ relates to the opportunities that students have to construct their knowledge. On the other hand, and in relation to practice, Active Learning is often presented as the antithesis of student passivity and of traditional expository lecturing in which student activity is limited to taking notes. This characterisation relates to the behaviour of students in a session.

Most articles and reviews reporting the positive impact of Active Learning on students’ learning don’t define what Active Learning is. Instead, most either list example activities or specify what Active Learning is not. This negative definition introduces an apparent dichotomy which is not as clear-cut as it may initially appear. In fact, short presentations are an important element of many ‘Active Learning’ scenarios: it is the continuous, linear presentation of information that is problematic. Many teaching staff can thus promote interactivity and provide opportunities for both individual and social construction of knowledge while making relatively small changes to previously presentation-based lectures.

That said, the amount of class time in which students interact directly with the material does matter. One study that measured the use and impact of Active Learning strategies (that is, activities requiring students to interact with the material they are learning) on conceptual understanding of Light and Spectroscopy found that high learning gains occurred when at least 25% of scheduled class time was spent by students on Active Learning strategies. Moreover, the quality of the activities and their delivery, and the commitment of both students and staff to their use, are also seen as potentially important elements in achieving improved learning.

In order to develop an understanding of what Active Learning actually means, groups in seven disciplinary areas reviewed their discipline-specific literature, and the perspectives were then integrated into a common definition. The research found that both presentations of Active Learning – in terms of students’ construction of knowledge via engagement, and in contrast to expository lecturing – were used within the disciplines, although the discipline-specific definitions varied. For example, the geosciences definition of Active Learning was:

“Active learning involves situations in which students are engaged in the knowledge-building process. Engagement is manifest in many forms, including cognitive, emotional, behavioural, and agentic, with cognitive engagement being the primary focus in effective active learning,”

while the physics definition was that:

“Active learning encompasses any mode of instruction that does not involve passive student lectures, recipe labs, and algorithmic problem solving (i.e., traditional forms of instruction in physics). It often involves students working in small groups during class to interact with peers and/or the instructor.”

The composite definition to which these contributed is that:

“Active learning is a classroom situation in which the instructor and instructional activities explicitly afford students agency for their learning. In undergraduate STEM instruction, it involves increased levels of engagement with (a) direct experiences of phenomena, (b) scientific data providing evidence about phenomena, (c) scientific models that serve as representations of phenomena, and (d) domain-specific practices that guide the scientific interpretation of observations, analysis of data, and construction and application of models.”

The authors next considered how teaching and learning situations could be understood in terms of the participants and their actions (Figure 1 of the paper). ‘Traditional, lecture-based’ delivery is modelled as a situation where the teacher has direct experience of disciplinary practices, access to data and models, and then filters these into a simplified form presented to the students. Meanwhile, in an Active Learning model students construct their knowledge of the discipline through their own interaction with the elements of the discipline: its practices, data and models. This knowledge is refined through discussion with peers and teaching staff (relative experts within the discipline), and self-reflection.

The concluding sections remark on the typical focus of Discipline Based Educational Research, and reiterate that student isolation (lack of opportunities to discuss concepts and develop understanding) and uninterrupted expository lecturing are both unhelpful to learning, but that “there is no single instructional strategy that will work across all situations.”


The Curious Construct of Active Learning
D. Lombardi, T. F. Shipley, and discipline teams
Psychological Science in the Public Interest, 2021, 22(1), 8-43
https://doi.org/10.1177%2F1529100620973974

A tool to understand students’ previous experience and adapt your practical courses accordingly — by Kirsty Dunnett

Last week, I wrote about increasing inquiry in lab-based courses and mentioned that it was Kirsty who had inspired me to think about this in a new-to-me way. For several years, Kirsty has been working on developing practical work, and a central part of that has been finding out what types and amounts of experience incoming students have with lab work. Knowing this is obviously crucial for adapting labs to what students do and don’t know, and for avoiding frustrations on all sides. And she has developed a nifty tool that helps to ask the right questions and then interpret the answers. Excitingly enough, since this is something that will be so useful to so many people, and since, in light of the disruption to pre-university education caused by Covid-19, the slow route of classical publication is not going to help the students who need help most, she has agreed to share it (for the first time ever!) on my blog!

Welcome, Kirsty! :)

A tool to understand students’ previous experience and adapt your practical courses accordingly

Kirsty Dunnett (2021)

Since March 2020, the Covid-19 pandemic has caused enormous disruption across the globe, including to education at all levels. University education in most places moved online, while the disruption to school students has been more variable; some may have missed entire weeks of educational provision without the opportunity to catch up.

From the point of view of practical work in the first year of university science programmes, this may mean that students starting in 2021 have a very different type of prior experience to students in previous years. Regardless of whether students will be in campus labs or performing activities at home, the change in their pre-university experience could lead to unforeseen problems if the tasks set are poorly aligned to what they are prepared for.

Over the past six years, I have been running a survey of new physics students at UCL, asking about their prior experience. It consists of five statements about the types of practical activities students did as part of their pre-university studies. By knowing students better, it is possible to introduce appropriate – and appropriately advanced – practical work that is aligned to what students can do when they arrive at university (Dunnett et al., 2020).

The question posed is: “What is your experience of laboratory work related to Physics?”, and the five types of experience are:
1) Designed, built and conducted own experiments
2) Conducted set practical activities with own method
3) Completed set practical activities with a set method
4) Took data while teacher demonstrated practical work
5) Analysed data provided
For each statement, students select one of three options: ‘Lots’, ‘Some’, ‘None’, which, for analysis, can be assigned numerical values of 2, 1, 0, respectively.

The data on its own can be sufficient for aligning practical provision to students (Dunnett et al., 2020).

More insight can be obtained when the five types of experience are grouped in two separate ways.

1) Whether the students would have been interacting with and manipulating the equipment directly. The first three statements are ‘Active practical work’, while the last two are ‘Passive work’ on the part of the student.

2) Whether the students have had decision making control over their work. The first two statements are where students have ‘Control’, while the last three statements are where students are given ‘Instructions’.

Using the values assigned to the levels of experience, four averages are calculated for each student: ‘Active practical work’ and ‘Passive work’; ‘Control’ and ‘Instructions’. The number of students with each pair of averages is then counted. This splits the data set into one part that considers ‘Practical experience’ (the first pair of averages) and one that considers ‘Decision making experience’ (the second pair of averages). (Two students with the same ‘Practical experience’ averages can have different ‘Decision making experience’ averages; it is convenient to record the number of times each pair of averages occurs in two separate files.)
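A minimal sketch in Python of the scoring and grouping described above might look like the following (the function and variable names are just illustrative shorthand, not part of the tool itself; each response is assumed to be a list of five answers ordered as statements 1-5):

```python
from collections import Counter

# Answer -> score mapping described above
SCORE = {'Lots': 2, 'Some': 1, 'None': 0}

def experience_averages(response):
    """Return ('Active practical work', 'Passive work') and ('Control', 'Instructions') averages."""
    s = [SCORE[answer] for answer in response]   # scores for statements 1-5, in order
    active = sum(s[0:3]) / 3        # statements 1-3: student handles the equipment
    passive = sum(s[3:5]) / 2       # statements 4-5: student watches or only analyses data
    control = sum(s[0:2]) / 2       # statements 1-2: student has decision-making control
    instructions = sum(s[2:5]) / 3  # statements 3-5: student follows instructions
    return (active, passive), (control, instructions)

def count_pairs(responses):
    """Tally how often each pair of averages occurs, kept in two separate counts (cf. the two files above)."""
    practical, decision = Counter(), Counter()
    for response in responses:
        p, d = experience_averages(response)
        practical[p] += 1
        decision[d] += 1
    return practical, decision

# e.g. count_pairs([['Lots', 'Some', 'Lots', 'None', 'Some'],
#                   ['Some', 'None', 'Lots', 'Lots', 'Lots']])
```

Keeping the two tallies separate mirrors the splitting of the data set into the ‘Practical experience’ and ‘Decision making experience’ parts.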

To understand the distribution of the experience types, one can use each average as a co-ordinate – so each pair gives a point on a set of 2D axes – and draw a circle at each point, with the radius of the circle determined by the fraction of students in the group who had that pair of averages. Examples are given in the figure.

Figure: Prior experience of Physics practical work for students at UCL who had followed an A-level scheme of studies before coming to university. Circle radius corresponds to the fraction of responses with that pair of averages; the most common pairs (largest circles, over 10% of students) are labelled with the percentages of students. The two years considered here are students who started in 2019 and in 2020. The Covid-19 pandemic did not cause disruption until March 2020, and students’ prior experience appears largely unaffected.
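One plausible way to draw such a bubble plot is with matplotlib, reusing the output of count_pairs from the sketch above; the scaling factor and styling below are arbitrary choices, not how the figure itself was produced:

```python
import matplotlib.pyplot as plt

def plot_experience(pair_counts, xlabel, ylabel):
    """Draw one circle per pair of averages; the radius scales with the fraction of students."""
    total = sum(pair_counts.values())
    xs = [pair[0] for pair in pair_counts]
    ys = [pair[1] for pair in pair_counts]
    # plt.scatter takes marker *areas* (in points^2), so square the scaled
    # fraction to make the drawn radius proportional to the fraction of students.
    sizes = [((count / total) * 60) ** 2 for count in pair_counts.values()]
    plt.scatter(xs, ys, s=sizes, alpha=0.6)
    plt.xlabel(xlabel)
    plt.ylabel(ylabel)
    plt.xlim(-0.2, 2.2)
    plt.ylim(-0.2, 2.2)
    plt.show()

# practical, decision = count_pairs(responses)
# plot_experience(practical, 'Active practical work', 'Passive work')
# plot_experience(decision, 'Control', 'Instructions')
```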

With over a year of significant disruption to education and limited catch-up opportunities, the effects of the pandemic on students starting in 2021 may be significant. This is a quick tool that can be used to identify where students are, and, by rephrasing the statements of the survey to consider what students are being asked to do in their introductory undergraduate practical work – and adding additional statements if necessary – it provides an immediate check of how students’ prior experience lines up with what they will be asked to do in their university studies.

With a small amount of adjustment to the question and statements as relevant, it should be easy to adapt the survey to different disciplines.

At best, it may be possible to actively adjust the activities to students’ needs. At worst, instructors will be aware of where students’ prior experience may mean they are ill-prepared for a particular type of activity, and be able to provide additional support in session. In either case, the student experience and their learning opportunities at university can be improved through acknowledging and investigating the effects of the disruption caused to education by the Covid-19 pandemic.


Dunnett, K., Kristiansson, M. K., Eklund, G., Öström, H., Rydh, A., & Hellberg, F. (2020). "Transforming physics laboratory work from 'cookbook' to genuine inquiry". https://arxiv.org/abs/2004.12831

Increasing inquiry in lab courses (inspired by @ks_dnnt and Buck et al., 2008)

My new Twitter friend Kirsty, my old GFI-friend Kjersti, and I have been discussing teaching in laboratories. Kirsty recommended an article (well, she did recommend many, but this is one that I’ve read and have since been thinking about) by Buck et al. (2008) on “Characterizing the level of inquiry in the undergraduate laboratory”.

In the article, they present a rubric that I found intriguing: It consists of six different phases of laboratory work, and then assigns five levels ranging from a "confirmation" experiment to "authentic inquiry", depending on whether or not instruction is given for the different phases. The "confirmation" level, for example, prescribes everything: the problem or question, the theoretical background, which procedures or experimental designs to use, how the results are to be analysed, how the results are to be communicated, and what the conclusions of the experiment should be. For an open inquiry, only the question and theory are provided, and for authentic inquiry, all choices are left to the student.
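To make the structure concrete, here is a small sketch of one way to encode the rubric: an experiment is described by the set of phases for which instruction is provided, and each level corresponds to a particular set of provided phases. Only the three levels explicitly described above are encoded (the rubric's two intermediate levels would sit between "confirmation" and "open inquiry"), and all names are my own shorthand rather than labels taken verbatim from Buck et al. (2008):

```python
# Phases of laboratory work distinguished by the rubric (shorthand labels)
PHASES = ('problem/question', 'theory/background', 'procedures/design',
          'results analysis', 'results communication', 'conclusions')

# The three levels described above, expressed as the set of phases that are prescribed;
# the rubric's two intermediate levels are omitted here.
LEVELS = {
    'confirmation': set(PHASES),                                 # everything prescribed
    'open inquiry': {'problem/question', 'theory/background'},   # only question and theory given
    'authentic inquiry': set(),                                  # all choices left to the student
}

def classify(provided_phases):
    """Return the rubric level(s) whose prescribed phases match this experiment exactly."""
    provided = set(provided_phases)
    return [level for level, prescribed in LEVELS.items() if prescribed == provided]

# e.g. classify({'problem/question', 'theory/background'})  # -> ['open inquiry']
```

Writing it down like this also foreshadows the limitation discussed below: every level in the rubric prescribes the phases up to some point and leaves everything after that open, so an experiment that, say, provides the conclusions but leaves the experimental design open does not fit any level.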

The rubric is intended as a tool to classify existing experiments rather than to design new ones or modify existing ones, but because that’s my favourite way to think things through, I tried plugging my favourite "melting ice cubes" experiment into the rubric. Had I thought about it a little longer before doing that, I might have noticed that, going from left to right, I would only be copying fewer and fewer cells; but even though it sounds like a silly thing to do in retrospect, it was actually still helpful to go through the exercise.

It also made me realize the implications of Kirsty’s heads-up regarding the rubric: "it assumes independence at early stages cannot be provided without independence at later stages". That is obviously a big limitation; one can think of many other ways to use experiments where things like how results are communicated, or even the conclusion, are provided, while earlier steps are left open for the student to decide. Providing guidance on how to analyse results without prescribing the experimental design might also be really interesting! So while I was super excited at first to use this rubric to provide an overview of all the different ways labs can possibly be structured, it is clearly not comprehensive. And a better idea than making a comprehensive rubric would probably be to really think about why instruction for any of the phases should or should not be provided. A little less cook-book, a little more thought here, too! But still a helpful framework to spark thoughts and conversations.

Also, my way of going from one level to the next by simply withholding instruction and information is not the best way to go about it (even though I think it works ok in this case). As the "melting ice cubes" experiment shows unexpected results, it usually organically leads into open inquiry, as people tend to start asking "what would happen if…?" questions, which I then encourage them to pursue (but this usually only happens in a second step, after they have already run the experiment "my way" first). This relates well to "secret objectives" (Bartlett and Dunnett, 2019), where a discrepancy appears between what students expect based on previous information and what they then observe in reality (for example, in the "melting ice cubes" case, students expect to observe one process and find out that another one dominates), and where many jumping-off points exist for further investigation, e.g. the condensation pattern on the cups, or the variation of parameters (what if the ice was forced to the bottom of the cup? what is the influence of the exact temperatures, or of the water depth, …?).

Introducing an element of surprise might generally be a good idea to spark interest and inquiry. Huber & Moore (2001) suggest using "discrepant events" (their example is dropping raisins into carbonated drinks, where they first sink to the bottom and then rise as gas bubbles attach to them, only to sink again when the bubbles break upon reaching the surface) to initiate discussions. They then suggest following up the observation of the discrepant event with a "can you think of a way to…?" question (i.e. make the raisin rise to the surface faster). The "can you think of a way to…?" question is followed by brainstorming of many different ideas. Later, students are asked "can you find a way to make it happen?", which means that they pick one of their ideas and design and conduct an experiment. Huber & Moore (2001) then suggest a last step, in which students are asked to produce a graphical representation of their results, or some other product, and "defend" it to their peers.

In contrast to how I run my favourite “melting ice cubes” experiment when I am instructing it in real time, I am using a lot of confirmation experiences, for example in my advent calendar “24 days of #KitchenOceanography”. How could they be re-imagined to lead to more investigation and less cook-book-style confirmation, especially when presented on a blog or social media? Ha, you would like to know, wouldn’t you? I’ve started working on that, but it’s not December yet, you will have to wait a little! :)

I’m also quite intrigued by the "product" that students are asked to produce after their experimentation, and by what would make a good type of product to ask for. In the recent iEarth teaching conversations, Torgny has been speaking of "tangible traces of learning" (in quotation marks, which makes me think there is definitely more behind that term than I realize, but so far my brief literature search has been unsuccessful). But maybe that’s why I like blogging so much: because it makes me read articles all the way to the end, think a little more deeply about them, and put the thoughts into semi-cohesive words, thus giving me tangible proof of learning (that I can even google later to remind me what I thought at some point)?

Then again, maybe everybody should be allowed to find their own kind of product to produce, depending on what works best for them. On the other hand, for the iEarth teaching conversations, I really like the format of one page of text, maximum, because I really have to focus and edit it (not so much space for rambling on as on my blog, but a substantially higher time investment… ;-)). Also, I think giving some kind of guidance is helpful, both to avoid students getting spoilt for choice, and to make sure they focus their time and energy on things that help the learning outcomes. Cutting videos, for example, might be a great skill to develop, but it might not be the one you want to develop in your course. Or maybe you do, or maybe the motivational effects of letting them choose are more important, in which case that’s great, too! One thing that we’ve done recently is to ask students to write blog or social media posts instead of classical lab reports, and that worked out really well and seems to have motivated them a lot (check out Johanna Knauf’s brilliant comic!!!).

Kirsty also mentioned a second point regarding the Buck et al. (2008) rubric to keep in mind: it is just about what is provided by the teacher, not about the students’ role in all this. That’s an easy trap to fall into, and one that I don’t have any smart ideas about right now. And I am looking forward to discussing more thoughts on this, Kirsty :)

In any case, the rubric made me think about inquiry in labs in a new way, and that’s always a good thing! :)


Bartlett, P. A., & Dunnett, K. (2019). Secret objectives: promoting inquiry and tackling preconceptions in teaching laboratories. arXiv:1905.07267v1 [physics.ed-ph]

Buck, L. B., Bretz, S. L., & Towns, M. H. (2008). Characterizing the level of inquiry in the undergraduate laboratory. Journal of College Science Teaching, 38(1), 52-58.

Huber, R. A., & Moore, C. J. (2001). A model for extending hands-on science to be inquiry based. School Science and Mathematics, 101(1), 32-41.