I attended iEarth’s GeoLearning Forum today and yesterday, and had a lot of great conversations with amazing students from the four iEarth institutions: the Universities of Bergen, Oslo, and Tromsø, and UNIS. One student, Sverre, told me about having read articles on learning and teaching as part of a normal geoscience class (how awesome is that? Hat tip to Bjarte!), and one of those articles was “the story of Robert and Susan”. Or: “What the student does: teaching for enhanced learning” (Biggs, 1999).
That article describes two different types of students: Susan, who learns well from traditional university teaching, i.e. lectures and exercises, and Robert, who does not. Susan has a deep approach to learning: She comes to class prepared and with her own questions that help her integrate what she learned with what she already knows and what she wants to learn to reach her academic and career goals. Robert, on the other hand, is not as invested in the subject and attends university to obtain a degree that he needs for a job. He has a surface approach to learning: He collects individual bricks and delivers them at the exam. Susan does really well on the exam, Robert does not. Or at least, that is the situation when both are taught in a conventional way. But there are ways to get Robert to learn similarly well as Susan.
The first step is that their teachers need to understand that Susan’s and Robert’s performance is not inherent in their personalities, but that they as teachers can influence how well both learn. For that, there need to be clear learning objectives, and it needs to be clear how the learning objectives and the assessment correspond. Also, students need to want to learn: “‘Motivation’ is a product of good teaching, not its prerequisite.” Additionally, students need to have the opportunity to focus on the task without feeling the pressure to put all their focus on passing the test. And they need to be able to collaborate with their peers and teachers.
Teachers usually come to that understanding by undergoing two developmental steps.
Initially, many teachers believe that what Robert and Susan do is determined by who they are. Once teachers recognise this is not the case, they commonly believe that what Robert and Susan do mainly depends on how well the teacher taught (which often results in a focus on class management). But upon reflecting on that, teachers recognise that learning depends on what activities students actually do when they learn.
When teachers have reached that step, they employ what is called “backward design” or “constructive alignment”: First, they consider what the learning outcomes are; what students should be able to DO after instruction. Then, building on that, the teacher comes up with assessments that check whether or not, or to what extent, students are able to do it. And lastly, the teacher develops learning activities in which students learn and practice exactly what they will later do on the exam.
Constructive alignment of a course can happen independently of the methods used in that course, but there are methods that make it particularly easy to achieve constructive alignment: using problem-based learning or a learning portfolio.
In constructively aligned courses, Robert is learning in much the same way as Susan already did in conventional teaching: He is integrating new knowledge with what he knew before, he asks questions that help him connect new ideas with old ones, he evaluates information, and does all the higher-level thinking, because the tasks in class require it. This means that the gap between Robert and Susan gets smaller and smaller, and that we are teaching both equally well. And should that not always be our goal?
I really enjoyed re-discovering this article, thanks, Sverre!
John Biggs (1999) What the Student Does: teaching for enhanced learning, Higher Education Research & Development, 18:1, 57-75, DOI: 10.1080/0729436990180105
Yesterday morning on Twitter, I saw this quote: “The sudden grounding of academics has demonstrated that air travel ‒ previously deemed a necessary part of a successful academic career and university internationalization ‒ was not in fact essential.” This, naturally, led me directly to the original article on “Online conferencing in the midst of COVID-19: an “already existing experiment” in academic internationalization without air travel” by Jack & Glover (2021).
In the article, the authors use the “already existing experiment” — academics around the world being grounded due to COVID-19 — to look at what alternatives to traditional physical conferences exist and how they compare both to those conferences and among each other. And their bottom line is quite clear: Even though many people felt they “had” to fly to conferences to stay competitive in academia, there are other ways to network and lead scientific discussions than being physically present (or, and that’s my comment here, at least as long as that’s what most people do).
Both synchronous and asynchronous virtual conferences have many benefits over physical conferences, for example that they are a lot easier to attend: they are cheaper, reduced travel time makes it “worthwhile” to attend more events, and combining them with e.g. caring responsibilities is a lot easier since they can be attended from home. This leads to academics attending more, and more diverse, events, possibly organised in regions of the world that they would otherwise not have considered for physical conferences, which means that conferences (and all the scientific discussions and networking benefits ascribed to them) become a lot more accessible. Also, for some academics, the threshold to network and enter discussions is substantially lowered when those are happening in an online format.
At the same time, virtual conferences do have challenges that are different from the ones experienced at physical conferences: for synchronous conferences, for example, the different time zones of participants need to be considered. Conference sessions outside of normal working hours might conflict with other responsibilities (or sleep!), creating a different set of problems, or a distribution of attendants based on what time zones are convenient given their physical location. In any case, boundaries between work and home might become blurred, and participants might not be as engaged in the conference if they can use the time to simultaneously (and without detection) do home chores or other things. Also, screen fatigue is real and can become a problem. Lastly, virtual interactions might be experienced “as less energizing and inspiring than face-to-face interactions”.
And while this is certainly all true, I want to offer my own perspective on this last point. I run a workshop on “taking ownership of your own mentoring” quite regularly, and have done so both before and during the pandemic. The workshop is always advertised as “something to do with networking”, and participants freely choose to participate (or not, in which case they are not part of my sample). This means that my participants are usually people who feel they want to learn more about networking, and while during the pandemic there has been a much increased number of questions about building and maintaining networks online via social media, there are still many questions and anxieties related to how to use physical conferences to make new contacts and engage in discussions with new people. So the assumption that seems to be generally out in the world, that conferences are the best way to build networks, needs at least to be qualified to include “…when participants know how to do it”. Just last week I heard from a quite deflated participant of a recent networking event, one of the first in-person events taking place again, who did not talk to anyone they didn’t already know and wondered how they could have approached some people that they would actually really have liked to meet, but then ended up only observing from across the room. So I would argue that there is a need for opportunities to learn how to use both formats, physical and virtual conferences, to their best advantage!
As for the energizing and inspiring face-to-face meetings: I feel like that also depends on the kind of interactions happening virtually. Since May 2020, I have been working with a new group of people, many of whom I did not know before and have never met in person, and some of the (virtual!) meetings I have had with them have been the most significant, energizing, inspiring meetings I have ever had. So I see a huge potential in virtual meetings that for many others doesn’t seem to have materialized in the same way.
I also see many people waiting “to get back to normal”, meaning flying around the world like before COVID-19, and it worries me, especially when those flights then don’t result in all the networking and discussions people were hoping for, but in frustrated academics who wish they had talked to someone that they instead only saw from across the room. Jack & Glover (2021) make a strong case about the greenhouse gas emissions that can be avoided if academic travel is scaled down (which is actually an important part of their article that I just glossed over; it seems so obvious to me that we should be flying less or not at all for exactly that reason!), and call for those in charge (like funders) to make sure that air travel isn’t incentivized. But I am expecting things to pick up substantially once travel becomes easier again, unless we make sure that people don’t feel like it will be a huge hit to their careers if they opt out of flying when their peers don’t.
I think we need to work on two things: Create spaces that help fulfil people’s networking and discussion needs in virtual settings, and equip them with the skills to actually do efficient networking and discussing when they want to, in both virtual and in-person settings. Of course there are many awesome initiatives out there to do both, but how do we make sure people even know about them and feel comfortable and confident using them? And how do we do it before people are back to their old flying ways and feel again like they cannot opt out of it without hurting their careers?
Tullia Jack & Andrew Glover (2021) Online conferencing in the midst of COVID-19: an “already existing experiment” in academic internationalization without air travel, Sustainability: Science, Practice and Policy, 17:1, 293-307, DOI: 10.1080/15487733.2021.1946297
In the article, the goal is to teach about the movements of the Earth, Mars, and the Sun over a day or a year. Those are investigated from three different frames of reference: a terrestrial, a geocentric, and a heliocentric one. Students investigated the movements by using either a printed model on which they trace the movements with their fingers, or a large version drawn on the ground, which several students walk on, representing the different objects. I find this idea intriguing — I know that in the one case where I’ve used a similar embodied experience before (to explain why sound is refracted towards the areas of lowest speed, or why waves turn towards the beach), it has left a lasting impression.
Unfortunately, the authors did not have a classical “non-embodied” control group, so we don’t know whether their two approaches work better than any of the classical ones. But what they do find is that both seem to work well, and that — contrary to their expectations — the large one where students actually walked on the diagrams did not work better than the smaller ones. They suggest that there might be several reasons for this: having to coordinate the whole body and with other people might constitute a high cognitive load, drawing resources away from otherwise processing what’s going on. Also having other people, and especially the teacher, looking at their bodies might make them self-conscious, again drawing capacities away from where they would be best allocated for learning.
But in any case, I find the suggestion of using embodied learning in such a way in geoscience education fascinating. It seems quite unusual, and might not be feasible in all cases, but it’s definitely something that I’ll keep in mind as one possible strategy to be considered in the future!
What do you think? Would that work for your topic and your students?
Rollinde, E., Decamp, N. & Derniaux, C. (2021). Should frames of reference be enacted in astronomy instruction?. Physical Review. Physics Education Research, 17(1). (pdf here)
Maybe it was because of the contexts in which I encountered it, but I always perceived “co-creation” as an empty buzzword without any actionable substance to it. I have only really started seeing the huge potential and getting excited about it since I met Catherine Bovill. Cathy and I are colleagues in the Center for Excellence in Education iEarth, and I have attended two of her workshops on “students as partners” and now recently read her book (Bovill, 2020). And here are my main takeaways:
Speaking about students as partners can mean very many, very different things. The partnership between students and teachers is “a collaborative, reciprocal process through which all participants have the opportunity to contribute equally, although not necessarily in the same ways, to curricular or pedagogical conceptualizations, decision-making, implementation, investigation, or analysis” (Cook-Sather et al., 2014). For me, understanding the part about contributing equally, although not necessarily in the same ways, really helped to get over objections like “but I am responsible for what goes on in my course and for students having the best possible environment for their learning. How can I put part of that responsibility on students? And can they even contribute in a meaningful way when they are not experts yet?” The key is that students contribute as equals, but that does not mean that we are sharing responsibility or tasks (or anything, necessarily!) 50/50.
Including students as partners to co-create their learning and teaching has many advantages: the forms of teaching and learning that are created in such a process are more engaging to students and more human in general. Since learning feels more relevant to students, it is enhanced and becomes more inclusive. Students also experience new roles, which helps them become more independent, secure, and responsible. And it seems to be a lot of fun for the teacher, too, because a lot of new opportunities for positive interactions are created.
“Students as partners” does not mean that one necessarily has to jump into the pool at the deep end and re-design the whole curriculum from scratch. There is a whole continuum of increasing student participation where a teacher only gradually shares more and more control, and every small move towards more participation is a step in the right direction. This includes many smaller steps I’ve implemented in my teaching already, without even realising that that could be counted as working towards “students as partners”!
Some of those small steps suggested in the book and that can already have a positive impact include
Reserving one or two lessons at the end of the semester for perspectives or topics that students would like included (which I personally have really good experiences with!).
Giving student questions back into the group with the question “what do you think? and why?”, sharing the power to answer questions rather than claiming it solely for the teacher.
Doing a “note-taking relay”: at regular intervals, the teacher stops and gives students time to take notes. Students then pass their notes on to their neighbour. At the next note-taking break, they take notes on the piece of paper in front of them, and then pass it on to the next neighbour. They are thus creating a documentation of the class with and for each other.
Inviting students to create study guides or resources for next year’s students.
Inviting them to design infographics, slides, diagrams on important topics, or present their own role plays of different theories in fictitious situations, which are then used in teaching their own class.
I think I might have underestimated this last point until now. When I saw my name mentioned in the newsletters of my two favourite podcasts this week, it made me feel super proud! If students feel only a fraction of that pride when their work is featured in a course as something that other people can learn from, it is something we should be doing MUCH MORE!
Other things that come to my mind that share responsibility in small ways or strengthen relationships:
Taking the first couple of minutes of a session to go over students’ “wonder questions”
Spending more time on peer interaction or including peer feedback, to share space and responsibility with students
If you (and they!) so choose, students could also become partners on bigger parts of the course, and especially on designing their own assessment, and in evaluating the class. Here are some examples described in the book:
In one of her own courses on the topic of educational research (which probably included how to gather data in order to evaluate teaching and learning), Cathy invited students to pick aspects of her course which they wanted to evaluate, and then work with her to design an evaluation, analyse the data and present their findings.
She also describes how she invited Master students to co-design dissertation learning outcomes, and that it was possible to include it in the official university regulations: In addition to the ones that are prescribed for all students, each student gets to design one individually in collaboration with their supervisor.
Another idea she presents is to give students keywords and let them create their own essay titles including those keywords. They have the freedom to choose what question they find most interesting related to a certain topic, while the teacher can make sure the keywords that are important from their point of view are included. But it is then important that students and teacher work together to make sure the scope is right and there is enough literature to answer that question!
And it is possible to let students vote on the weighting of different assessment components towards their final grade. This could even be done with boundary conditions, e.g. that each assignment has to count for at least a certain percentage. Apparently the outcomes of such votes do not vary much from year to year, but they still increase student buy-in a lot!
Or, going further along that continuum of students as partners, students can get involved in the whole process of designing, conducting, evaluating and reporting on a course.
Cathy presents an example of a business course where student groups come up with business ideas in the beginning and then everybody discusses what students would need to learn in order to make those ideas become reality. Those topics are then presented to each other by different student groups.
The point above reminds me of something I heard on a podcast, where the students also got involved in presenting materials and the teacher gave them the choice of which topics they wanted to present themselves and which topics they would prefer taught by the teacher. This sounds like a great idea to give the students the opportunity to pick the topics they are really interested in and at the same time leave the seemingly less attractive topics (or those where they would really value the teacher’s experience in teaching them) to the teacher.
In a project I am currently working on with Kjersti and Elin, we bring together students who took a class the previous year with students who are taking it this year in order for them to do some tank experiments together, but working towards different learning outcomes depending on their level. Here the older students help the younger ones by engaging in dialogue with them and acting as role models, while also “learning through teaching”. We are working on engaging the students in designing the learning environment, and it is super exciting!
In a recent iEarth Digital Learning Forum, Mattias and Guro described the process of completely re-designing a course in dialogue between the teacher and a team of students. And not only did they co-design the course, they also presented it together (which is a step that is really easy to forget when the partnership isn’t fully internalized yet!).
I really like the framework of “students as partners” as a reminder to think about including students in a different way, and especially to think about it as a continuum where it’s ok — and even encouraged! — to start small, and then gradually build on it. And I am excited about trying more radical forms of “students as partners” in the future!
Bovill, C. (2020). Co-creating learning and teaching: Towards relational pedagogy in higher education. Critical Publishing.
Cook-Sather, A., Bovill, C., & Felten, P. (2014). Engaging students as partners in learning and teaching: A guide for faculty. John Wiley & Sons.
Kjersti and I have been talking about asking students to take turns and write summaries of lectures throughout the whole semester. We would then give feedback on them to make sure we get a final result that is correct (and that the student learns something, obviously). The summaries are then collected into a booklet that students can use to study for the exam. I did that when I was teaching the “introduction to oceanography” 10 years ago and liked it (also great feedback for me on what students thought was important!), but in the end it is just one more thing we are “asking” the students to do, so is it really such a good idea?
Then on my lunchtime walk today, I listened to “lecture breakers” episode 78. Great episode as always! Early in the podcast several design criteria are mentioned: for example, for intrinsic motivation it’s important to give students choice and show the relevance of what they are doing to their real life (more on self-determination theory here), and from an equity perspective, it’s important to provide different perspectives on a topic. Those stuck with me, and then one piece of advice was given: to let students adopt roles. Generic roles like facilitator, researcher, devil’s advocate; or roles that are specific to the topic of discussion. They did not really elaborate on it very much, but what happened in my head is this: What if we combined our summaries with the idea of students choosing roles?
There are so many stakeholders in science, and students might have preferred approaches or might want to try on potential future roles. For example, someone could choose to take on the role of a minutes keeper and write a classical summary of the main points of a lecture. That would be all I asked my students to do back in the day, so not super exciting, but maybe it is what someone would choose? Or someone might choose to be a science journalist that does not only document the main points, but additionally finds a hook for why a reader should care, so for example relating it to recent local events. Or someone could pick the role of devil’s advocate and summarise the main points but also try to find any gaps or inconsistencies in the story line. Or someone might want to be a teacher and not only summarise the main points, but also find a way to teach them better than the lecturer did (or possibly to a different audience). Or someone might want to be a curator and combine the key points of the lecture with other supporting resources. Or an artist, or a travel guide, …? Or, of course, there are specific roles depending on the topic: A fisherman? Someone living in a region affected by some event? A policy maker? A concerned citizen?
Choosing such a role might give students permission to get creative. A summary does not necessarily have to be a written piece; it could also be a short podcast or a piece of art, if they so choose. That would definitely make it a lot more fun for everybody, wouldn’t it? No idea if students would like this new format, but it’s definitely something that I want to bring up in discussions, and — if they think it’s a good idea — also give a try some time soon!
One idea that I encounter a lot in higher education workshops is the idea of learning styles: that some people are “visual learners” who learn best by looking at visual representations of information, while other people learn best from reading, or from listening to lectures, and that those are traits we are born with. When I encounter these ideas, they usually come with the understanding that we, as teachers, should figure out students’ learning styles and cater to each individual student’s style. Which — even though I haven’t seen that actually happening in practice very much — if we take it seriously, obviously adds a lot of pressure and work that is taken on with the best intentions of supporting students in the best possible way.
But learning styles are a bit of a myth. When you ask people, yes, they will tell you about their preferences for learning. And certainly there are people who work better with one representation than with another. But that in itself is not enough to support the whole theory of learning styles.
In a review article, Pashler et al. (2008) show what kind of studies we would need to conduct in order to conclude that learning styles actually exist: We would have to separate a group of students based on their learning styles, teach part of each group with a method designed for one of the learning styles and the other part with a method designed for the other learning style, and test all groups with the same test.
Pashler et al. (2008) then show what would count as evidence for the existence of learning styles and what would not (which is one reason why I enjoyed reading the article so much, check out their Figure 1!): Only if students with one learning style learn best from the method designed for their learning style, and students with the other learning style learn best from the method designed for them, can we conclude that students should be taught using the method that matches their learning style. And Pashler et al. (2008) state that they could not find studies showing that kind of evidence. If one method works better for all students with one learning style but the other method does not work better for students with the other, then we might still consider offering different methods to different people, but clearly learning style isn’t the criterion we should be using to assign methods.
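For readers who like to see the logic spelled out: the crossover criterion can be sketched in a few lines of Python. The numbers and the function name are entirely invented for illustration; this is just the shape of the evidence Pashler et al. ask for, not their data.

```python
# Hypothetical mean test scores (0-100) for the four cells of the
# crossover study design: two (self-reported) learner groups, each
# taught with both methods. All numbers are made up for illustration.
scores = {
    ("visual learner", "visual method"): 78,
    ("visual learner", "verbal method"): 70,
    ("verbal learner", "visual method"): 72,
    ("verbal learner", "verbal method"): 80,
}

def supports_meshing(scores):
    """Return True only if EACH group scores best under the method
    matched to its style (a crossover interaction). If one method is
    simply better for everyone, that is NOT evidence for learning
    styles, even though one cell comparison may look favourable."""
    visual_ok = (scores[("visual learner", "visual method")]
                 > scores[("visual learner", "verbal method")])
    verbal_ok = (scores[("verbal learner", "verbal method")]
                 > scores[("verbal learner", "visual method")])
    return visual_ok and verbal_ok

print(supports_meshing(scores))  # True for these invented numbers
```

Note that if we instead imagined the verbal method scoring higher for both groups, the function would return False: a method that helps everyone is an argument for using that method, not for diagnosing styles.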
Then why do so many people believe in learning styles, to the point that it’s worth building an entire industry around them? It seems that the myth that our learning style is something we are born with is really common, especially among educators working with young children (Nancekivell et al., 2020). What that means is that the idea that “you are someone who learns best from looking at pictures” or “you are someone who learns best from listening” is propagated from kindergarten teachers to really young kids, and that we are likely to grow up with a belief about how we learn best based on what we were told when we were young. That belief is usually not challenged (and why would we challenge it?), and since it’s a framework that we have accepted for ourselves and others, we are likely to start diagnosing learning styles in others later on and thus keep the myth going. And since the learning style idea is never challenged, we are likely to adopt inefficient strategies based on our belief about what our learning style is.
What does that then mean for which methods we should be using in teaching? Pashler et al. (2008) conclude that we should focus our time and energy on methods for which there is empirical evidence of effectiveness (see for example here). Mixing up representations and including visual, auditory, tactile, … learning is probably still good — only tying it to specific learners and suggesting to them that they are inherently better at learning from one representation over all others, is not. Or if it is, there is no empirical evidence of it.
Nancekivell, S. E., Shah, P., & Gelman, S. A. (2020). Maybe they’re born with it, or maybe it’s experience: Toward a deeper understanding of the learning style myth. Journal of Educational Psychology, 112(2), 221.
Pashler, H., McDaniel, M., Rohrer, D., & Bjork, R. (2008). Learning styles: Concepts and evidence. Psychological science in the public interest, 9(3), 105-119.
After being “invited” to do some service work because someone noticed “that there was nobody on the committee without a beard” (gee, thanks for making me feel like you appreciate my qualifications!), and then the next day feeling all kinds of stereotype threats triggered in a video call where I noticed I was the only woman out of more than a dozen people, I finally read Sandra Laursen & Ann E. Austin’s book “Building Gender Equity in the Academy. Institutional Strategies for Change” this weekend. And it was great!
The book compiles many years of experience in the NSF’s ADVANCE program into a compelling collection. After setting the scene and describing the structural problems that women face in academia, all based on hard data that show the scope of the issue, the whole book is basically a call to action to “fix the system, not the women” while giving actionable suggestions for how to do this.
The book is structured along four main themes:
Many processes in recruitment, hiring, tenure, and promotion are biased, but there are ways to counteract the biases.
Workplaces themselves need an overhaul to make them more equitable, for example by addressing institutional leadership, improving climate at departments, or making gender issues more visible.
People need to be seen and supported as whole persons if we want to attract diversity into the workplace, for example by supporting dual-career hires, allowing flexibility in work arrangements, or providing adequate accommodation.
While we are still working on the whole system becoming more equitable, individual success of people who are already in the system can be supported by providing grants, development programmes, or mentoring and networking.
For each of the four main themes, four strategies are presented together with different examples of how the strategy has been implemented in one of the ADVANCE projects, and reflections on how it worked.
The authors explain that even though their focus in the book is on gender (because the program that funded the projects they were evaluating was one focussing on gender), all the strategies most likely work for increasing diversity for other characteristics, too.
I found this really interesting from several different perspectives:
As someone who wants to support cultural change, I like that this book gives actionable suggestions and reflections on how they worked in different contexts. It will be great to refer back to this book whenever I see that there is potential for changes in policy and procedures, because there will certainly be good ideas in there that have already been tested and that we can build on! For everybody working in uni admin in any capacity, I would totally recommend keeping this book close by.
As a woman in science, I used to be very active in, and on the leadership board of, the Earth Science Women’s Network, where I met Sandra and really came to appreciate her perspective on things (and that she would join me for early morning swims in the lake!). I’m just super happy to see that there is such a great body of work that we can all build on together to change things!
As someone who’s getting more and more interested in exploring the literature on faculty development and cultural change, this is a really good review of the literature related to gender equity in the academy. This is a great starting point for quickly finding relevant literature on this topic.
So there is really no reason for anyone to not pick up this book and learn how to build gender equity in the academy :)
One thing I really enjoy about teaching virtually is that it is really easy to address everybody by their names with confidence, since their names are always right there, right below their faces. But that really does not have to end once we are back in lecture theatres again, because even in large classes, we can always build and use name tents. And voilà: names are there again, right underneath people’s faces!
Sounds a bit silly when there are dozens or hundreds of students in the lecture theatre, both because it has a kindergarten feel and because there are so many names, some of them too far away to read from the front, and also you can’t possibly address this many students by name anyway? In last week’s CHESS/iEarth workshop on “students as partners”, run by Cathy and Mattias, we touched upon the importance of knowing students’ names, and that reminded me of an article that I’ve been wanting to write about forever, and that actually gives a lot of good reasons for using name tents: “What’s in a name? The importance of students perceiving that an instructor knows their names in a high-enrollment biology classroom” by Cooper et al. (2017). So here we go!
In that biology class with 185 students, the instructors encouraged the regular use of name tents (those folded pieces of paper that students put up in front of themselves), and afterwards their impact was investigated. What they found is that in the large classes students had taken previously, only 20% of the students thought that instructors knew their names. In this class, it was actually 78% (even though in reality, instructors knew only 53% of the names). And 85% of students felt that instructors knowing their names was important. As Cooper and colleagues found out, it is important for nine different reasons that fall into three categories:
When students think the instructor knows their names, it affects their attitude towards the class since they feel more valued and also more invested.
Students then also behave differently, because they feel more comfortable asking for help and talking to the instructor in general. They also feel like they are doing better in the class and are more confident about succeeding in class.
It also changes how they perceive the course and the instructor: In the course, it helps them build a community with their peers. They also feel that it helps create relationships between them and the instructor, and that the instructor cares about them, and that the chance of getting mentoring or letters of recommendation from the instructor is increased.
So what does that mean for us as instructors? I agree with the authors that this is a “low-effort, high-impact” practice. Paper tents cost next to nothing and they hardly require any preparation on the instructor’s side (other than perhaps supplying some paper). Using them is as simple as asking students to make them, and then regularly reminding them to put them up again (in the class described in the article, this happened both verbally and on the first slide of the presentation). Obviously, we then also need to make use of the name tents and actually call students by their names, and not only the ones in the first row, but also the ones further in the back (and walking through the classroom — both while presenting as well as when students are working in small groups or on their own, as for example in a think-pair-share setting — is a good strategy in any case, because it breaks things up and gives more students direct access to the instructor). And in the end, students even sometimes felt that the instructors knew their names when they, in fact, did not, so we don’t actually have to know all the names for positive effects to occur. (But I wonder what happens if students switch name tents for fun and the instructor does not notice. Is that going to affect just the two that switched, or more people, since the illusion has been blown?)
In any case, I will definitely be using name tents next time I’m actually in the same physical space as other people. How about you? (Also, don’t forget to include pronouns! Read Laura Guertin’s blogpost on why.)
Cooper, K. M., Haney, B., Krieg, A., & Brownell, S. E. (2017). What’s in a name? The importance of students perceiving that an instructor knows their names in a high-enrollment biology classroom. CBE—Life Sciences Education, 16(1), ar8.
Last week, I wrote about increasing inquiry in lab-based courses and mentioned that it was Kirsty who had inspired me to think about this in a new-to-me way. For several years, Kirsty has been working on developing practical work, and a central part of that has been finding out the types and amount of experience incoming students have with lab work. Knowing this is obviously crucial to adapt labs to what students do and don’t know, and to avoid frustrations on all sides. And she has developed a nifty tool that helps to ask the right questions and then interpret the answers. Excitingly enough, since this is something that will be so useful to so many people, and since, in light of the disruption to pre-university education caused by Covid-19, the slow route of classical publication is not going to help the students who need help most, she has agreed to share it (for the first time ever!) on my blog!
A tool to understand students’ previous experience and adapt your practical courses accordingly
Kirsty Dunnett (2021)
Since March 2020, the Covid-19 pandemic has caused enormous disruption across the globe, including to education at all levels. University education in most places moved online, while the disruption to school students has been more variable, and school students may have missed entire weeks of educational provision without the opportunity to catch up.
From the point of view of practical work in the first year of university science programmes, this may mean that students starting in 2021 have a very different type of prior experience to students in previous years. Regardless of whether students will be in campus labs or performing activities at home, the change in their pre-university experience could lead to unforeseen problems if the tasks set are poorly aligned to what they are prepared for.
Over the past 6 years, I have been running a survey of new physics students at UCL, asking about their prior experience. It consists of 5 questions about the types of practical activities students did as part of their pre-university studies. By knowing students better, it is possible to introduce appropriate – and appropriately advanced – practical work that is aligned to where students are when they arrive at university (Dunnett et al., 2020).
The question posed is: “What is your experience of laboratory work related to Physics?”, and the five types of experience are:
1) Designed, built and conducted own experiments
2) Conducted set practical activities with own method
3) Completed set practical activities with a set method
4) Took data while teacher demonstrated practical work
5) Analysed data provided
For each statement, students select one of three options: ‘Lots’, ‘Some’, ‘None’, which, for analysis, can be assigned numerical values of 2, 1, 0, respectively.
The data on its own can be sufficient for aligning practical provision to students (Dunnett et al., 2020).
More insight can be obtained when the five types of experience are grouped in two separate ways.
1) Whether the students would have been interacting with and manipulating the equipment directly. The first three statements are ‘Active practical work’, while the last two are ‘Passive work’ on the part of the student.
2) Whether the students have had decision making control over their work. The first two statements are where students have ‘Control’, while the last three statements are where students are given ‘Instructions’.
Using the values assigned to the levels of experience, four averages are calculated for each student: ‘Active practical work’, ‘Passive work’; ‘Control’, ‘Instructions’. The number of students with each pair of averages is counted. This splits the data set into one part that considers ‘Practical experience’ (the first pair of averages) and one that considers ‘Decision making experience’ (the second pair of averages). (Two students with the same ‘Practical experience’ averages can have different ‘Decision making experience’ averages; it is convenient to record the number of times each pair of averages occurs in two separate files.)
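The scoring and averaging described above can be sketched in a few lines of Python. This is only an illustrative implementation of the scheme as described, not the actual analysis code, and the example responses are made up:

```python
from collections import Counter

# Map the survey responses to numerical values, as described above.
SCORE = {"Lots": 2, "Some": 1, "None": 0}

def averages(responses):
    """responses: five answers ('Lots'/'Some'/'None') to statements 1-5.
    Returns the four averages: active, passive, control, instructions."""
    s = [SCORE[r] for r in responses]
    active = sum(s[:3]) / 3        # statements 1-3: active practical work
    passive = sum(s[3:]) / 2       # statements 4-5: passive work
    control = sum(s[:2]) / 2       # statements 1-2: decision-making control
    instructions = sum(s[2:]) / 3  # statements 3-5: following instructions
    return active, passive, control, instructions

# Count how often each pair of averages occurs across a (made-up) cohort,
# keeping the two pairs in two separate tallies, as in the text.
students = [
    ["Some", "None", "Lots", "Lots", "Some"],
    ["None", "None", "Lots", "Some", "Lots"],
]
practical = Counter()  # (active, passive) pairs
decision = Counter()   # (control, instructions) pairs
for resp in students:
    a, p, c, i = averages(resp)
    practical[(a, p)] += 1
    decision[(c, i)] += 1
```

The two counters then correspond to the two files mentioned above: one for ‘Practical experience’ and one for ‘Decision making experience’.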
To understand the distribution of the experience types, one can use each average as a co-ordinate – so each pair gives a point on a set of 2D axes – with the radius of the circle determined by the fraction of students in the group who had that pair of averages. Examples are given in the figure.
Prior experience of Physics practical work for students at UCL who had followed an A-level scheme of studies before coming to university. Circle radius corresponds to the fraction of responses with that pair of averages; most common pairs (largest circles, over 10% of students) are labelled with the percentages of students. The two years considered here are students who started in 2019 and in 2020. The Covid-19 pandemic did not cause disruption until March 2020, and students’ prior experience appears largely unaffected.
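A bubble plot like the one in the figure could be produced, for example, with matplotlib. This is only a sketch, assuming the pair counts are stored in a dictionary as computed above; the size scaling factor is arbitrary:

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend, so this runs without a display
import matplotlib.pyplot as plt

def plot_experience(pair_counts, xlabel, ylabel, filename):
    """pair_counts maps (x_average, y_average) -> number of students.
    Each pair becomes a circle whose radius scales with the fraction
    of students who had that pair of averages."""
    total = sum(pair_counts.values())
    xs = [x for (x, _) in pair_counts]
    ys = [y for (_, y) in pair_counts]
    # matplotlib's scatter size 's' is an area in points^2, so for a radius
    # proportional to the fraction, the size must scale with fraction squared.
    sizes = [(count / total) ** 2 * 4000 for count in pair_counts.values()]
    fig, ax = plt.subplots()
    ax.scatter(xs, ys, s=sizes, alpha=0.6)
    ax.set_xlabel(xlabel)
    ax.set_ylabel(ylabel)
    ax.set_xlim(-0.2, 2.2)  # averages range from 0 ('None') to 2 ('Lots')
    ax.set_ylim(-0.2, 2.2)
    fig.savefig(filename)
    plt.close(fig)
```

Called once per pair of averages, e.g. `plot_experience(practical, "Active practical work", "Passive work", "practical.png")`, this gives one panel per grouping.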
With over a year of significant disruption to education and limited catch-up opportunities, the effects of the pandemic on students starting in 2021 may be significant. This is a quick tool that can be used to identify where students are. By rephrasing the statements of the survey to consider what students are being asked to do in their introductory undergraduate practical work – and adding additional statements if necessary – it provides an immediate check of how students’ prior experience lines up with what they will be asked to do in their university studies.
With a small amount of adjustment to the question and statements as relevant, it should be easy to adapt the survey to different disciplines.
At best, it may be possible to actively adjust the activities to students’ needs. At worst, instructors will be aware of where students’ prior experience may mean they are ill-prepared for a particular type of activity, and be able to provide additional support in session. In either case, the student experience and their learning opportunities at university can be improved through acknowledging and investigating the effects of the disruption caused to education by the Covid-19 pandemic.
Dunnett, K., Kristiansson, M. K., Eklund, G., Öström, H., Rydh, A., & Hellberg, F. (2020). Transforming physics laboratory work from ‘cookbook’ to genuine inquiry. https://arxiv.org/abs/2004.12831
My new Twitter friend Kirsty, my old GFI-friend Kjersti and I have been discussing teaching in laboratories. Kirsty recommended an article (well, she did recommend many, but one that I’ve read and since been thinking about) by Buck et al. (2008) on “Characterizing the level of inquiry in the undergraduate laboratory”.
In the article, they present a rubric that I found intriguing: It consists of six different phases of laboratory work, and then assigns five levels ranging from a “confirmation” experiment to “authentic inquiry”, depending on whether or not instruction is given for the different phases. The “confirmation” level, for example, prescribes everything: The problem or question, the theoretical background, which procedures or experimental designs to use, how the results are to be analysed, how the results are to be communicated, and what the conclusions of the experiment should be. For an open inquiry, only the question and theory are provided, and for authentic inquiry, all choices are left to the student.
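To think the structure through, here is a toy Python sketch of the rubric as I read it. The “confirmation”, “open inquiry” and “authentic inquiry” rows follow the article; the names and counts for the two middle levels are my own placeholder assumptions, and the nested left-to-right structure is the one described below:

```python
# The six phases of laboratory work from the Buck et al. (2008) rubric.
PHASES = ["problem/question", "theory/background", "procedures/design",
          "analysis of results", "communication of results", "conclusions"]

# How many phases, counted from the left, are prescribed at each level.
# Only the first and last two entries follow the article's description;
# the two middle levels are illustrative assumptions.
PROVIDED = {
    "confirmation": 6,        # everything prescribed
    "structured inquiry": 5,  # assumption
    "guided inquiry": 4,      # assumption
    "open inquiry": 2,        # question and theory provided
    "authentic inquiry": 0,   # all choices left to the student
}

def classify(provided_phases):
    """Rough classification: count how many phases (from the left) an
    experiment prescribes, then return the level with the smallest
    prescribed-phase count that covers them. Assumes the rubric's nested
    structure: a level prescribes the first n phases, rest left open."""
    n = 0
    for phase in PHASES:
        if phase in provided_phases:
            n += 1
        else:
            break  # nested structure: no gaps allowed
    for level, count in sorted(PROVIDED.items(), key=lambda kv: kv[1]):
        if count >= n:
            return level
    return "confirmation"
```

Writing it down this way makes the limitation discussed below very visible: the nesting means there is simply no way to represent an experiment that, say, prescribes the analysis but leaves the design open.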
The rubric is intended as a tool to classify existing experiments rather than for designing new ones or modifying existing ones, but because that’s my favourite way to think things through, I tried plugging my favourite “melting ice cubes” experiment into the rubric. Had I thought about it a little longer beforehand, I might have noticed that I would only be copying fewer and fewer cells from the left going to the right, but even though it sounds like a silly thing to do in retrospect, it was actually still helpful to go through the exercise.
It also made me realize the implications of Kirsty’s heads-up regarding the rubric: “it assumes independence at early stages cannot be provided without independence at later stages”. Which is obviously a big limitation; one can think of many other ways to use experiments where things like how results are communicated, or even the conclusion, are provided, while earlier steps are left open for the student to decide. Also providing guidance on how to analyse results without prescribing the experimental design might be really interesting! So while I was super excited at first to use this rubric to provide an overview over all the different ways labs can possibly be structured, it is clearly not comprehensive. And a better idea than making a comprehensive rubric would probably be to really think about why instruction for any of the phases should or should not be provided. A little less cook-book, a little more thought here, too! But still a helpful framework to spark thoughts and conversations.
Also, my way of going from one level to the next by simply withholding instruction and information is not the best way to go about it (even though I think it works ok in this case). As the “melting ice cubes” experiment shows unexpected results, it usually organically leads into open inquiry as people tend to start asking “what would happen if…?” questions, which I then encourage them to pursue (but this usually only happens in a second step, after they have already run the experiment “my way” first). This relates well to “secret objectives” (Bartlett and Dunnett, 2019), where a discrepancy appears between what students expect based on previous information and what they then observe in reality (for example in the “melting ice cube” case, students expect to observe one process and find out that another one dominates), and where many jumping-off points exist for further investigation, e.g. the condensation pattern on the cups, or the variation of parameters (what if the ice was forced to the bottom of the cup? what’s the influence of the exact temperatures or the water depth, …?).
Introducing an element of surprise might generally be a good idea to spark interest and inquiry. Huber & Moore (2001) suggest using “discrepant events” (their example is dropping raisins in carbonated drinks, where they first sink to the bottom and then rise as gas bubbles attach to them, only to sink again when the bubbles break upon reaching the surface) to initiate discussions. They then suggest following up the observation of the discrepant event with a “can you think of a way to …?” question (i.e. to make the raisin rise faster to the surface). The “can you think of a way to…?” question is followed by brainstorming of many different ideas. Later, students are asked “can you find a way to make it happen?”, which means that they pick one of their ideas and design and conduct an experiment. Huber & Moore (2001) then suggest a last step, in which students are asked to produce a graphical representation of their results, or some other product, and “defend” it to their peers.
In contrast to how I run my favourite “melting ice cubes” experiment when I am instructing it in real time, I am using a lot of confirmation experiences, for example in my advent calendar “24 days of #KitchenOceanography”. How could they be re-imagined to lead to more investigation and less cook-book-style confirmation, especially when presented on a blog or social media? Ha, you would like to know, wouldn’t you? I’ve started working on that, but it’s not December yet, you will have to wait a little! :)
I’m also quite intrigued by the “product” that students are asked to produce after their experimentation, and by what would make a good type of product to ask for. In the recent iEarth teaching conversations, Torgny has been speaking of “tangible traces of learning” (in quotation marks which makes me think there is definitely more behind that term than I realize, but so far my brief literature search has been unsuccessful). But maybe that’s why I like blogging so much, because it makes me read articles all the way to the end, think a little more deeply about them, and put the thought into semi-cohesive words, thus giving me tangible proof of learning (that I can even google later to remind me what I thought at some point)? Then, maybe everybody should be allowed to find their own kind of product to produce, depending on what works best for them. On the other hand, for the iEarth teaching conversations, I really like the format of one page of text, maximum, because I really have to focus and edit it (not so much space for rambling on as on my blog, but a substantially higher time investment… ;-)). Also I think giving some kind of guidance is helpful, both to avoid students getting spoilt for choice, and to make sure they focus their time and energy on things that are helping the learning outcomes. Cutting videos for example might be a great skill to develop, but it might not be the one you want to develop in your course. Or maybe you do, or maybe the motivational effects of letting them choose are more important, in which case that’s great, too! One thing that we’ve done recently is to ask students to write blog or social media posts instead of classical lab reports and that worked out really well and seems to have motivated them a lot (check out Johanna Knauf’s brilliant comic!!!).
Kirsty also mentioned a second point regarding the Buck et al. (2008) rubric to keep in mind: it is just about what is provided by the teacher, not about the students’ role in all this. That’s an easy trap to fall into, and one that I don’t have any smart ideas about right now. And I am looking forward to discussing more thoughts on this, Kirsty :)
In any case, the rubric made me think about inquiry in labs in a new way, and that’s always a good thing! :)
Bartlett, P. A., & Dunnett, K. (2019). Secret objectives: promoting inquiry and tackling preconceptions in teaching laboratories. arXiv:1905.07267 [physics.ed-ph]
Buck, L. B., Bretz, S. L., & Towns, M. H. (2008). Characterizing the level of inquiry in the undergraduate laboratory. Journal of College Science Teaching, 38(1), 52-58.
Huber, R. A., & Moore, C. J. (2001). A model for extending hands-on science to be inquiry based. School Science and Mathematics, 101(1), 32-41.