Recently, one topic seemed to emerge a lot in conversations I’ve been having: Students cheating, or the fear thereof. Cheating is “easier” when exams are written online and we don’t have students directly under our noses, and to instructors it feels like cheating has increased a lot (and maybe it has!). We’ve discussed all kinds of ways to avoid cheating: Asking questions that have answers that cannot easily be googled (but caution — this tends to make things a lot more difficult than just asking for definitions!). Putting enough time pressure on students so they don’t have time to look up things they don’t know (NOT a fan of that!!!). Using many different exams in parallel where students get assigned exercises randomly so that they would at least have to make sure they are copying from someone trying to answer the same question. But one question that has been on my mind a lot is why do students cheat in the first place, and is there anything we can do as instructors to influence whether they will want to cheat?
I read the chapter “Why students cheat: an exploration of the motivators of student academic dishonesty in higher education” in the Handbook of Academic Integrity by Brimble (2016), and here are some of the points, all backed up by different studies (for references, check that chapter), that stood out to me:
Students are under enormous pressure to succeed academically, yet at the same time they are real people with lives, families, responsibilities, possibly jobs, and more. Whether it’s because of financial considerations, expectations of parents or peers, or other reasons: Cheating might sometimes feel like the only way to survive and finish a course among competing priorities.
Since students are under such pressure to succeed, it is important to them that the playing field is level and others don’t get an unfair and undeserved advantage over them. If students feel like everybody else is cheating, they might feel like they have to cheat in order to keep up. And if the workload is so high, or the content so difficult, that they feel like they cannot possibly manage in other ways, cheating can seem like their only way out.
Students also feel that cheating is a “victimless crime”, so no harm done, really. Helping other students in particular, even if that in fact counts as cheating, isn’t perceived as doing anything wrong. And if courses feel irrelevant to their lives, or if students don’t have a relationship with the instructor, it doesn’t feel like they are doing anything wrong by cheating.
In other cases, students might not even be aware that they are cheating, for example if they are new at university, studying in interdisciplinary programs where norms differ between fields, or in situations that are new to them (like open-book online exams, where it isn’t clear what needs to be cited and what counts as common knowledge).
Students report that the actions of their role models in their academic field, their instructors, are super important in forming an idea of what is right and acceptable. If instructors don’t notice that students cheat, or worse, don’t react to it by reporting and punishing such behavior, this feels almost like encouragement to cheat more, both to the original cheater and to others who observe the situation. Students then rationalize cheating even when they know it’s wrong.
Cheating is also a repeat offense — and the more a student does it, the easier it gets.
So from reading all of that, what can we do as instructors to lower the motivation to cheat?
First: educate & involve
If students don’t know exactly what we define as cheating, they cannot be blamed if they accidentally cheat. It’s our job to help them understand what cheating means in our specific context. We can probably all be a little more explicit about what is acceptable and what is not, especially in situations where there is a grey area. Of course it’s not a fun topic, but we need to be explicit about rules and also what happens when rules aren’t adhered to.
Interestingly, apparently the more involved students are in campus culture, the more they want to protect the institution’s reputation and not cheat. So building a strong environment that includes e.g. regularly communicated honor codes that become part of the culture might be beneficial, as well as helping students identify with the course, the study program, the institution.
Second: prosecute & punish
It’s not enjoyable, but if we notice any cheating, we need to prosecute it and punish it, even though that might come at high costs to us in terms of time, conflict, admin. The literature seems to be really clear on this one: If we let things slide a little, they become acceptable.
Ideally, we would already know what the rules and procedures at our institutions are for when we see something that we suspect is cheating, and who the people are who can support us in dealing with the situation. If not, maybe now is a good time to figure this out.
Third: engage & adapt
Cheating is more likely to occur when there are no, or only weak, instructor-student relationships. Additionally, if students don’t feel engaged in a course, if they don’t receive enough guidance from the instructor, or if a course feels irrelevant or like they aren’t learning anything anyway, students are more likely to cheat. The same holds if a course feels too difficult or too time-consuming, if the workload is too high, or if students feel treated unfairly.
So the lesson here is to build strong relationships and make the courses both engaging and relevant to students. Making sure that the learning outcomes are relevant in the curriculum and for students’ professional development is, of course, always good advice, but especially so in the light of making students want to learn rather than feel like they just need to tick a box (and then do it by cheating, because it really doesn’t matter one way or the other). Explaining what they will be able to do once they meet the learning outcomes (both in terms of what doors the degree opens, but also what they can practically do with the skills they learned) is another common — nevertheless now particularly useful — piece of advice. And then adjusting the level of difficulty and workload to something that is manageable for students — again, good advice in general and now in particular!
Of course, doing all those things is not a guarantee that students won’t cheat. But to me it feels like if I’ve paid attention to all this, I did what I could do, and that then it’s on them (which makes it easier to prosecute? Hopefully?).
What do you think? Any advice on how to deal with cheating, and especially how to prevent it?
Brimble, M. (2016). Why students cheat: an exploration of the motivators of student academic dishonesty in higher education. Handbook of academic integrity, 365.
Two years ago, I was really into daily writing in my bullet journal. I used it to plan out my day, week, month, year, but also to set goals and reflect on how I was doing achieving them. During that year I felt really efficient, accomplished, capable, and it definitely felt related to all that reflection and goal-setting going on. In 2019 I continued, but not with the same regularity, and this year I’m only on page 40 of my 2020 bullet journal. But as I felt frustrated about not moving towards a specific goal a little while ago (and, in fact, effectively moving away from it), I decided that it was time to bring out the bullet journal and write down what I wanted, and why. I instantly felt better and more motivated, and this reminded me of an article that I had wanted to blog about for a while now. Because even though I don’t know if bullet journaling is what really helps me stay on the track I want to be on, or if there are other mechanisms at play, there is good evidence that short, written exercises can help students transform their mindset and achieve more.
What is an academic mind-set, and why does it matter?
What is referred to as the “academic mind-set” is a collection of core beliefs about how capable someone is, and how relevant the effort that person puts into something is for their bigger picture, both within an academic context. For example, students might believe that their intelligence and capabilities are static (“I am just too stupid for maths”), or alternatively that anything can be learned if you just put your mind to it and enough effort into learning it. Or students might believe that they are learning for the teacher or to achieve a certain grade, rather than because they are actually learning something that will improve their own lives (or those of others).
Obviously some of those beliefs are more conducive to learning than others, and therefore the idea of academic mind-set interventions is to change beliefs to help students become more successful in their academic lives, for example by helping them see that intelligence is not fixed but rather a matter of training, i.e. helping them develop a “growth mindset”. Or by helping them recognize that classes — no matter how boring — might be a useful tool to bring them closer to what really matters to them, which gives them a sense of purpose. If those beliefs are successfully addressed through interventions, that can change how students react to challenges that come their way, because they interpret effort, for example, not as a sign of weakness but rather as a sign that learning is taking place. Ideally, this leads to a positive, self-reinforcing circle where they recognize more and more how true those beliefs are because they are becoming more successful academically. And whether this works on a large scale was tested in the article I want to tell you about:
In the article, Paunesku et al. (2015) describe how academic-mind-set interventions can increase academic outcomes even when they are administered online and not specifically targeted to the students’ individual contexts. That way, those interventions become easily applicable everywhere and are not only available to students who are likely advantaged already, e.g. those where the parents and/or school invest extra time and money into their development.
In this case, high school students participated in two 45-minute sessions online (which is really not a lot of time in the big scheme of things!), and both interventions showed a positive impact. And, it turns out, students who received both interventions (in contrast to one intervention and one control treatment) did not show greater benefit than students who received just one intervention (so if we wanted to do an intervention with our class, we wouldn’t even need to commit to twice 45 minutes).
One of the 45-minute sessions was dedicated to “growth-mind-set interventions”, designed to help students recognize that their intelligence can increase when they work hard on difficult tasks, and that the difficulty they are having is an opportunity for growth and not a sign that they are not good enough.
For this intervention, students read an article on how the brain can grow through hard work. Additionally, students did two writing exercises: summarizing the article in their own words, and then writing a letter to a student who felt they weren’t smart enough to do well, advising them on what they could do, based on the article the students had read.
The second 45-minute session dealt with a “sense-of-purpose intervention”. This was done by first asking students to reflect briefly on their vision of a better world, and then helping them reflect on what meaningful goals beyond themselves they could, and would want to, contribute to if they learned a lot in school, and how schoolwork could help them get there. This intervention is designed to help students stay motivated during boring or frustrating times because they are working towards a bigger goal.
Intervening online: should we try it, too?
The interventions discussed in the article were specifically designed to work well online: They targeted only one single core belief each, they took only very little time, and they could be done with standardized materials because they used common stories and science concepts, i.e. they did not require tailoring to the specific course or context. This makes them — or something similar — a viable tool in other instruction, too. Seeing that having two interventions didn’t yield larger gains than just having one, I would tend to do something along the lines of the second intervention: Have students describe their vision of an ideal world, and then write about how studying will let them contribute to making it become a reality.
Granted, this research was done on high school students and is more of a proof of concept than a blueprint that we can copy. But I still think that we could have our students do something similar. There is a lot of research on how applying learning to students’ lives is a really important step in the learning process, and reflecting about how that learning is contributing to their lives is one part of that. And if they grow their academic mind-set and are thus more successful even beyond the specific course we are teaching, how awesome would that be?
Even just thinking about writing about my vision for the world and how my learning of new things can open up ways I can contribute to making that vision become a reality makes me feel motivated and like the world is opening up to all these exciting new possibilities that I can’t wait to get started with. Can you feel it? I think it would be amazing to give this to our students!
Paunesku, D., Walton, G. M., Romero, C., Smith, E. N., Yeager, D. S., & Dweck, C. S. (2015). Mind-set interventions are a scalable treatment for academic underachievement. Psychological science, 26(6), 784-793. [link]
If you’ve been trying to actively engage students in your classes, I am sure you’ve felt at least some level of resistance. Even though we know from literature (e.g. Freeman et al., 2014) that active learning increases student performance, it’s sometimes difficult to convince students that we are asking them to do all the activities for their own good.
But I recently came across an article that I think might be really good to help convince students of the benefits of active learning: Deslauriers et al. (2019) are “measuring actual learning versus feeling of learning in response to being actively engaged in the classroom” in different physics classes. They compare active learning (which they base on best practices in the given subject) with passive instruction (where lectures are given by experienced instructors who have a track record of great student evaluations). Apart from that, both groups were treated equally, and students were randomly assigned to one or the other group.
Figure from Deslauriers et al. (2019), showing a comparison of performance on the test of learning and feeling of learning responses between students taught with a traditional lecture (passive) and students taught actively for the statics class
As expected, the active case led to more learning. But interestingly, despite objectively learning more in the active case, students felt that they learned less than the students in the passive group (which is another example that confirms my conviction that student evaluations are really not a good measure of quality of instruction), and they said they would choose the passive learning case given the choice. One reason might be that students interpret the increased effort that is required in active learning as a sign that they aren’t doing as well. This might have negative effects on their motivation as well as engagement with the material.
So how can we convince students to engage in active learning despite their reluctance? Deslauriers et al. (2019) give a couple of recommendations:
Instructors should, early on in the semester, explicitly explain the value of active learning to students, and point out that increased cognitive effort means that more learning is taking place
Instructors should also have students take some kind of assessment early on, so students get feedback on their actual learning rather than relying only on their perception
Throughout the semester, instructors should use research-based strategies for their teaching
Instructors should regularly remind students to work hard and point out the value of that
Lastly, instructors should ask for frequent student feedback throughout the course (my favourite method here) and respond to the points that come up
I think that showing students data like the one above might be really good to get them to consider that their perceived learning is actually not a good indicator for their actual learning, and convincing them that putting in the extra effort that comes with active learning is helping them learn even though it might not feel like it. I’ve always explicitly talked to students about why I am choosing certain methods, and why I might continue doing that even when they told me they didn’t like it. And I feel that that has always worked pretty well. Have you tried that? What are your experiences?
Deslauriers, L., McCarty, L. S., Miller, K., Callaghan, K., & Kestin, G. (2019). Measuring actual learning versus feeling of learning in response to being actively engaged in the classroom. Proceedings of the National Academy of Sciences, 116(39), 19251-19257. DOI: 10.1073/pnas.1821936116
For many people it has been (and still is!) a huge hassle to quickly figure out ways to teach field courses in a covid-19 world, and I can relate so much! But I’m also getting more and more excited about the possibilities that are opening up when we think about fieldwork in a new way. And as I’ve been researching and teaching workshops for university teaching staff on how to transition field courses into a socially-distanced world, I have seen many exciting examples. In this blogpost, I want to share what I think is important to consider when transitioning field courses online, and some really amazing ways I’ve seen it done in the second half of the post.
Most importantly: Don’t despair, and don’t undermine whatever you end up doing!
Yes, we’d all prefer to be outside for our field courses, not stuck in our home offices, looking at our students’ faces in tiny moving stamps on a video call (at best) or talking into the wide, quiet void (at worst). But there are many ways to bring fieldwork to life even in socially-distanced settings, and even small “interventions” might have a large effect.
There are a couple of things we need to keep in mind:
Students might actually learn better in an unconventional setting
While we like to think that field courses are taught a certain way because they have been optimized for the specific learning outcomes, that might not actually always be the case. In many cases, they just follow a tradition without anyone actually questioning it (and I’ll talk a little about why that is bad further down). And there are studies showing that sometimes virtual learning environments work better than traditional ones: Finkelstein (2005) showed for a direct-current circuit laboratory that students who used simulated equipment outperformed students who went through a conventional lab course, both on a conceptual survey of the domain and in the coordinated tasks of assembling a real circuit and describing how it worked. So why shouldn’t something similar also be true for virtual field courses?
Virtual science is real science, too
Honestly, how many scientists do we know who are in the field every day, or even just most of the time? Very, very few. Most science these days happens virtually, whether data is acquired remotely, scientists are using datasets that other people measured, or they are working with numerical models. Virtual science is real science, too, and therefore, even though it is not the only kind of science, maybe it’s helpful to convey to students that while they are missing out on a fun experience (and certainly on some learning outcomes that we wish they had), they are still able to do real science.
Don’t accidentally undermine your virtual field work
That said, while I think it’s important to be honest about what is lost — the travel to an exciting destination, the experience of being on a research ship, the smell of a certain weather pattern, the feeling of different temperatures and humidities than at home — we need to be super careful to not undermine whatever we end up teaching virtually. It’s maybe not our first choice to do it this way, and we might not have spent as much time preparing it as we would have liked, but constantly telling students what they are missing out on is not going to increase their motivation in a time that is already taxing on everybody.
What are field courses?
When I’m speaking about field courses here, what I envision are the kind of field courses I am familiar with in STEM education: Excursions where biologists investigate an ecosystem, sea practicals where oceanographers spend time on a research ship, trips where engineering students look at structures for coastal protection in situ — basically outdoor teaching.
Following the classification by Fedesco et al. (2020), those would all either fall into the categories of
“collecting primary data/visiting primary sources”, where students enter an authentic, new-to-them research setting in order to do open-ended investigations on data that they generate while in the field, and where learning outcomes (partly — I would argue that many learning outcomes don’t) depend on the results of that data. Students are creating new knowledge and are actively participating in authentic research processes;
“guided discovery of a site”, where the instructor is familiar with the site and plans activities that help students discover things, leading to pre-defined learning outcomes, because students are working with skills and concepts that they learned earlier in the course and apply them to a setting that is known in advance; or maybe
“backstage access”, where students visit a site that people usually don’t have access to, for example a wave power plant (or, when I was teaching the intro to oceanography a looong time ago back in Bergen, a company that makes oceanographic instrumentation, thanks Ailin!).
Learning outcomes in field courses
While field courses might have very specific, subject- and location-specific content, there are many learning outcomes that are common to most field courses, e.g.
observation and perception skills
giving meaning to learning
providing first-hand experience
stimulating interest and motivation
(Compare Larsen et al., 2017, and others)
I think it is super helpful (always, but especially in this case) to look closely at learning outcomes, and to see how interconnected they really are. When I did this for the courses I am currently involved in, it turned out that surprisingly many of the learning outcomes can very easily be met virtually. Anything that has to do with planning of experiments, data analysis, or learning of concepts can be disconnected from practicing observational skills or teamwork. And once they are disconnected, they can be practiced in different exercises which don’t have to rely on the same method of instruction. This makes it much easier to, for example, practice some parts in online discussions, while other parts require students to be outside and observe something themselves. The more things become modular in your mind, the easier it is to implement them.
What motivates students in field courses
When we think about field courses, we usually remember (and envision) them as extremely motivating because typically they are the occasions where students get super excited and want to dig deep and really understand the material. But why is that?
One explanation can be found in the self-determination theory by Deci & Ryan, which describes three basic psychological needs that need to be fulfilled in order for people to feel intrinsic motivation: autonomy, competence, and relatedness.
Autonomy in the context of a field course means that students typically get to decide more when they are out and about doing fieldwork than when they are passively sitting in a lecture, just consuming whatever someone else decided to talk about. They might or might not get to decide what kind of questions they work on, but even if they don’t they are a lot more free in how they structure their work, how they interact with peers during that time, …
Interacting with peers is an important component for the second basic psychological need: Relatedness. In field courses, students and instructors typically spend informal time together: sitting in a bus, waiting for a boat, during the actual fieldwork. This provides opportunities for conversations that might otherwise not happen, to relate to peers and instructors on a more personal level, to also experience instructors as role models.
Lastly, field courses help students feel competence in a way they usually don’t get to in normal university settings. They work long days, potentially under challenging physical conditions, on the kind of question that they feel is more authentic than the exercises they typically do. So this might be one of the few times where they feel competent in the identity they are trying to develop: as a professional in their chosen field.
Barriers to fieldwork
But all the benefits of fieldwork come at a price (Giles et al., 2020). And those costs are not to be underestimated, especially because the barriers to fieldwork are especially felt by disabled students and those from racial and ethnic minorities, all of whom are critically underrepresented in the geosciences anyway.
Barriers include for example
the financial burden of travel / equipment / functional clothing
the emotional burden of dealing with daunting practical aspects of being outdoors (toilet breaks, periods)
the physical burden of accessibility issues (the physically challenging aspects of fieldwork that are satisfying and fun for some can on the other hand completely exclude others)
the logistical and financial burden (and emotional!) of finding a replacement for caring responsibilities
the mental burden of dealing with previous or expected harassment and inappropriate behavior
In the light of all these burdens, there is an urgent need to consider what can be done to make traditional field courses more accessible! And I think having to reinvent so many things now is a great opportunity to make sure those barriers are taken down.
Things to consider when filming for virtual field courses
Virtual field courses often seem to mean “videos of the instructor talking”, whether in their office or in the field. When filming instructional videos, for me the most important points to consider are the viewers’ attention spans, and what might keep a viewer engaged.
As for the attention span, there are many different studies that find that the shorter, the better. Of course it always depends on the video and the material and lots of other things, but the best advice would be to really think about whether anything needs to be longer than 15 minutes in one go (unless it is extremely well produced).
In order to keep viewers engaged, it’s really important not to keep students only in the role of “viewers”, but to engage them more actively. But for the periods where they are “just” watching, it seems helpful to have the instructor visible and make them relatable as an authentic person. Especially having more than one instructor who interact with one another makes videos more engaging and also provides more potential role models for students.
A list of best practices for creating engagement in educational videos is given in Choe et al., 2019; my take-away from that here.
How to motivate students in virtual field courses
Haha, you were hoping for an easy answer here? I think keeping in mind the three basic psychological needs of students that I described in the framework of the self-determination theory (autonomy, competence and relatedness) is extremely important. The better we can find ways to give students opportunities to feel any and all of those, the more motivated they’ll be.
Good-practice examples of virtual field courses
(This section was first called “best-practice”, but then I noticed that I am showing quite a lot of my own work and decided I’d rather take it down a notch ;-))
There are many possible categorizations for the examples I’m showing below, but I went for the continuum from “fully virtual” on the one end to “fully synchronous outside” on the other.
If you are doing a fully virtual field course, no matter whether it is video-based or text based, it’s really helpful to integrate activities that aren’t related to listening or reading, for example:
Working with pictures of real examples
Providing students with a picture of a field site, or some example of a process, or some instrumentation that they’ve just learnt about, and asking them to annotate the picture is a quick and easy activity that also helps you gauge the students’ level of understanding. This works well if you just want students to do something other than listen to you for 15 minutes.
Working with simulations
It’s fascinating how many really nice virtual representations exist online on all kinds of topics once one starts looking!
I was very impressed with this virtual arboretum I came across recently. If you were teaching about plants, this might be a neat tool, for example when you want students to practice drawing plant features.
Investigating a compilation of media
At the recent #FieldWorkFix conference, we were shown this platform for a virtual site assessment which I found super impressive: It’s basically “only” 360° pictures, movies and audio files that are located on a map, so students can do a virtual walk through a park that they would otherwise have visited. But the way this is done, by for example also including a picture of the parking spot and visitors center, makes it feel very real and relatable, and the other pictures, movies and audio files of the park make it possible to do the real assessment.
Another example that I find extremely inspiring is not of a whole site, but it’s a study guide on ID-ing different kinds of rocks. There is a large visual bank of rocks, each combined with the data that students need to make an ID, for example a scale so one can estimate the real size of the rocks, or responses to different acids that give clues about the chemical composition, etc. It seems incredibly comprehensive and like a lot of fun!
Investigating real data
There are of course also many amazing datasets compiled for different regions, for example Svalbox.no for Svalbard, where students can use GIS systems to access many different kinds of data in a geo-referenced frame. Combined with, for example, Google Earth, this can be used for free exploration of many different questions.
Creating the features you want to investigate
Last but not least, if you want students to do some practical work at home in a virtual course, there is always kitchen oceanography, which in this context means hands-on activities that can be done solely with materials that students typically have at home already. It can mean investigating ocean currents in plastic cups with water, ice and black tea (for 24 easy ideas check out my advent calendar), or it can mean using bread or chocolate bars to simulate an investigation into how rocks behave under pressure. Or if you wanted to get fancy, you could even send out materials (e.g. sand samples in small ziplock bags to get a feel for different grain sizes). Doing small hands-on stuff at home can be a great way to change up long days of sitting in front of a computer…
With “remotely controlled kitchen oceanography” we’ve shown how small, hands-on stuff that students do at home can be combined with experiments with more complicated setups that are streamed from my kitchen. We were all in a video conference and could therefore all see each other’s experiments while being able to look really closely at our own. Doing something similar with an instructor in the field should be easy enough (if the network and weather cooperate).
Virtual with “outdoor” aspects
As much fun as kitchen oceanography breaks are, sometimes it might be even better to get students out the door with a purpose.
Observe something related to your field right outside your door
But how to implement it in a virtual field course?
One way to take the pressure off students when doing local fieldwork tasks was shown to us at the #FieldWorkFix conference in this super best practice example that I got to experience myself during a fairly intensive virtual conference day: During the one-hour lunch break, we not only had to eat lunch, but were asked to go outside and follow the wandering cards on here. Those are cards that give you instructions for your short walk: “follow something yellow”, “sit for 2 minutes and observe things around you”, “take a right turn”, that kind of thing (I, of course, didn’t follow the instructions because I wanted to see some water during my lunch break). We were also instructed to take a picture of something related to our field course, upload it to a website, and write a short description (which I did).
And it was a great experience: Within this one hour, I did manage to eat lunch, go outside, take a picture, upload it, and add a description. This let me get some exercise and oxygen, gave me a purpose for my walk, and also proved how easy and fast these kinds of tasks can be if you don’t feel that you need to go to The Best wave watching spot, see the most exciting plant, whatever, but instead just have to find anything related to the course. And it was great to see all the different pictures of participants coming together! This is a way to introduce the local excursions that I will definitely be using in the future to give students that feeling of competence but also a glimpse of one of the typical feelings of fieldwork: That time is precious and every minute and every observation counts. But that a lot can be gained in a really short time, too!
If one of the learning outcomes is to practice observation and classification skills, working with citizen science apps like iNaturalist or the German Naturgucker is great. Both are part of citizen science projects where everybody can upload pictures and other observations (e.g. audio files) that are then classified either by that person directly or through discussions on the platform. Here students contribute to “real science” by collecting data that is relevant for a larger purpose, and they interact with specialists and thus get feedback and feel part of a bigger community. I don’t know anything like that for my own topics, but in biology those are great tools.
One tool that I really want to use in asynchronous outdoor teaching myself are geocaches. Geocaching is a virtual treasure hunt: small “treasures” (often tiny plastic boxes) are hidden and can be found using an app that gives clues about where to look. Geocaches can also be virtual, and are already used for educational purposes, for example as “EarthCaches”. This special form of geocache was developed by the Geological Society of America with the goal of bringing people to geologically interesting sites and teaching them something related to that site. Wouldn’t it be awesome to do something like that for your class?
Geocaches are peer-reviewed before they appear on the app, so a lower-threshold version of the same idea could be QR codes that you hide in the area you want your students to investigate, and have the QR codes link to websites that you can easily adapt with the seasons, or update from year to year, or have full and easy control over. Of course you might need to check that the QR codes are still there before you run the class the next year, but this is fairly low-key if you are working close to home. (Close to home being an important caveat: in fully virtual semesters, students might actually not be where you are. Please consider ways to accommodate them!)
In the last workshop I ran on virtual field courses, a participant told us about a tour guide system his institute had just bought in order to be able to do in-person excursions. The devil is in the detail, of course (how do you make sure all students can see while still maintaining the necessary distance from each other?), but that sounded like a great idea.
In my experience, writing for a different audience than just one overwhelmed instructor is very motivating to students, both because they can use it to show their friends and family what they are doing all day long, and because social media provides the potential for super positive feedback (check out Robert’s tweet about one of my kitchen oceanography experiments that just received its 330th “like” today!). An assignment like that helps on all three psychological basic needs that help foster intrinsic motivation: feeling autonomous, competent and related. So why not give it a shot?
What is your experience with virtual field courses? Do you have best practice examples to add to this? Please share!
Ronny C. Choe, Zorica Scuric, Ethan Eshkol, Sean Cruser, Ava Arndt, Robert Cox, Shannon P. Toma, Casey Shapiro, Marc Levis-Fitzgerald, Greg Barnes, and H. Crosbie (2019). “Student Satisfaction and Learning Outcomes in Asynchronous Online Lecture Videos”, CBE—Life Sciences Education, Vol. 18, No. 4. Published Online: 1 Nov 2019 https://doi.org/10.1187/cbe.18-08-0171
Fedesco, H. N., Cavin, D., Henares, R. (2020). Field-based Learning in Higher Education: Exploring the Benefits and Possibilities. Journal of the Scholarship of Teaching and Learning, Vol. 20, No. 1, April 2020, pp.65-84. doi: 10.14434/josotl.v20i1.24877
Finkelstein, N. D., Adams, W. K., Keller, C. J., Kohl, P. B., Perkins, K. K., Podolefsky, N. S., Reid S., LeMaster. R. (2005). When learning about the real world is better done virtually: A study of substituting computer simulations for laboratory equipment. Physical review special topics – Physics education research 1, 010103
I’m currently preparing a couple of workshops on higher education topics, and of course it is always important to talk about learning outcomes. I had a faint memory of having developed some materials (when still working at ZLL together with one of my all-time favourite colleagues, Timo Lüth) to help instructors work with the modified Bloom’s taxonomy (Anderson & Krathwohl, 2001), and when I looked it up, I realized I had not blogged about it. But since I was surprised at how helpful I still find the materials, here we go! :-)
The idea is that instructors are often told to ask specific types of questions (usually “concept” questions), but that it is really difficult to know what that means and how to do it.
So we developed a decision tree that gives an overview over all different kinds of questions. The decision tree can support you in
constructing questions that provoke specific cognitive processes in your students,
checking what exactly you are asking your students to do when posing existing questions, and
modifying existing questions to better match your purpose.
The nitty-gritty details and the theoretical foundation are written up in Glessmer & Lüth (2016), unfortunately in German. But check out the decision trees below; I think they work pretty well on their own! We have four different versions of the decision tree, which guide you through both the cognitive and the knowledge dimension until you reach the sweet spot you were aiming for. Have fun!
Here is one example, links to the others below.
Abstract decision tree (most helpful for getting familiar with the general concept) [pdf English | pdf German]
Decision tree with example questions (most helpful for constructing, or classifying, or changing questions) [pdf English | pdf German]
Decision tree with example multiple-choice questions (most helpful as inspiration when working with multiple-choice questions) [pdf English | pdf German]
Comparison of our decision tree with “conventional” types of questions (if you want to find out what a “concept question” really is when classified in the Bloom taxonomy) [pdf English | pdf German]
Any comments, feedback, suggestions? Please do get in touch!
Glessmer, M. S., & Lüth, T. (2016). Lernzieltaxonomische Klassifizierung und gezielte Gestaltung von Fragen. Zeitschrift für Hochschulentwicklung, 11 (5) doi: 10.3217/zfhe-11-05/12
Student feedback has become a fixture in higher education. But even though it is important to hear student voices when evaluating teaching and thinking of ways to improve it, students aren’t perfect judges of what type of teaching leads to the most learning, so their feedback should not be taken on board without critical reflection. In fact, there are many studies that investigate specific biases that show up in student evaluations of teaching. So in order to use student feedback to improve teaching (both on the individual level when we consider changing aspects of our classes based on student feedback, as well as at an institutional level when evaluating teachers for personnel decisions), we need to be aware of the biases that student evaluations of teaching come with.
While student satisfaction may contribute to teaching effectiveness, it is not itself teaching effectiveness. Students may be satisfied or dissatisfied with courses for reasons unrelated to learning outcomes – and not in the instructor’s control (e.g., the instructor’s gender).
Boring et al. (2016)
What student evaluations of teaching tell us
In the following, I am not presenting a coherent theory (and if you know of one, please point me to it!); these are snippets of the current literature on student evaluations of teaching, many of which I found referenced in this annotated literature review on student evaluations of teaching by Eva (2018). The aim of my blogpost is not to provide a comprehensive literature review, but rather to point out that there is a huge body of literature out there that teachers and higher-ed administrators should know exists, and that they can draw upon when in doubt (and ideally even when not in doubt ;-)).
6 second videos are enough to predict teacher evaluations
This is quite scary, so I thought it made sense to start out with this study. Ambady and Rosenthal (1993) found that silent videos shorter than 30 seconds, in some cases as short as 6 seconds, significantly predicted global end-of-semester student evaluations of teachers. These are videos that do not even include a sound track. Let this sink in…
Student responses to questions of “effectiveness” do not measure teaching effectiveness
And let’s get this out of the way right away: When students are asked to judge teaching effectiveness, that answer does not measure actual teaching effectiveness.
Stark and Freishtat (2014) give “an evaluation of course evaluations”. They conclude that student evaluations of teaching, though providing valuable information about students’ experiences, do not measure teaching effectiveness. Instead, ratings are even negatively associated with direct measures of teaching effectiveness and are influenced by gender, ethnicity and attractiveness of the instructor.
Uttl et al. (2017) conducted a meta-analysis of faculty’s teaching effectiveness and found that “student evaluation of teaching ratings and student learning are not related”. They state that “institutions focused on student learning and career success may want to abandon [student evaluation of teaching] ratings as a measure of faculty’s teaching effectiveness”.
Students have their own ideas of what constitutes good teaching
Nasser-Abu Alhija (2017) showed that out of five dimensions of teaching (goals to be achieved, long-term student development, teaching methods and characteristics, relationships with students, and assessment), students viewed the assessment dimension as most important and the long-term student development dimension as least important. To students, the grades that instructors assigned and the methods they used to do this were the main aspects in judging good teaching and good instructors. Which is fair enough — after all, good grades help students in the short term — but that’s also not what we usually think of when we think of “good teaching”.
Students learn less from teachers they rate highly
Kornell and Hausman (2016) review recent studies and report that when learning is measured at the end of the respective course, the “best” teachers, i.e. the ones where the students felt that they had learned the most, got the highest ratings (which is congruent with Nasser-Abu Alhija (2017)’s findings of what students value in teaching). But when learning was measured during later courses, i.e. when meaningful deep learning was considered, other teachers seem to have been more effective. Introducing desirable difficulties is thus good for learning, but bad for student ratings.
Appearances can be deceiving
Carpenter et al. (2013) compared a fluent video (instructor standing upright, maintaining eye contact, speaking fluidly without notes) and a disfluent video (instructor slumping, looking away, speaking haltingly with notes). They found that even though the amount of learning that took place when students watched either of the videos wasn’t influenced by the lecturer’s fluency or lack thereof, the disfluent lecturer was rated lower than the fluent lecturer.
The authors note that “Although fluency did not significantly affect test performance in the present study, it is possible that fluent presentations usually accompany high-quality content. Furthermore, disfluent presentations might indirectly impair learning by encouraging mind wandering, reduced class attendance, and a decrease in the perceived importance of the topic.”
Students expect more support from their female professors
When students rate teachers’ effectiveness, they do so based on their assumption of how effective a teacher should be, and it turns out that they have different expectations depending on the gender of their teachers. El-Alayi et al. (2018) found that “female professors experience more work demands and special favour requests, particularly from academically entitled students”. This was true both when male and female faculty reported on their experiences, and when students were asked what their expectations of fictional male and female teachers were.
Boring (2017) found that even when learning outcomes were the same for students in courses taught by male and female teachers, female teachers received worse ratings than male teachers. This got even worse when teachers didn’t act in accordance with the stereotypes associated with their gender.
MacNell et al. (2015) found that believing that an instructor was female (in a study of online teaching where male and female names were sometimes assigned according to the actual gender of the teacher and sometimes not) was sufficient to rate that person lower than an instructor that was believed (correctly or not) to be male.
White male students challenge women of color’s authority, teaching competency, and scholarly expertise, as well as offering subtle and not so subtle threats to their persons and their careers
This title was drawn from the abstract of Pittman (2010)’s article that I unfortunately didn’t have access to, but thought an important enough point to include anyway.
There are very many more studies on race, and especially women of color, in teaching contexts, which all show that they are facing a really unfair uphill battle.
Students will punish a perceived accent
Rubin and Smith (1990) investigated “effects of accent, ethnicity, and lecture topic on undergraduates’ perceptions of nonnative English-speaking teaching assistants” in North America and found that 40% of undergraduates avoid classes taught by nonnative English-speaking teaching assistants, even though the actual accentedness of teaching assistants did not influence student learning outcomes. Nevertheless, students judged teaching assistants they perceived as speaking with a strong accent as poorer teachers.
Similarly, Sanchez and Khan (2016) found that “presence of an instructor accent […] does not impact learning, but does cause learners to rate the instructor as less effective”.
Students will rate minorities differently
Ewing et al. (2003) report that lecturers identified as gay or lesbian received lower teaching ratings than lecturers with undisclosed sexual orientation when, according to other measures, they were performing very well. Poor teaching performance was, however, rated more positively, possibly to avoid discriminating against openly gay or lesbian lecturers.
Students will punish age
Stonebraker and Stone (2015) find that “age does affect teaching effectiveness, at least as perceived by students. Age has a negative impact on student ratings of faculty members that is robust across genders, groups of academic disciplines and types of institutions”. Apparently, when it comes to students, from your mid-40s on, you aren’t an effective teacher any more (unless you are still “hot” and “easy”).
Student evaluations are sensitive to students’ gender and grade expectations
Boring et al. (2016) find that “[student evaluation of teaching] are more sensitive to students’ gender bias and grade expectations than they are to teaching effectiveness.“
What can we learn from student evaluations then?
Pay attention to student comments but understand their limitations. Students typically are not well situated to evaluate pedagogy.
Stark and Freishtat (2014)
Does all of the above mean that student evaluations are biased in so many ways that we can’t actually learn anything from them? I do think that there are things that should not be done on the basis of student evaluations (e.g. rank teacher performance), and I do think that most times, student evaluations of teaching should be taken with a pinch of salt. But there are still ways in which the information gathered is useful.
Even though student satisfaction is not the same as teaching effectiveness, it might still be desirable to know how satisfied students are with specific aspects of a course. And especially open formats like for example the “continue, start, stop” method are great for gaining a new perspective on the classes we teach and potentially gaining fresh ideas of how to change things up.
Tracking one’s own evaluations over time is also helpful, since, apart from aging, other changes are hopefully intentional and can thus tell us something about our own development, at least assuming that different student cohorts evaluate teaching performance in a similar way. Getting student feedback at a later date might also be helpful; sometimes students only realize later which teachers they learnt from the most, or which methods were actually helpful rather than just annoying.
A measure that doesn’t come directly from student evaluations of teaching but that I find very important to track is student success in later courses. Especially when that isn’t measured in a single grade, but when instructors come together and discuss how students are doing in tasks that build on previous courses. Having a well-designed curriculum and a very good idea of what ideas translate from one class to the next is obviously very important.
It is also important to keep in mind that, as Stark and Freishtat (2014) point out, statistical methods are only valid if there are enough responses to actually do statistics on them. So don’t take a few horrible comments to heart while ignoring the whole bunch of people who are gushing about how awesome your teaching is!
P.S.: If you are an administrator or on an evaluation committee and would like to use student evaluations of teaching, the article by Linse (2017) might be helpful. They give specific advice on how to use student evaluations both in decision making as well as when talking to the teachers whose evaluations ended up on your desk.
Ambady, N., & Rosenthal, R. (1993). Half a minute: Predicting teacher evaluations from thin slices of nonverbal behavior and physical attractiveness. Journal of Personality and Social Psychology, 64(3), 431–441. https://doi.org/10.1037/0022-3514.64.3.431
Carpenter, S. K., Wilford, M. M., Kornell, N., & Mullaney, K. M. (2013). Appearances can be deceiving: Instructor fluency increases perceptions of learning without increasing actual learning. Psychonomic Bulletin & Review, 20(6), 1350–1356. https://doi.org/10.3758/s13423-013-0442-z
El-Alayi, A., Hansen-Brown, A. A., & Ceynar, M. (2018). Dancing backward in high heels: Female professors experience more work demands and special favour requests, particularly from academically entitled students. Sex Roles. https://doi.org/10.1007/s11199-017-0872-6
Ewing, V. L., Stukas, A. A. J., & Sheehan, E. P. (2003). Student prejudice against gay male and lesbian lecturers. Journal of Social Psychology, 143(5), 569–579. http://web.csulb.edu/~djorgens/ewing.pdf
Linse, A. R. (2017). Interpreting and using student ratings data: Guidance for faculty serving as administrators and on evaluation committees. Studies in Educational Evaluation, 54, 94- 106. https://doi.org/10.1016/j.stueduc.2016.12.004
MacNell, L., Driscoll, A., & Hunt, A. N. (2015). What’s in a name: Exposing gender bias in student ratings of teaching. Innovative Higher Education, 40(4), 291– 303. https://doi.org/10.1007/s10755-014-9313-4
Pittman, C. T. (2010). Race and Gender Oppression in the Classroom: The Experiences of Women Faculty of Color with White Male Students. Teaching Sociology, 38(3), 183–196. https://doi.org/10.1177/0092055X10370120
Rubin, D. L., & Smith, K. A. (1990). Effects of accent, ethnicity, and lecture topic on undergraduates’ perceptions of nonnative English-speaking teaching assistants. International Journal of Intercultural Relations, 14, 337–353. https://doi.org/10.1016/0147-1767(90)90019-S
Sanchez, C. A., & Khan, S. (2016). Instructor accents in online education and their effect on learning and attitudes. Journal of Computer Assisted Learning, 32, 494–502. https://doi.org/10.1111/jcal.12149
Uttl, B., White, C. A., & Gonzalez, D. W. (2017). Meta-analysis of faculty’s teaching effectiveness: Student evaluation of teaching ratings and student learning are not related. Studies in Educational Evaluation, 54, 22-42. http://dx.doi.org/10.1016/j.stueduc.2016.08.007
Just imagine you had written an article on “Student Satisfaction and Learning Outcomes in Asynchronous Online Lecture Videos”, like Choe et al. (2019) did. What excellent timing to inform teaching decisions all around the world!
Choe et al. compare eight different video styles (all of which can be watched as supplementary material to the article, which is really helpful!), six to replace “normal lectures” and two that complement them, to investigate the influence of video style both on how much students learn from each and on how they feel watching them.
The “normal lecture” videos were different combinations of the lecturer and information on slides/blackboards/tablets/…: a “classic classroom” where the lecturer is filmed in front of a blackboard and a screen, a “weatherman” style in front of a green screen on which the lecture slides are later imposed, a “learning glass” where the lecturer is seen writing on a board, a “pen tablet” where the lecturer can draw on the slides, a “talking head” where the lecturer is superimposed on the slides in a little window, and “slides on/off” where the video switches between showing slides or the lecturer.
And the good news: Turns out that the style you choose for your recorded video lecture doesn’t really affect student learning outcomes very much. Choe et al. did, however, deduce strengths and weaknesses of each of the lecture formats, and from that come up with a list of best practices for student engagement, which I find very helpful. Therein, they give tips for different stages of the video production, related to the roles (lecturer and director of the video), and content covered in the videos, and these are really down-to-earth, practical tips like “cooler temperatures improve speaker comfort”. And of course all the things like “not too much text on slides” and “readable font” are mentioned, too; always a good reminder!
One thing they point out that wasn’t so clear to me before is that it’s important that the lecturer is visible and that they maintain eye contact with the camera. Of course that adds a layer of difficulty to recording lectures — and a lot of awkward feelings and extra work in terms of what to wear and actually having to shower and stuff — but in the big scheme of things, if it creates a better user experience, maybe it’s not such a big sacrifice. Going forward, I’ll definitely keep that in mind!
Especially making the distinction between the roles of “lecturer” and “director” was a really helpful way for me to think about making videos, even though I am playing both roles myself. It reminds me of how many considerations (should) go into a video besides “just” giving the lecture! If you look at the picture above, you’ll see that I’ve started sketching out what I want to be able to show in a future video, and what that means for how many cameras I need, where to place them, and how to orient them (portrait or landscape). When I made the (German) instructions for kitchen oceanography, I filmed myself in portrait mode, thinking of posting them to my Instagram stories, but then ended up editing a landscape video for which I then needed to fill all the awkward space around the portrait movie. It would have been helpful to think about it in these terms beforehand!
Choe et al. even include a “best practice” video in their supplementary material, which I find super helpful. Because even though in some cases it might be feasible to professionally produce lectures in a studio, that’s not what I (or most people frantically producing video lectures these days) have access to. So seeing something that is professionally produced but doesn’t seem to require incredibly complicated technology or fancy editing is reassuring. In fact, even though the lecturer appears to have been filmed in front of a green screen, I think in the end it’s not too dissimilar to what I did in the (German) kitchen oceanography instructions mentioned above: a lecturer on one side, the slides (in portrait format) on the other.
In addition to the six “lecture” videos, there was a “demo” video where the lecturer showed a simple demonstration, and an “interview” video, where the lecturer was answering questions that were shown on a screen (so no second person there). Those obviously can’t replace a traditional lecture, but can be very useful for specific learning outcomes!
The “demo” type video is the one I am currently most interested in, since that’s where I can best contribute my expertise in a niche where other people appreciate getting some input. Also, according to Choe et al., students found that type of video engaging, entertaining, and of high learning value. All the more reason for me to do a couple more demo videos over the next couple of days, I’m already on it!
Ronny C. Choe, Zorica Scuric, Ethan Eshkol, Sean Cruser, Ava Arndt, Robert Cox, Shannon P. Toma, Casey Shapiro, Marc Levis-Fitzgerald, Greg Barnes, and H. Crosbie (2019). “Student Satisfaction and Learning Outcomes in Asynchronous Online Lecture Videos”, CBE—Life Sciences Education, Vol. 18, No. 4. Published Online: 1 Nov 2019 https://doi.org/10.1187/cbe.18-08-0171
Reaching non-specialist audiences and engaging them with science at an affordable seaside campsite
The idea behind the study is that while science days, science festivals, and those kinds of events are great opportunities for the interested public to engage with cutting-edge research or other interesting science, the problem is that they will only engage the interested public. As long as people have to choose to specifically enter a space (whether physically or on the internet) where scicomm happens, doing so actually needs to be made a priority. A priority in how time and money are spent, and in competition with many other things that might be a lot more important to people. So how can people be reached without relying on them to make the effort to enter a scicomm space?
In this study, the scicomm topic was “insects as a sustainable food source”. The way they did it was a pop-up kitchen in the middle of a campsite where they offered a menu made from insects as well as information and conversations on that topic. And here is what they recommend:
In the study, an affordable campsite near the seaside was chosen in order to reach audiences who might not make an active effort to engage with science otherwise. The assumption that those audiences are more likely to be found on affordable, local campsites than in high-end holiday resorts is grounded in the literature.
(Also, a campsite can provide infrastructure that will make your experience as scicommer a lot more enjoyable. Parking spots, toilets, food, all within easy reach…)
People have time
In the study, Woolman found that since people were on vacation and had time, engagement wasn’t just the sadly all-too-common “grab and go” of scicomm giveaways, but that extended engagement (longer than 10 minutes) could easily take place. This is important because other scicomm activities that take place in spaces where people just happen to be are often in very busy places like shopping malls or even train stations, where a lot of people pass through, but where engagement is made difficult because people are there for a specific purpose which they want to get done before going some place else. At a campsite, on the other hand, people have a lot of time on their hands and are often grateful for some kind of unexpected stimulation or the opportunity to have the kids kept busy for a couple of minutes.
School holidays or a weekend in November?
Depending on who your target audience is and what type of engagement you are going for, it might be a good idea to do your scicomm activity during the busy times. During the summer school holidays, for example, campsites are typically at their busiest, with all sorts of people. If you were to target families with school-aged kids, this would be the time to do your activity! But of course it’s also possible that your target audience are pensioners; then choosing a weekend or even a week day outside of the school holidays might be a better idea! It might not be as busy in total numbers, but the density of your target audience might be relatively higher.
So what now?
I really like the idea of doing pop-up scicomm at campsites. At my friend Sara‘s windsurfing school, this was happening when both she and other Kiel Science Outreach Campus (KiSOC; I was the project’s scientific coordinator at that time) PhD students did scicomm on their projects on the beach (in the picture you see a 3D movie on water striders being test-watched). Another project was related to sunscreen — very appropriate to do on the beach! That experience made me want to do more scicomm, both at that specific place and more generally in similar settings, and I’ve been thinking about it for two reasons.
As you know, my pet project is wave watching. And what better place to do it than on a beach? And that beach specifically is great because it offers a variety of features that influence a wave field (check out a short wave watching movie from that beach here), plus I enjoy hanging out there (which I think is a really important factor when planning a scicomm activity — it needs to be enjoyable! If it’s not, that will show and put people off your science, no matter how awesome it might be).
I’ve been thinking about offering wave watching excursions there and actually had some scheduled this spring and summer, where I would meet up with people, walk to different spots on the beach, and explain what physics they can observe there. Well, there is always next year, or my wave watching Instagram @fascinocean_kiel :-)
GEO-Tag der Natur
I’m the programme manager of the German “GEO-Tag der Natur” festival on biodiversity. As part of my job I’ve been thinking about engaging different audiences through new formats, and this seems like a great idea. For GEO-Tag der Natur, there are typically excursions into interesting biotopes where experts on that type of biotope explain the animals and plants that can be found there. Usually we advertise excursions in spots that are especially interesting in terms of biodiversity, but even just a regular beach, forest, or the nature around wherever a campsite is located is super interesting, and there is so much to discover anywhere! So using campsites as home bases for our excursions is definitely something that I want to try when it’s possible again. It’s also attractive for the campsites themselves to be able to offer these kinds of events, so it’s a win-win!
What are your thoughts on doing scicomm on a campsite? Let me know!
Woolman, A. (2020) ‘Reaching non-specialist audiences and engaging them with science at an affordable seaside campsite’. Research for All, 4 (1): 6–15. DOI https://doi.org/10.18546/RFA.04.1.02