Summaries of two more inspiring articles recommended by my colleagues: On educational assessment (Hager & Butler, 1996) and on variables associated with achievement in higher ed (Schneider & Preckel, 2017)

Following the call to share inspiring articles, here are two more that I’m summarising below. See the three previous ones (on assessment (Wiliam, 2011), workload (D’Eon & Yasinian, 2021), and quality (Harvey & Stensaker, 2008)) here. And please keep sending me articles that inspire you, I really enjoy reading and summarising them! :)

First up, recommended by Jenny:

“Two models of educational assessment” by Hager & Butler (1996)

Hager & Butler (1996) look at educational assessment and how it has changed over time: from a scientific measurement model towards a judgmental model, and recommend using the latter.

The scientific measurement model aimed at summing up performance in one number (or letter) grade, based on criteria and using statistics, to be “maximally objective”, valid and reliable. Examples of this are IQ scores or multiple-choice exams. The tendency with those measures is to make the results about people rather than about their performance on that one specific task of taking the test. But it has been shown over and over that those scores are not a good predictor of future performance in, for example, a job. Not surprisingly, because what is tested is not what is actually required in the job: the tests cover a very limited context, usually knowledge rather than skills, often using methods that are not suitable, and with no regard to who the test-takers actually are or what their attitudes are like.

The newer judgemental model, on the other hand, is about assessing the competencies that are required in the job, relying on competency models that describe what those competencies are and what it would look like if someone had them. This is how for example problem-based learning or portfolios are typically evaluated. In this model, rather than using only one fixed dataset to come to a conclusion about performance, it is possible to gather more data when a case is unclear, and to come into dialogue with the person being assessed. This dialogue makes it possible to integrate learning and assessment more closely.

Hager & Butler (1996) suggest a model of assessing professional development with three levels:

1. Knowledge, attitudes, and skills

This level can be assessed following the scientific measurement model and consists, for example, of multiple-choice tests of knowledge and cognitive skill, subject-specific problem-solving skills, and observation of skills in practice settings. So far, so good, but when professionals, for example, medical doctors, are asked to judge colleagues, this is not what their focus is on. So knowledge, attitudes and skills are necessary, but not enough.

2. Performance in simulations

Performance is context dependent, so on this level, artificial simulations of real-world contexts are created so that the performance can be evaluated on a macro level that depends on bringing together knowledge and skills from several domains. Usually, this is done using checklists. But again, only passing this level is not enough.

3. Personal competence in the practice domain

On this level, people are observed “on the job”. In contrast to the previous two levels, evaluation now happens without formalized checklists and criteria. This makes it very much dependent on who the judge is, but judges can and should learn from each other to get rid of personal biases: “Objectivity is the intelligent learned use of subjectivity, not a denial of it. In the judgmental model of assessment it is the assessor who delivers objectivity, not the data.”

Comparing the underlying assumptions of the intelligence approach/scientific measurement model vs the cognitive approach/judgemental model, Hager & Butler (1996) write: “Whereas the intelligence approach encourages selection of people to fit prespecified jobs, the cognitive approach enables us to view the workplace as a set of opportunities for people to learn and grow.” And isn’t the learning and growing why we are in the job as educators in the first place?

When trying to find this article online, I came across the response by Martin (1997), which supports the original article and warns against falling for the McNamara fallacy: “making the measurable important rather than the important measurable”. I had never heard it put in those terms, but will keep it in mind as a very nice way of making a very important point!

And now on to article no 2, recommended by Sandra:

“Variables associated with achievement in higher education: A systematic review of meta-analyses” by Schneider & Preckel (2017)

Schneider & Preckel (2017) is a review of meta-analyses and a great start for when you want to know “what works” in higher education. I wrote a longer summary here, and here is my summary of that summary, mostly based on their “10 cornerstone findings”:

There is A LOT of evidence of what works and what doesn’t in higher education. What comes out is that it doesn’t matter so much what you choose to do, but it does matter that, whatever you do, you do it well. Ideally as a combination of teacher-centred and student-centred approaches, and with as much attention to assessment as to the rest of teaching.

Additionally, there are many small elements that, combined, have a large effect on student learning. In a nutshell: Create a climate in which questions and discussions are encouraged and valued, give frequent and focussed feedback, make the learning goals clear, and relate course content to students’ lives, goals, and dreams.

Also be aware that there are a lot of biases and obstacles depending on who the students are and their prior trajectories, and that good study strategies can help any student succeed (and study strategies are best taught within the disciplinary context, not in separate courses).

It is totally worth reading the original article!

What other articles are inspiring you right now? Let me know and I’ll include them in the list!


Hager, P., and Butler, J. (1996). Two models of educational assessment. Assessment & Evaluation in Higher Education, 21(4), 367–378. https://doi.org/10.1080/0260293960210407

Martin, S. (1997). Two models of educational assessment: A response from initial teacher education: If the cap fits… Assessment & Evaluation in Higher Education, 22(3), 337–343. https://doi.org/10.1080/0260293970220307

Schneider, M., & Preckel, F. (2017). Variables associated with achievement in higher education: A systematic review of meta-analyses. Psychological Bulletin, 143(6), 565–600.

Summaries of three inspiring articles on assessment (Wiliam, 2011), workload (D’Eon & Yasinian, 2021), and quality (Harvey & Stensaker, 2008)

In my group of academic development colleagues at LTH, we just opened up an internal call for the one (or two, or three, or more) articles that are most influential for our current thinking. And I want to make sure I read them all, so here are summaries of the first (and, as of just now, only) three. I’ll add to the list if and when I receive more… And I can say that they are all inspiring in very different ways, but now I am done for today! :-D

First up, sent by Torgny:

“What is assessment for learning?” by Wiliam (2011)

Wiliam (2011) is an overview of how the term “assessment for learning” developed and changed meaning over time, and relates this to classroom practice. Here is my short summary:

In contrast to assessment that is done after the learning process, “assessment for learning” and “formative assessment” are terms used for processes that guide learning towards intended goals during the learning process. These concepts developed because it was recognised that the same instruction does not lead to the same results independent of who the learners are, and that failure to learn is not necessarily the learners’ fault. Bloom stated at some point that learners would be able to learn better if they received “feedback” and “correctives”.

“Feedback” as a term is not very useful, however: Feedback is only the information about a gap between a desired and an actual state; it does not, in itself, do anything to close that gap. Specifically, it does not necessarily contain any information on how the learner can improve their learning. So in that sense, giving feedback alone is pointless.

There has been a whole lot of research on assessment and classroom learning that is summarized next. Basically: There are a bunch of conflicting results. One thought that I found interesting, though, is that there are basically four ways to respond to feedback: change behaviour to reach the goal, modify the goal, abandon the goal, or reject the feedback. Only two responses, increasing effort and finding a bigger goal, will actually achieve more learning than if no feedback was given. So how do we get students to pick one of these two choices? Wiliam (2011) quotes Shute’s (2008) guidelines to enhance learning (make feedback specific, give clear guidance for how to improve, make it about the task, not the learner, make it as simple as possible while keeping it as complex as necessary, …) and on the timing (for procedural learning, give feedback immediately).

After more elaboration on what different authors mean by formative assessment, Wiliam (2011) refers back to the definition from 2009:

“Practice in a classroom is formative to the extent that evidence about student achievement is elicited, interpreted, and used by teachers, learners, or their peers, to make decisions about the next steps in instruction that are likely to be better, or better founded, than the decisions they would have taken in the absence of the evidence that was elicited. (Black & Wiliam, 2009, p. 9)”

Two features come out as especially important:

  1. The assessment should not just show a gap between what should be and what is, but also contain information that can help improve instruction
  2. The learner must act on it in a way that supports their learning: Students can take one of two pathways: The “growth pathway” (with the goal to learn more), or the “well-being pathway” (with the goal to minimize harm, which can include more learning, but can also mean disengaging).

If we can design assessment in such a way to meet both criteria, it is likely to lead to more student engagement and better learning outcomes.

The next two articles were sent by Per:

“Student work: a re-conceptualization based on prior research on student workload and Newtonian concepts around physical work” by D’Eon & Yasinian (2021)

The article by D’Eon & Yasinian (2021) begins by making a very strong case for the importance of considering student workload, by giving an overview of all the negative effects an excessive workload has on students: It not only hinders their learning and kills their motivation, it also makes plagiarism and other forms of cheating more likely and affects students’ social life and health. But there is no good definition of what workload actually means: sometimes it’s defined in “objective” terms, for example as time-on-task (but who records and reports, and how “on task” does something have to be to count?), other times in “subjective” terms, for example perceived effort. In any case, gut feeling tells us that there is a link between how much effort students put into something and how well they succeed. D’Eon & Yasinian (2021) suggest a link, based on Newton:

SW = Eff × Ach

where SW is student work (not workload, to point out that it is the work a student actually does, not what is imposed on them by a course), Eff is effort, and Ach is achievement. In this equation, effort is seen as analogous to the force that acts on the course requirements, the mass, to move them to a given achievement, over a given distance.

But what exactly is student academic effort? D’Eon & Yasinian (2021) identify four different, but interrelated, domains: cognitive (how much cognitive capacity is required? This also depends on prior knowledge, for example), physical (being awake, getting to and from campus, buying required textbooks, …), psychological (organising themselves, keeping up motivation, …), and social (working in groups etc). Each of the four dimensions obviously relies on resources that are unequally distributed between students, for example cognitive ability, health, high intrinsic motivation, a good network. Institutions can help make resources more equally available to all students by, for example, providing materials or coaching for free to everybody, or making deadlines flexible so they can be adapted to work schedules. This model is really helpful to show that considering all four domains, and what resources we could provide students with so their effort ends up in the dimension we want it to end up in (probably the cognitive one), is directly relevant to student work, and that the course load (as specified in the syllabus) and course demands (as experienced by students) are two very different beasts. And the way this is presented speaks to my physical oceanographer’s heart :)
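For the physics-minded, the analogy can be laid side by side with the Newtonian definition of mechanical work. The symbol mapping below is my own sketch of how I read the authors’ analogy, not notation taken from the article:

```latex
% Newtonian mechanical work: a force F moving a mass through a distance d
W = F \cdot d

% D'Eon & Yasinian's (2021) student-work analogue:
% effort (Eff) plays the role of the force F,
% achievement (Ach) plays the role of the distance d,
% and the course requirements play the role of the mass being moved:
SW = \mathrm{Eff} \times \mathrm{Ach}
```

Reading it this way, heavier course requirements (more “mass”) demand more effort to reach the same achievement, which is exactly the intuition the authors are trying to formalise.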

So now on to article no 3:

“Quality culture: Understandings, boundaries and linkages” by Harvey & Stensaker (2008)

I summarised this article before (good thing I have a blog as an external memory), but here is my summary of my summary: It is really helpful to clarify what we actually mean when we talk about “quality”, because it can mean very different things to very different people! For example, is striving for quality about being exceptional in what we do? Or is it that we are consistently meeting pre-defined measures? Or that outcomes meet a purpose? Or do we mean value for money? Or are we talking about improvement over time?

When then wanting to improve quality of whatever flavour, it is important to consider the quality culture in which this is supposed to happen. A 2×2 matrix of strong-to-weak group control and strong-to-weak external rules leads to four types of quality culture:

  • responsive (strong group control, strong external rules): impulses from the outside are mostly taken up as opportunities, and only sometimes perceived as imposed and constricting ownership and degrees of freedom;
  • reactive (weak group control, strong external rules): measures of quality are imposed from the outside and dealt with when necessary, but not in an integrated way;
  • regenerative (strong group control, weak external rules): external expectations are incorporated as far as they are perceived useful to further the internal agenda;
  • reproductive (weak group control, weak external rules): the status quo is maintained with as little influence from the outside as possible.

Quality assurance strategies are, of course, most likely to be successful if they work with the existing quality culture.

So those are the three “key articles” so far. Any comments? What other article should we include in the list? Please let me know! :-)


D’Eon, M., & Yasinian, M. (2021). Student work: a re-conceptualization based on prior research on student workload and Newtonian concepts around physical work. Higher Education Research & Development. https://doi.org/10.1080/07294360.2021.1945543

Harvey, L., & Stensaker, B. (2008). Quality culture: Understandings, boundaries and linkages. European Journal of Education, 43(4), 427–442.

Wiliam, D. (2011). What is assessment for learning? Studies in Educational Evaluation, 37(1), 3–14.

Trauma-aware teaching: Listening to Karen Costa on the “tea for teaching” podcast

Today was the first time in half a year or so that I listened to a podcast (other than the Academic Imperfectionist, where I make sure not to miss an episode, and, occasionally, the Amazing If Squiggly Careers, which I also find super helpful in navigating my own squiggly career). I will take that as a good sign of slowly adjusting to life in a new country, and slowly starting to have the capacity to listen to things on my walks again, instead of just being happy about low-input time (well, except for the wave watching, of course!) to process all the new things around me.

The episode I listened to today was on trauma-aware pedagogy on one of my other favourite podcasts, tea for teaching, with guest Karen Costa, and it has given me so much to think about! (And, I actually just registered for a workshop on Climate Action Pedagogy she’s leading in August because she was just so inspiring and I want more of that!)

The conversation is centred around student disengagement, which many teachers report having noticed since the beginning of the pandemic, and three main questions around it: Why is it happening, what about other disengagement, and what can we do?

First to the why of student disengagement: It’s basically a survival response to all the stresses of the pandemic (and other stressors that are also present, like for example climate change!): Students focus on getting out of the figurative burning building rather than on whatever teaching we might think they should be engaging with inside that building. And this flight response influences students’ executive functions: their decision making, time management, concentration, … This is just a biological reaction that we need to be aware of; there is not much we can do about it. Although, yesterday I listened to an Academic Imperfectionist episode where she talks about how we can strap our anxieties into the passenger seat and keep on driving, and just pick dedicated times to actually confront the anxieties, so there are strategies that we, and students, could employ to stay more focussed on what we want to focus on. BUT! Student blaming is not the point, and it leads nicely to the second big theme:

Why do we always talk about student disengagement, not about faculty disengagement, or dean disengagement, or politics disengagement? We are more than two years into this pandemic, where are the strategies from deans, universities, governments to help us all deal with it in a constructive way?

So what can we do?

One super important message in this episode is to not sacrifice yourself, no matter how much good you want to do for your students. Even though offering more flexibility is a great way to make life easier for students, it still needs to stay manageable and not lead to burnout; that would not serve the students, either. And just because some other teachers might be able to do more to accommodate students does not mean that you have to, too: everybody has different resources at their disposal and lives different lives. Being transparent to students about what you can do and where your boundaries are is really important. And transparency is also important in communicating “upwards”: This is what I would like to do, this is what I can do, this is what I need more resources for. We all do have choices of where we allocate time and money, and if we all communicate what we need, maybe “those up there” can and will take it into consideration more.

The perhaps most thought-provoking prompt in this episode for me was that this pandemic is a symptom of climate change, and we need to be prepared for what is to come: Things are very unlikely to become easier in the future, so how can I best prepare for what will be needed then?

For me personally, a lot of my work is currently focussed on creating environments in which students feel like they belong, and where they can concentrate on learning and are not distracted by stereotype threats, harassment, etc. I believe that this is super important, but one aspect that I think I should focus on more than I’ve done so far is how those efforts would translate if we had to go back to virtual teaching, possibly on short notice. Would my workshop concepts still work, or can I figure out a virtual or hybrid version in parallel to the in-person ones we are currently planning? And in addition to thinking about obstacles to belonging (like harassment etc), can we strengthen belonging even in virtual settings? Of course there are strategies that I have employed over the last two years that focus on creating community online, but one aspect that I hadn’t really thought about was that belonging, in addition to feeling part of a peer group in the subject, also has the component of feeling connected to the content, the discipline, the books etc.

And I have definitely experienced that aspect myself: When I moved out of oceanography research almost 10 years ago, and lost the daily direct access to oceanographers in coffee rooms and at conferences, my #KitchenOceanography and #WaveWatching work became more important to me to keep my identity as oceanographer alive while I moved into academic development work. So I will be thinking more about how both #KitchenOceanography and #WaveWatching can be useful not only to connect disciplinary content to everyday experiences, but also to strengthen a sense of belonging with the discipline and identification with the subject. The often-repeated message of this episode, “small is all”, is a good motto to live by as I take on this new task!

So go, listen to the episode and let me know: what is your next small step?

P.S.: One thing that I’ve never done before, and that feels slightly weird, is to feature specific scientists whose work I find inspiring. But when I wanted to tweet this blog post about the “tea for teaching” podcast episode, I came across the Twitter profile @teaforteaching, which, as far as I can see, is not related to the podcast at all, but belongs to Katie Bateman, and her research is the PERFECT continuation of the things I was pondering this morning. So here we go: check her out, her work is inspiring! She recently published on how playdough can help students learn spatial skills in geoscience education (which I would put under the wide umbrella of #KitchenOceanography; check it out: Bateman et al., 2022), and also on what we can learn from this pandemic for future disruptions: It’s not only about how individuals adapt, it’s also about what kind of support network they have and what their caring responsibilities are; stress levels are highest for people in non-permanent positions (surprise!); and different people need different kinds of support in terms of good learning management systems, support from academic developers, and so on. And: the more time people have to prepare, the better, so universities should make decisions as early as possible, to give people security for their planning. Read more here: Bateman et al., 2022.


Bateman, K. M., Ham, J., Barshi, N., Tikoff, B., & Shipley, T. F. (2022). Scaffolding geology content and spatial skills with playdough modeling in the field and classroom. Journal of Geoscience Education, 1-15. https://www.tandfonline.com/doi/epub/10.1080/10899995.2022.2071082

Bateman, K. M., Altermatt, E., Egger, A. E., Iverson, E., Manduca, C., Riggs, E. M., … & Shipley, T. F. (2022). Learning from the COVID-19 Pandemic: How Faculty Experiences Can Prepare Us for Future System-Wide Disruption. GSA Today. https://digitalcommons.cwu.edu/cgi/viewcontent.cgi?article=1170&context=geological_sciences

Eight criteria for authentic assessment; my takeaways from Ashford-Rowe, Herrington & Brown (2014)

“Authentic assessment” is a bit of a buzzword these days. Posing assessment tasks that resemble problems that students would encounter in the real world later on sounds like a great idea. It would make learning, even learning “for the test”, so much more relevant and motivating, and it would prepare students so much better for their lives after university. So far, so good. But what does authentic assessment actually mean when we try to design it, and is it really always desirable in its fullest sense?

Ashford-Rowe, Herrington & Brown (2014) reviewed the literature and, through a process of discussion and field testing, came up with eight critical elements of authentic assessment, which I am listing as the headers below, together with my thoughts on what it would mean to implement them.

1. To what extent does the assessment activity challenge the student?

For assessment to be authentic, it needs to mirror real-world problems that are not solved by just reproducing things that students learned by heart, but by creating new(-to-the-student) solutions, including analysing the task in order to choose the relevant skills and knowledge to even approach the task with.

This clearly sounds great, but as an academic developer, my mind goes directly to how this is aligned with both learning outcomes and learning activities. My fear would be that it is easy to make the assessment way too challenging if the focus is on authentic assessment alone and students are not practising on very similar tasks before already.

2. Is a performance, or product, required as a final assessment outcome?

Real-world problems require actual solutions, often in form of a product that addresses a certain need. In an assessment context, we need to balance the ideal of a functional product that provides a solution to a problem with wanting to check whether specific skills have been acquired. I.e. what would happen if students found a solution that perfectly solved the problem they were tasked with, but did it in some other way without demonstrating the skills we had planned on assessing? Would that be ok, or do we need to provide boundary conditions that make it necessary or explicit that students are to use a specific skill in their solutions?

This facet of authentic assessment strongly implies that assessment cannot happen in just a few hours in a written closed-book exam, but requires more time and likely different formats (even though many of the authentic products in my own job actually are written pieces).

3. Does the assessment activity require that transfer of learning has occurred, by means of demonstration of skill?

Transfer is among the highest-level learning outcomes in both Bloom’s and the SOLO taxonomy, and definitely something that students should learn — and we should assess — at some point. How far the transfer should be, i.e. whether skills and knowledge really need to be applied in a completely different domain, or just on a different example, and where the boundary between those two is, is open for debate, though. And we can transfer skills that we learned in class to different contexts, or we can bring skills that we learned elsewhere into the context we focussed on in class. But again, we need to keep in mind that we should only be assessing things that students actually had the chance to learn in our courses (or, arguably, in required courses before).

4. Does the assessment activity require that metacognition is demonstrated?

Being able to self-assess and self-direct learning, and to put it into the bigger context of the real world and one’s own goals, is obviously an important skill to acquire. If assessment is to mirror the demands of real-world tasks after university, reflecting on one’s own performance might be a useful thing to include. But this, again, is something that clearly needs practice and formative feedback before it can be used in assessment.

5. Does the assessment require a product or performance that could be recognised as authentic by a client or stakeholder? (accuracy)

This point I find really interesting. How close is the assessment task to a real-world problem that people would actually encounter outside of the classroom setting? Before reading this article, that would have been my main criterion for what “authentic assessment” means.

6. Is fidelity required in the assessment environment? And the assessment tools (actual or simulated)?

Continuing the thought above: If we have a task that students might encounter in the real world, do we also provide them with the conditions they would encounter it in? For an authentic assessment situation, we should be putting students in an authentic(-ish) environment, where they have access to the same tools, the same impressions of their surroundings, the same sources of information as one would have if one was confronted with the same problem in the real world. So we need to consider how/if we could even justify for example not letting students use internet searches or conversations with colleagues when working on the task!

7. Does the assessment activity require discussion and feedback?

This follows nicely from my thoughts on the previous point. In the real world, we would discuss and receive feedback while we are working on solving a problem. If our assessment is to be authentic, this also needs to happen! But do we also assess this aspect (for example in the reflection that students do in order to demonstrate metacognition, or by assessing the quality of the discussion and feedback they give to each other), or do we require it without actually assessing it, or do we just create conditions in which it is possible and beneficial to discuss and give and receive feedback, without checking whether it actually occurred? Also, who should the students discuss with and get feedback from: us, their peers, actual authentic stakeholders? Literature shows that peer feedback is of comparable quality to teacher feedback, and that students learn both from giving and receiving, so maybe including a peer-feedback loop is a good idea. Real stakeholders would surely be motivating, but depending on the context that might be a little difficult to arrange.

8. Does the assessment activity require that students collaborate?

Collaboration is of critical importance in the real world. (How) do we include it in assessment?

I really enjoyed thinking through these eight critical elements of authentic assessment, and it definitely broadened my understanding of authentic assessment considerably, both in terms of its potential and the difficulties to implement it. What are your thoughts?


Ashford-Rowe, K., Herrington, J., & Brown, C. (2014). Establishing the critical elements that determine authentic assessment. Assessment & Evaluation in Higher Education, 39(2), 205-222.

Dealing with collective tragedy as a teacher, according to my reading of Huston & DiPetro (2007)

When a “collective tragedy” happens, for example terrorism, natural catastrophes like hurricanes and floods, or pandemics, it is difficult to decide whether, and how, to act as a teacher. Should we acknowledge the event, and if so to what extent and in what form, or is it better to stick to business as usual to give students a sense of normalcy and stability? Collective tragedies often involve not just dealing with traumatic losses and experiences, which would be difficult to talk about in itself, but often also relate to controversial topics of politics, or culture, or religion, which might make us hesitant to bring the topic up in class.

Looking at how students experienced their teachers’ reactions to 9/11, Huston & DiPietro (2007) recommend to “do something, just about anything”.

What type of events “deserve” a reaction?

Criteria that Huston & DiPietro (2007) identify for what should be addressed in the classroom include magnitude and scale of events (so for example national events that are covered broadly in the media), how close something happened to the campus (directly on campus, same city), how likely it is to impact students (or their friends and families) directly (what they don’t mention but I am thinking about here are also international students that are affected by events in other countries!), and how likely students are to identify with the victims (maybe due to victims being the same age or being involved in the same practice of a sport or subject).

When in doubt whether or not to react to something, Huston & DiPietro (2007) recommend looking out for “situational cues”: Do students seem affected by an event, which might become visible for example through student-led events on the campus, like demonstrations or vigils? Is the event so much on my mind, as the teacher, that I have difficulties shaking it off when I start class? Do students bring it up, or do I overhear them talking about the topic? Then it is probably worth addressing.

What kinds of reactions are helpful?

After 9/11, 11% of the faculty who responded to a specific study said they did not speak of the attacks at all. Among the faculty who did react, reactions ranged from low-effort, small-scale actions, like a minute of silence, to large efforts like making the attacks the topic of (parts of) the course. Generally, there was a lot of confusion, both among instructors who had reacted and those who hadn’t, about their role as teachers, what that meant for what they should do, and, in hindsight, what they should have done.

While it is unclear what effects these reactions, or the lack thereof, actually had on students, this is what students report:

  • Completely ignoring the event was perceived as the instructor not taking the situation seriously, not caring about how the students were doing, and generally as “terrible”.
  • 78% of students reported that they appreciated when instructors suggested ways to become active and, in some small way, make the situation better (like where to donate blood or give to charity). This is called “problem-focussed coping” and has been found to be effective at reducing stress by giving people agency.
  • 69% of students found being offered deadline extensions, or other alternative ways to deal with the workload of the course, helpful in counteracting the mental load of dealing with a traumatic event.
  • !! Acknowledging the event but not adjusting anything in class was perceived as really unhelpful !!

In addition to the students’ self-reports, clinical research has found psychological interventions, for example journal writing or actively getting involved in community efforts to help others, to be effective ways to deal with trauma.

So in summary, my impression is that it is (as always!) important to show humanity as a teacher: acknowledging that something terrible is happening; showing that we care about students by asking them how they are doing and trying to lessen their burdens as far as that is within our power; giving them the opportunity to reflect on, and possibly digest, what is going on by approaching the topic scientifically within the frame of the course; and supporting them in dealing with the real-world effects of the tragedy by suggesting ways to become active for the benefit of the community. That is quite a tall order, but I think it is good to keep in mind that doing anything positive, no matter how small, is better than doing nothing.


Huston, T. A., & DiPietro, M. (2007). In the eye of the storm: Students’ perceptions of helpful faculty actions following a collective tragedy. To Improve the Academy, 25(1), 207-224.

Microaggressions: How intent and impact don’t always go together.

I’ve recently started including the topic of microaggressions in my academic development workshops, and here is one reflection on the topic (including the super helpful sandals-and-boots analogy by Presley Pizzo). I initially wrote this for a newsletter to all teachers at my faculty, but then I also wrote a second, much more hands-on “three things to do” version, which was ultimately the one deemed more fitting for the target audience. But I still like this one, so here I’m giving you both. So without further ado:

“That’s not what I said, and it’s definitely not what I meant, and do you really think someone like me would do such a thing?”


My notes from last week’s ICED22 conference

Starting out with a wave watching picture from my walk before the first day of the conference. When in Aarhus, I had to get in a 90-minute walk before the first presentation and visit the Infinite Bridge!

Last week, I attended my first conference of ICED, the International Consortium for Educational Development. And since I took an insane amount of notes, which are almost unreadable now and will become completely unreadable as I forget the context, I thought I’d go through them & write them up nicely (mostly for my own benefit, but if you are curious to read about my experience, you are welcome to do so! :-)).


Reducing bias and discrimination in teaching: an annotated, incomplete — WORK IN PROGRESS! — list of references

Already at the time of posting, I have added to my to-read list for an updated version of this post. Please let me know of any additional literature I should include, and of any other comments you might have! As it says in the title, this work is incomplete and in progress!


“The rights perspective, which is also called the justice or democracy perspective, means that everyone should have the right to the same education and career. Gender – or other “dividing categories” – should therefore not affect career opportunities. The research system perspective emphasizes the importance of finding the best people for research, while the socio-economic perspective points out that it is a waste of public resources not to make use of the most suitable candidates. Finally, the epistemological perspective focuses on the fact that increased diversity among researchers leads to greater diversity and thus to better quality in research.” (translated after Schnaas, 2011)

There are many reasons why we should strive to reduce bias in academia. This is a collection of references I curated for their relevance to creating conditions in which students can focus on learning rather than on their gender, race, sexuality, disability, … and where biases regarding student identities are reduced. This document is incomplete (the focus right now is very much on gender, but I am assuming that a lot of the findings are probably transferable to other minorities, and I will look into that literature in the future).

To make this collection relevant in the context of my work at LTH, I have included articles from the “LTHs pedagogiska inspirationskonferens” database. I used the search terms (on 1.4.2022) “bias” (0 results), “diskriminering” (1), “discrimination” (1), “gender” (6), “genus” (1), “kön” (2), “mångfald” (0), “UDL” (2). Some of these articles were not relevant to this topic; others included several of the search terms. In the text, sources from LTH are highlighted as such, and are linked to directly from the text.

The intent of this document is not that everything mentioned here be included in all academic development courses I teach; rather, it gives us “aces up our sleeves” that we can bring in if/when appropriate.

The structure of this document follows the main topics of the “Introduction to Teaching and Learning in Higher Education” course at LTH (NOTE: THE LINKS BELOW ONLY WORK IF YOU LOOK AT THE EXPANDED VERSION OF THE BLOGPOST!):

The student
– Who are we trying to communicate with?
– LTH’s students’ stereotypes of “typical engineers” are different from how they see themselves (Soneson & Torstensson, 2013)
– Research in learning and teaching is done on a sample that might not be relevant in all contexts
— Almost exclusively WEIRD (Henrich et al., 2010)
— Often male-dominated, with a binary understanding of gender, where the achievements of men are used as the gold standard (Traxler et al., 2016)
– Gender-based discrimination does exist
— #metoo in Sweden: #akademiuppropet (Salmonsson, 2020)
— #metoo at Lund University (Agardh et al., 2020)
— #metoo at LTH (Wrammerfors, 2018)
— Almost 4 in 5 female students experience sexual harassment at least once a year, and this affects their motivation to study STEM subjects (Leaper & Starr, 2019)
— Increasing awareness about gender discrimination and sexual harassment helps women realize that it is not their fault that they are being targeted (Weisgram & Bigler, 2007)
– Female and male students attribute their own performance in different ways (Beyer, 1998)
— Physical science career interest is supported by discussion of underrepresentation of women (Hazari et al., 2013)

Student learning
– Students achieve more when they believe that they can develop their abilities, and we can influence that (Yeager & Dweck, 2012)
— Even non-feedback comments can influence student mindset and performance (Smith et al., 2018)
— Activating stereotypes can trigger student underperformance (Steele, 2011)
– Intersectionality: Some students belong to several disadvantaged categories simultaneously (Phoenix & Pattynama, 2006)
– There is a tendency for teachers to explain performance based on the student’s gender (Espinoza et al., 2014)

Course design
– Everybody should be able to participate
— Making learning accessible for everyone: Universal design for learning (Brand et al., 2012)
— Decisions on accommodations due to disabilities are often biased against specific disabilities (Druckman, 2021)
– Choosing learning outcomes, materials & physical space wisely
— Learning outcomes and examples might not appeal to everyone in the same way (Stadler et al., 2000)
— Textbooks and other materials can perpetuate (and activate) stereotypes (Taylor, 1979)
— Including (narratives of) role models for everyone can balance stereotype threat (McIntyre et al., 2003)
— Make sure to include all relevant voices (“Decolonizing the curriculum”; Dessent et al., 2022)
— The physical environment influences who feels welcome and participates (Cheryan et al., 2009)

Learning activities
– Participation matters, and there is a gender gap
— The person who speaks most, not who makes the best points, emerges as leader (MacLaren et al., 2020)
— There is a gender gap in participation in Scandinavia (Ballen et al., 2017)
– Active learning can help reduce gaps
— Interactive engagement reduces gender gap in physics (Lorenzo et al., 2006)
— “Reductions in achievement gaps only occur when course designs combine deliberate practice with inclusive teaching” (Aguillon et al., 2020; Theobald et al., 2020)
— Women like the connections provided by PBL (Reynolds, 2003)
– Students working in small groups
— Optimal group size for (physics) problem solving is three (Heller & Hollabaugh, 1992)
— High-ability groups don’t always perform best (Heller & Hollabaugh, 1992)
— Male students ignore female students’ input to their own detriment (Heller & Hollabaugh, 1992)
— When assigning groups, cluster minorities rather than stretching them across as many groups as possible (Stoddard et al., 2020)
— “Women-only exercise groups” sometimes recommended by teachers at LTH
– Focussed sessions (interventions) can help decrease achievement gaps
— Help students see that they belong in the classroom and that adversity is normal, temporary, and surmountable (Cohen et al., 2006; Hammarlund et al., 2022)
— Help students remember their values (Martens et al., 2006; Cohen et al., 2009; Mijake et al., 2010)

Communication
– Global English (Aarup Jensen et al., 2017)
– If you are new to the Swedish educational system, be aware of cultural differences (Natalle, 2012)
– It matters to students that teachers make an effort to know their names (Tip: name tents!) (Cooper et al., 2017)
– Sensitive language
— Gender-sensitive communication
— Preferred gender pronouns
— Disability-sensitive communication
— Questionable terminology and better alternatives
– When and how to address issues of diversity and inclusion in teaching (when it is not the topic of the course)
— Examples of gender and equality sensitive situations in teaching
— Interrupting microaggressions (Thurber & DiAngelo, 2018)
— Purposefully observing gender-relevant situations: “genusobservatörer” (Carstensen, 2006)
— What kind of resistance to expect when gender becomes a topic of conversation (Carstensen, 2006)
— “Harassment” is a really tricky (and potentially not helpful) concept (Carstensen, 2016)

Assessment
– It matters who gives an exam because it can activate stereotype threats (Marx & Roman, 2002; Marx & Goff, 2005)
– If you have to ask about demographics, do it after the test so you don’t activate stereotype threat (Danaher & Crandall, 2008)
– Multiple-choice questions and gender bias
— Multiple choice questions vs constructed response questions (Weaver & Raptis, 2001)
— Negative marking for multiple-choice questions (Funk & Perrone, 2016)
— Women are more conservative and timid test takers than men (Pekkarinen, 2015)

The teacher
– Your own growth-mindset matters (Hammarlund, 2022)
– Gender bias is everywhere and it is also acting against teachers
– Women have to be better than their male peers to be perceived as equal to them in academic hiring decisions (Eaton et al., 2020)
– Student evaluations of teaching are biased (Heffernan, 2021; but there are ways to use them for good nevertheless! Roxå et al., 2021)
– There is a backlash for men violating gender stereotypes (Moss-Racusin et al., 2018)
– There is documented gender bias at LTH
— Teachers at LTH have, on average, a “Slight automatic association for Male with Science and Female with Liberal Arts” (Allanson et al., 2021)
— Some teachers at LTH write in ways that suggest that men are superior to women (Berndtsson & Thern, 2012)
– Talking about gender, race, and other prejudices and biases is difficult!
— Why it’s so hard for white people to talk about racism (DiAngelo, 2018)
— Sometimes “nothing happens” and still something happens (Husu, 2020)
— “Visibility paradox”: women can be simultaneously highly visible and invisible (Husu, 2020)
– Legal aspects: Defamation lawsuits (Damström, 2020)
– What can you do to support equality?
— “Gender mainstreaming” (European Commission, 2000)
— Women: don’t downplay the effects of gender! (Korvajärvi, 2021)
— Men: accept evidence of gender biases in STEM! (Handley et al., 2015)
— Install institutional strategies for gender equity! (Laursen & Austin, 2020)
— Be careful to not write biased letters of references! (Madera et al., 2009)
