Yesterday, in our “collegial project course: teaching sustainability”, I showed two models of how one might approach thinking about teaching sustainability, and here is another one that I quite like, from the article
Currently reading: “Formative assessment and self‐regulated learning: A model and seven principles of good feedback practice” (Nicol & Macfarlane-Dick, 2006)
Somehow a printout of the “Formative assessment and self‐regulated learning: A model and seven principles of good feedback practice” (Nicol & Macfarlane-Dick, 2006) article ended up on my desk. I don’t know who wanted me to read it, but I am glad I did! See my summary below.
Students’ sense of belonging, and what we can do about it
Last week, Sarah Hammarlund (of “Context Matters: How an Ecological-Belonging Intervention Can Reduce Inequities in STEM” by Hammarlund et al., 2022) gave a presentation here at LTH as part of a visit funded by iEarth*. The visit sparked a lot of good discussions amongst our colleagues about the question: what can we, as teachers, do to help students feel that they belong?
Below, I’m throwing together some ideas on the matter, from all kinds of different sources.
Creating a “time for telling” (Schwartz & Bransford, 1998)
As we are talking more and more about co-creation and all these cool things, I find it important to remember that sometimes, giving a lecture is still a really good choice. Especially when it happens at the right time, when we have created conditions for students to actually want to be told about stuff.
One way to create “a time for telling” for students with very little prior knowledge is described in Schwartz & Bransford (1998): students work with contrasting cases until they are really curious about why the cases differ, and are then prepared to listen to someone giving them an explanation. (One example I have heard mentioned in this context is the coke-and-mentos experiment, which only produces its cool fountain when you use Diet Coke, not any other type of coke. But why???) In this case, listening to a lecture is perceived as the fastest way to learn information that is relevant and interesting at this moment, rather than, as in many other cases, as a boring monologue that needs to be memorised because someone thinks it should be.
Currently reading: “Do Learners Really Know Best? Urban Legends in Education” by Kirschner & van Merriënboer (2013)
In today’s edition of “what article are you currently inspired by?“: an article that my colleague Michael sent me.
“Do Learners Really Know Best? Urban Legends in Education” by Kirschner & van Merriënboer (2013)
How we teach is, of course, influenced by what we believe about “what works” in teaching. However, persistent urban legends are in circulation that sound plausible, at least to some extent (as all urban legends do), and that are also false (again, as all urban legends are). In this article, Kirschner & van Merriënboer (2013) debunk three popular urban legends on learning and teaching.
Currently reading: “Small teaching: Everyday lessons from the science of learning” by Lang (2021)
On the “tea for teaching” podcast episode on trauma-aware pedagogy that I wrote about here, the book by Lang (2021) on “Small teaching: Everyday lessons from the science of learning” was recommended. It sounded so interesting that I decided I had to read it*, and I am glad I did!
The book is full of small (really, really small!) tweaks that individually, and even more so collectively, can improve teaching. But the implicit expectation isn’t to change everything at once, which is obviously not realistic anyway. Instead, the reader is asked to think about the class tomorrow morning (is there one small change for five of the 90 or so minutes I’m teaching? How about opening up in a new and motivating way?), about next semester (can I, for example, change a whole session, or the language on the syllabus?), and about their whole teaching career (if I’m going to do this for another couple of decades, in what direction do I want to develop?).
What I also really like about this book is how all the information is presented on several levels in parallel: in really concrete “small teaching quick tips”, in the underlying principles behind them, and in a more comprehensive explanation that brings in the learning science they were derived from. For me, this led to fun, non-linear reading: for some chapters, I wanted to know the whole background and then see what follows from it; for others I just checked the quick teaching tips and then sometimes dove back into the background to see what they were built on.
I don’t want to give away all the teaching tips here or summarise the whole book, but I am going to mention a couple of things that stood out to me.
In the chapter on belonging, which was the first one I read (again, love the modular way the book is structured, that makes it super easy to jump back and forth), the focus is on one big obstacle to belonging: a fixed mindset and doubts about whether one is good enough to actually be at university. Things we can do to help students feel like they cognitively belong are for example
Asset-spotlighting: an activity that focuses on what is good, not what is lacking: inviting students to introduce themselves to the teacher on notecards, not just with the usual information, but also with something they are especially good at, proud of, or interested in, together with the promise to try and find a way to include it in the class. How inspiring is this! I can totally see how that will influence the teacher and the way they meet the students, but also the students’ mindset, because they are reminded of what they bring to the course that might help them succeed, and of how the course relates to other interests they have. Especially if the teacher actually follows through and includes (some of) the topics students mentioned, making the course relevant to their skills and interests!
Name good work: it’s as easy as the name suggests. Point out when someone did really well, either in front of the class or in a short message afterwards. This does not have to be “just” about deliverables, but could also be about leading a group discussion or any other form of participation. Pointing out that someone contributed is also pointing out that they are in the right place. One really important tip here is to keep a list of who you’ve praised and to make sure that, over the course of the semester, everybody receives praise at least once (based on something good they do, obviously, but everybody does at least one good thing during a semester!).
Normalize help-seeking behaviour: for example by including a statement on the syllabus inviting students to let someone know if there are problems with, for example, food or housing: those things don’t exclude you from belonging at university! This is something that I have always taken for granted, but that really isn’t a given at all, not even in Sweden. So important to be reminded of that! And of course help can and should be sought for other issues, too, like mental health, or study skills, or anything else.
One tiny teaching tip mentioned in the book is to not call on anyone to answer a question before at least five hands are raised. What I’ve been thinking about since: then how do you decide who gets to speak? Which is really interesting to reflect on. How do I usually decide who gets to speak in what order? Is it really only about how quick people are to indicate that they want to say something?
One thing that stood out to me in the chapter on “connecting” is the connection notebook. I have been keeping those all my life but have not explicitly included them in my teaching so far. In fifth grade math, we had a “Merk- und Regelheft” (something like “rules & other stuff to remember”), a special notebook that we kept for the purpose of writing down rules that we learned (in our best handwriting, no less!), and that I have actually kept to this day (see pic at the top of the blog post. This is something that I wrote in 1992, 30 years ago!). Later, during my studies, I was given a “Schlaues Buch” (“clever book”) by my friend: again a special, pretty notebook to keep all kinds of notes related to my thesis work. And I have been keeping lab books ever since, even now that I have not worked in a lab for a really long time: that’s where I keep notes of things I read, presentations I see, thoughts I have, plans I make, brainstorming, sketches, everything. It’s still a physical notebook that travels with me everywhere!

Long story short: based on personal experience, I think it is super useful to keep notes on all kinds of things in one place, both in order to find them quickly and, as is stressed in the book, to discover new connections. And I love the suggestions for how to use this explicitly in teaching: by making time for students to take notes on new things they learned, thoughts they had, connections they made, and by explicitly prompting them to note where they recognise concepts from the course outside of the classroom, in real life, on TV, in another class. But this is something that I was taught at a young age, and which was then prompted again by a role model. So the advice to “build into your teaching approach frequent opportunities for students to come up with their own examples, analogies, and reasons” is really one after my own heart!
It also relates very much to the advice given in the chapter on motivation: your favourite subject is very unlikely to become all of your students’ favourite subject as well. But you can make them start noticing things, wondering what is going on, creating connections with what they see around them!
All these points are just a tiny fraction of all the great advice collected in the book. I totally recommend you read it!
*I requested and received a free digital evaluation copy of the book, which I am grateful for. However, this is not a sponsored post, and I am writing this without having been prompted or receiving any compensation for it.
Lang, J. M. (2021). Small teaching: Everyday lessons from the science of learning. John Wiley & Sons
Summaries of two more inspiring articles recommended by my colleagues: On educational assessment (Hager & Butler, 1996) and on variables associated with achievement in higher ed (Schneider & Preckel, 2017)
Following the call to share inspiring articles, here are two more that I’m summarising below. See the three previous ones (on assessment (Wiliam, 2011), workload (D’Eon & Yasinian, 2021), and quality (Harvey & Stensaker, 2008)) here. And please keep sending me articles that inspire you, I really enjoy reading and summarising them! :)
First up, recommended by Jenny:
“Two models of educational assessment” by Hager & Butler (1996)
Hager & Butler (1996) look at educational assessment and how it has changed over time, from a scientific measurement model towards a judgemental model, and recommend using the latter.
The scientific measurement model aims at summing up performance in one number (or letter) grade, based on criteria and using statistics, in order to be “maximally objective”, valid, and reliable. Examples of this are IQ scores or multiple-choice exams. The tendency with those measures is to make the results about people rather than about their performance on the one specific task of taking the test. But it has been shown over and over that those scores are not a good predictor of future performance in, for example, a job. Not surprisingly, because what is tested is not what is actually required in the job: they test in a very limited context, usually knowledge rather than skills, often using methods that are not suitable, and with no regard to who the test-takers actually are or what their attitudes are like.
The newer judgemental model, on the other hand, is about assessing the competencies that are required in the job, relying on competency models that describe what those competencies are and what it would look like if someone had them. This is how for example problem-based learning or portfolios are typically evaluated. In this model, rather than using only one fixed dataset to come to a conclusion about performance, it is possible to gather more data when a case is unclear, and to come into dialogue with the person being assessed. This dialogue makes it possible to integrate learning and assessment more closely.
Hager & Butler (1996) suggest a model of assessing professional development with three levels:
1. Knowledge, attitudes, and skills
This level can be assessed following the scientific measurement model and consists, for example, of multiple-choice tests of knowledge and cognitive skill, subject-specific problem-solving tasks, and observation of skills in a practice setting. So far, so good, but when professionals, for example medical doctors, are asked to judge colleagues, this is not what their focus is on. So knowledge, attitudes, and skills are necessary, but not enough.
2. Performance in simulations
Performance is context dependent, so on this level, artificial simulations of real-world contexts are created so that the performance can be evaluated on a macro level that depends on bringing together knowledge and skills from several domains. Usually, this is done using checklists. But again, only passing this level is not enough.
3. Personal competence in the practice domain
On this level, people are observed “on the job”. In contrast to the previous two levels, evaluation now happens without formalized checklists and criteria. This makes it very much dependent on who the judge is, but judges can and should learn from each other to get rid of personal biases: “Objectivity is the intelligent learned use of subjectivity, not a denial of it. In the judgmental model of assessment it is the assessor who delivers objectivity, not the data.”
Comparing the underlying assumptions of the intelligence approach/scientific measurement model vs the cognitive approach/judgemental model, Hager & Butler (1996) write: “Whereas the intelligence approach encourages selection of people to fit prespecified jobs, the cognitive approach enables us to view the workplace as a set of opportunities for people to learn and grow.” And isn’t the learning and growing why we are in the job as educators in the first place?
When trying to find this article online, I came across the response by Martin (1997), who supports the original article and warns against falling for the Macnamara fallacy: “making the measurable important rather than the important measurable”. I had never heard it put in those terms, but will keep it in mind as a very nice way of making a very important point!
And now on to article no 2, recommended by Sandra:
“Variables associated with achievement in higher education: A systematic review of meta-analyses” by Schneider & Preckel (2017)
Schneider & Preckel (2017) is a review of meta-analyses and a great starting point when you want to know “what works” in higher education. I wrote a longer summary here, and here is my summary of that summary, mostly based on their “10 cornerstone findings”:
There is A LOT of evidence on what works and what doesn’t in higher education. What comes out is that it doesn’t matter so much what you choose to do, but it does matter that, whatever you do, you do it well. Ideally as a combination of teacher-centred and student-centred approaches, and with as much attention to assessment as to the rest of teaching.
Additionally, there are many small elements that, combined, have a large effect on student learning. In a nutshell: create a climate in which questions and discussions are encouraged and valued and feedback is given often and in a focused way, make clear what the learning goals are, and relate course content to students’ lives, goals, and dreams.
Also be aware that there are a lot of biases and obstacles depending on who the students are and their prior trajectories, and that good study strategies can help any student succeed (and study strategies are best taught within the disciplinary context, not in separate courses).
It is totally worth reading the original article!
What other articles are inspiring you right now? Let me know and I’ll include them in the list!
Hager, P., and Butler, J. (1996). Two models of educational assessment. Assessment & Evaluation in Higher Education, 21(4), 367–378. https://doi.org/10.1080/0260293960210407
Martin, S. (1997). Two models of educational assessment: A response from initial teacher education: If the cap fits …. Assessment & Evaluation in Higher Education, 22(3), 337–343. https://doi.org/10.1080/0260293970220307
Schneider, M., & Preckel, F. (2017). Variables associated with achievement in higher education: A systematic review of meta-analyses. Psychological Bulletin, 143(6), 565–600.
Summaries of three inspiring articles on assessment (Wiliams, 2011), workload (D’Eon & Yasinian, 2021), and quality (Harvey & Stensaker, 2008)
In my group of academic development colleagues at LTH, we just opened up an internal call for the one (or two, or three, or more) articles that are most influential for our current thinking. And I want to make sure I read them all, so here are summaries of the first (and, as of just now, only) three. I’ll add to the list if and when I receive more… And I can say that they are all inspiring in very different ways, but now I am done for today! :-D
First up, sent by Torgny:
“What is assessment for learning?” by Wiliam (2011)
Wiliam (2011) is an overview article on how the term “assessment for learning” developed and changed meaning over time, and relates this to classroom practice. Here is my short summary:
In contrast to assessment that is done after the learning process, “assessment for learning” and “formative assessment” are terms used for processes that guide learning towards intended goals during the learning process. These concepts developed because it was recognised that the same instruction does not lead to the same results independently of who the learners are, and that failure to learn is not necessarily the learners’ fault. Bloom stated at some point that learners would be able to learn better if they received “feedback” and “correctives”.
“Feedback” as a term is not very useful, however: feedback is only information about a gap between a desired and an actual state; it does not, in itself, do anything to close that gap. Specifically, it does not necessarily contain any information on how the learner can improve their learning. So in that sense, giving feedback by itself is pointless.
There has been a whole lot of research on assessment and classroom learning, which is summarized next. Basically: there are a bunch of conflicting results. One thought that I found interesting, though, is that there are basically four ways to respond to feedback: change behaviour to reach the goal, modify the goal, abandon the goal, or reject the feedback. Only two of these responses, increasing effort and aiming for a bigger goal, will actually lead to more learning than if no feedback was given. So how do we get students to pick one of those two? Wiliam (2011) quotes Shute’s (2008) guidelines to enhance learning (make feedback specific, give clear guidance for how to improve, make it about the task, not the learner, make it as simple as possible while keeping it as complex as necessary, …) and on timing (for procedural learning, give feedback immediately).
After more elaboration on what different authors mean by formative assessment, Wiliam (2011) refers back to the definition from Black & Wiliam (2009):
“Practice in a classroom is formative to the extent that evidence about student achievement is elicited, interpreted, and used by teachers, learners, or their peers, to make decisions about the next steps in instruction that are likely to be better, or better founded, than the decisions they would have taken in the absence of the evidence that was elicited. (Black & Wiliam, 2009, p. 9)”
Two features come out as especially important:
- The assessment should not just show a gap between what should be and what is, but also contain information that can help improve instruction
- The learner must act on it in a way that supports their learning: Students can take one of two pathways: The “growth pathway” (with the goal to learn more), or the “well-being pathway” (with the goal to minimize harm, which can include more learning, but can also mean disengaging).
If we can design assessment in such a way to meet both criteria, it is likely to lead to more student engagement and better learning outcomes.
The next two articles were sent by Per:
“Student work: a re-conceptualization based on prior research on student workload and Newtonian concepts around physical work” by D’Eon & Yasinian (2021)
The article by D’Eon & Yasinian (2021) begins by making a very strong case for the importance of considering student workload, giving an overview of all the negative effects an excessive workload has on students: not only does it hinder their learning and kill their motivation, it also makes plagiarism and other forms of cheating more likely, and it affects students’ social lives and health. But there is no good definition of what workload actually means: sometimes it’s defined in “objective” terms, for example as time-on-task (but who records and reports it, and how “on task” does something have to be to count?), other times in “subjective” terms, for example perceived effort. In any case, gut feeling tells us that there is a link between how much effort students put into something and how well they succeed. D’Eon & Yasinian (2021) suggest a link, based on Newton:
SW = Eff × Ach
where SW is student work (not workload, to point out that it is the work a student actually does, not what is imposed on them by a course), Eff is effort, and Ach is achievement. In this equation, effort is seen as analogous to the force that acts on the course requirements, the mass, to move them to a given achievement, over a given distance.
But what exactly is student academic effort? D’Eon & Yasinian (2021) identify four different, but interrelated, domains: cognitive (how much cognitive capacity is required? This also depends on prior knowledge, for example), physical (being awake, getting to and from campus, buying required textbooks, …), psychological (organising themselves, keeping up motivation, …), and social (working in groups, etc.). Each of the four dimensions obviously relies on resources that are unequally distributed between students, for example cognitive ability, health, high intrinsic motivation, or a good network. Institutions can help make resources more equally available to all students by, for example, providing materials or coaching for free to everybody, or by making deadlines flexible so they can be adapted to work schedules. This model is really helpful: it shows that considering all four domains, and what resources we could provide students with so that their effort ends up in the dimension we want it to end up in (probably the cognitive one), is directly relevant to student work, and that the course load (as specified in the syllabus) and the course demands (as experienced by students) are two very different beasts. And the way this is presented speaks to my physical oceanographer’s heart :)
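As a playful illustration of how the equation and the four effort domains fit together, here is a little sketch of my own (not from the article; all numbers, variable names, and the unit-less scales are invented for illustration only):

```python
# Toy sketch of D'Eon & Yasinian's (2021) analogy SW = Eff x Ach,
# with effort split into their four domains. Everything here is
# illustrative: the article gives no units or numbers.

def student_work(effort: dict, achievement: float) -> float:
    """Total effort (summed over the four domains) times achievement."""
    total_effort = sum(effort.values())
    return total_effort * achievement

# Hypothetical effort profile of one student on a made-up scale:
effort = {
    "cognitive": 5.0,      # grappling with new concepts
    "physical": 1.0,       # commuting, attending class
    "psychological": 2.0,  # staying organised and motivated
    "social": 1.5,         # coordinating group work
}

work = student_work(effort, achievement=0.8)
print(round(work, 2))  # 7.6 on this made-up scale
```

The point the sketch tries to make is the same as the article’s: if institutions can shift some of the physical, psychological, and social effort off students’ plates, more of the total effort budget ends up in the cognitive domain.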
So now on to article no 3:
“Quality culture: Understandings, boundaries and linkages” by Harvey & Stensaker (2008)
I summarised this article before (good thing I have a blog as an external memory), but here is my summary of my summary: It is really helpful to clarify what we actually mean when we talk about “quality”, because it can mean very different things to very different people! For example, is striving for quality about being exceptional in what we do? Or is it that we are consistently meeting pre-defined measures? Or that outcomes meet a purpose? Or do we mean value for money? Or are we talking about improvement over time?
When we then want to improve quality of whatever flavour, it is important to consider the quality culture in which this is supposed to happen. A 2×2 matrix of strong-to-weak group control and strong-to-weak external rules leads to four types of quality culture:

- responsive (strong group control, strong external rules), where impulses from the outside are mostly taken up as opportunities and only sometimes perceived as imposed and as constricting ownership and degrees of freedom;
- reactive (weak group control, strong external rules), where measures of quality are imposed from the outside and dealt with when necessary, but not in an integrated way;
- regenerative (strong group control, weak external rules), where external expectations are incorporated as far as they are perceived as useful to further the internal agenda;
- reproductive (weak group control, weak external rules), where the status quo is maintained with as little influence from the outside as possible.

Quality assurance strategies are of course most likely to be successful if they work with the existing quality culture.
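Because the typology is literally a 2×2 matrix, it can be written down as a tiny lookup table. This is just my own toy encoding of the four types from Harvey & Stensaker (2008), not anything from the article itself:

```python
# Toy encoding of Harvey & Stensaker's (2008) 2x2 typology of
# quality cultures. Keys are (group_control, external_rules),
# each either "strong" or "weak".
QUALITY_CULTURES = {
    ("strong", "strong"): "responsive",
    ("weak", "strong"): "reactive",
    ("strong", "weak"): "regenerative",
    ("weak", "weak"): "reproductive",
}

def quality_culture(group_control: str, external_rules: str) -> str:
    """Look up the quality-culture type for a given combination."""
    return QUALITY_CULTURES[(group_control, external_rules)]

print(quality_culture("strong", "weak"))  # regenerative
```

Writing it out this way mostly helps me remember which label goes with which quadrant.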
So those are the three “key articles” so far. Any comments? What other article should we include in the list? Please let me know! :-)
D’Eon, M., & Yasinian, M. (2021). Student work: A re-conceptualization based on prior research on student workload and Newtonian concepts around physical work. Higher Education Research & Development. https://doi.org/10.1080/07294360.2021.1945543
Harvey, L., & Stensaker, B. (2008). Quality culture: Understandings, boundaries and linkages. European journal of Education, 43(4), 427-442.
Wiliam, D. (2011). What is assessment for learning? Studies in Educational Evaluation, 37(1), 3–14.
Why it’s important to use students’ names, and how to make it easy: use name tents! (After Cooper et al., 2017)
One thing I really enjoy about teaching virtually is that it is really easy to address everybody by their names with confidence, since their names are always right there, right below their faces. But that really does not have to end once we are back in lecture theatres again, because even in large classes, we can always build and use name tents. And voilà: names are there again, right underneath people’s faces!
Does that sound a bit silly when there are dozens or hundreds of students in the lecture theatre? Both because it has a kindergarten feel, and because there are so many names, some of them too far away to read from the front, and you can’t possibly address this many students by name anyway? In last week’s CHESS/iEarth workshop on “students as partners”, run by Cathy and Mattias, we touched upon the importance of knowing students’ names, and that reminded me of an article that I’ve been wanting to write about forever, which gives a lot of good reasons for using name tents: “What’s in a name? The importance of students perceiving that an instructor knows their names in a high-enrollment biology classroom” by Cooper et al. (2017). So here we go!
In that biology class with 185 students, the instructors encouraged the regular use of name tents (those folded pieces of paper that students put up in front of themselves), and afterwards their impact was investigated. What the authors found: in the large classes students had taken previously, only 20% of the students thought that instructors knew their names. In this class, it was actually 78% (even though in reality, instructors knew only 53% of the names). And 85% of students felt that instructors knowing their names was important, for nine different reasons that Cooper and colleagues classified into three categories:
- When students think the instructor knows their names, it affects their attitude towards the class since they feel more valued and also more invested.
- Students then also behave differently, because they feel more comfortable asking for help and talking to the instructor in general. They also feel like they are doing better in the class and are more confident about succeeding in class.
- It also changes how they perceive the course and the instructor: In the course, it helps them build a community with their peers. They also feel that it helps create relationships between them and the instructor, and that the instructor cares about them, and that the chance of getting mentoring or letters of recommendation from the instructor is increased.
So what does that mean for us as instructors? I agree with the authors that this is a “low-effort, high-impact” practice. Paper tents cost next to nothing, and they don’t require any preparation effort on the instructor’s side (other than that it might be helpful to supply some paper). Using them is as simple as asking students to make them, and then regularly reminding them to put them up again (in the class described in the article, this happened both verbally and on the first slide of the presentation). Obviously, we then also need to make use of the name tents and actually call students by their names, and not only the ones in the first row, but also the ones further in the back (and walking through a classroom, both while presenting and while students are working in small groups or on their own, as for example in a think-pair-share setting, is a good strategy in any case, because it breaks things up and gives more students direct access to the instructor). And in the end, students sometimes even felt that the instructors knew their names when they, in fact, did not, so we don’t actually have to know all the names for positive effects to occur (but I wonder what happens if students switch name tents for fun and the instructor does not notice. Is that going to affect just the two who switched, or more people, since the illusion has been blown?).
In any case, I will definitely be using name tents next time I’m actually in the same physical space as other people. How about you? (Also, don’t forget to include pronouns! Read Laura Guertin’s blogpost on why)
Cooper, K. M., Haney, B., Krieg, A., & Brownell, S. E. (2017). What’s in a name? The importance of students perceiving that an instructor knows their names in a high-enrollment biology classroom. CBE—Life Sciences Education, 16(1), ar8.