Tag Archives: feedback

Fostering student sense of belonging in a large online class (after Lim, Atif, Farmer; 2022)

When we talk about fostering student sense of belonging, it is easiest to think about in-person interactions. However, a lot of our teaching these days is online, and in high-enrolment courses. What can we do then? Two elements are critical: teacher presence and interactive course design. Lim, Atif and Farmer (2022) present a case study of a learning analytics feedback intervention that I will summarize below.

Currently reading Cohen, Steele & Ross (1999) “The mentor’s dilemma: Providing critical feedback across the racial divide”

It seems to be common knowledge in my network that effective teachers articulate both high standards and their belief that students can meet those standards. Looking for sources for this in the literature, I came across Cohen, Steele & Ross (1999)’s “The mentor’s dilemma: Providing critical feedback across the racial divide”, which I’ll summarise below.

Currently reading: “Formative assessment and self‐regulated learning: A model and seven principles of good feedback practice” (Nicol & Macfarlane-Dick, 2006)

Somehow a printout of the “Formative assessment and self‐regulated learning: A model and seven principles of good feedback practice” (Nicol & Macfarlane-Dick, 2006) article ended up on my desk. I don’t know who wanted me to read it, but I am glad I did! See my summary below.

Using peer feedback to improve students’ writing (Currently reading Huisman et al., 2019)

I wrote about involving students in creating assessment criteria and quality definitions for their own learning on Thursday, and today I want to think a bit about also involving students in the feedback process, based on the article by Huisman et al. (2019), “The impact of formative peer feedback on higher education students’ academic writing: a Meta-Analysis”. That article brings together the available literature on peer feedback specifically on academic writing, and it turns out that across all studies, peer feedback does improve student writing. So here is what that might mean for our own teaching:

Peer feedback is as good as teacher feedback

Great news (actually, not so new: there are many studies showing this!): Students can give feedback to each other that is of comparable quality to what teachers give them!

Even though a teacher is likely to have more expert knowledge, which might make their feedback more credible to some students (those with a strong trust in authorities), peer feedback might feel more relevant to other students, and there is no systematic difference between improvement after peer feedback and after feedback from teaching staff. One way to alleviate fears about the quality of peer feedback is to use it purely (or mostly) formatively, while the teacher does the summative assessment themselves.

Peer feedback is good for both giver and receiver

If we as teachers “use” students to provide feedback to other students, it might seem like we are pushing part of our job onto the students. But: Peer feedback improves writing both for the students giving it and for the ones receiving it! Giving feedback means actively engaging with the quality criteria, which might improve the giver’s own future writing, and doing peer feedback actually improves future writing more than self-assessment alone. This might be, for example, because students, both as feedback givers and receivers, are exposed to different perspectives on and approaches towards the content. So there is actual benefit to student learning in giving peer feedback!

It doesn’t hurt to get feedback from more than one peer

Thinking about the logistics in a classroom, one question is whether students should receive feedback from one or from multiple peers. It turns out that the literature shows no (significant) difference. But my gut feeling says that getting feedback from multiple peers creates redundancy in case the quality of one piece of feedback is really low, or the feedback isn’t given at all. And since students also benefit from giving peer feedback, I see no harm in having students give feedback to multiple peers.

A combination of grading and free-text feedback is best

So what kind of feedback should students give? For students receiving peer feedback, a combination of grading/ranking and free-text comments has the largest effect, probably because it shows how current performance relates to ideal performance, and also gives concrete advice on how to close the gap. For students giving feedback, I would speculate that a combination of both is also the most useful, because then they need to commit to a quality assessment, give reasons for their assessment, and also think about what would actually improve the piece they read.

So based on the Huisman et al. (2019) study, let’s have students do a lot of formative assessment on each other*, both rating and commenting on each other’s work! And to make it easier for the students, remember to give them good rubrics (or let them create those rubrics themselves)!

Are you using student peer feedback already? What are your experiences?

*The Huisman et al. (2019) study was actually only about peer feedback on academic writing, but I’ve seen studies using peer feedback on other types of tasks with similar results, and I also don’t see why there would be other mechanisms at play when students give each other feedback on things other than their academic writing…


Huisman, B., Saab, N., van den Broek, P., & van Driel, J. (2019). The impact of formative peer feedback on higher education students’ academic writing: a Meta-Analysis. Assessment & Evaluation in Higher Education, 44(6), 863-880. https://doi.org/10.1080/02602938.2018.1545896

#Methods2Go: Methods for feedback and reflection in university teaching

More methods today, inspired by E.-M. Schumacher’s “Methoden 2 go online!”. Today:

Evaluating

Flashlight

I used to hate it when, in in-person workshops, everybody was asked to give a statement at the end about the most important thing they learned, or how they liked something, or that kind of thing, because of the pressure I felt in those situations. But virtually, for example as a lightning storm in the chat, I rather like the method, because it gives an equal voice to everybody instead of a few people dominating everything, and it’s also documented rather than everybody just quickly saying something before rushing off. It’s definitely a nice way to get a quick impression from everybody!

Doing this synchronously (as in everybody submitting what they wrote at the same time) also gives you an overview that is less biased, in the sense that no group opinion forms as people start talking that later respondents don’t want to go against. And sometimes there are weird group dynamics at play when people start off negatively and everybody just keeps piling on…

Letter to myself

Another method I quite like: asking students to write a letter to themselves where they reflect on what they learned. This can happen virtually as an email, and I’ve even used it in in-person workshops on paper, where people then put it in a sealed envelope and we sent it out to them a couple of weeks later. I really liked getting those letters from former me, especially when I had set goals or points to follow up on, and was reminded of them! The time delay there is quite useful (spaced repetition? ;-)) and also getting hand-written mail (even if written by myself) is always nice…

Five finger feedback

Five finger feedback can be done in in-person workshops, but also virtually (for example in a table with five columns where everybody notes down their comments).

  1. The thumb: What went well?
  2. The index finger: What could be improved?
  3. The middle finger: What went wrong? Negative feedback.
  4. The ring finger: What would we like to keep?
  5. The pinkie finger: What did not get enough attention?

In in-person settings, this tends to take a looong time, and it also puts too much pressure on participants for me to feel comfortable with it, but I can see this working a lot better online!

Packing my bags

This is another fun method to look at what students want to remember from a lesson: having a graphic of a suitcase or bag, and then adding sticky notes with the things students want to take away from the workshop. Works offline as well as online! But then it’s not really different from minute papers etc., so maybe use it to spice things up occasionally. Or, if you use it regularly, seeing the graphic of the luggage might already act as a trigger for students to start on the task, without you having to remind them. That might actually also work well!

Coming up with exam questions

Always a great method: Asking students to come up with good exam questions. They can then be discussed in small groups or with the large group, used as exercises practicing for the exam, or even used in the final exam!

But beware: Coming up with good exam questions is really difficult, and students might need a lot of guidance, for example discussing a grading rubric and what kind of knowledge and skills a good exam question should let students demonstrate. And I would always ask them to provide the solution along with the question, because otherwise it is really difficult for students to get a good idea of how difficult or easy a question is (usually questions become super difficult if students try to make them interesting).

That’s it for now about E.-M. Schumacher’s “Methoden 2 go online!”. There are plenty more methods where these came from. Would you be interested in reading about more?

Student evaluations of teaching are biased, sexist, racist, prejudiced. My summary of Heffernan’s 2021 article

One of my pet peeves is student evaluations that are interpreted way beyond what they can actually tell us. It might be people not considering sample sizes when looking at statistics (“66.6% of students hated your class!”, “Yes, 2 out of 3 responses out of 20 students said something negative”), or not understanding that student responses to certain questions don’t tell us “objective truths” (“I learned much more from the instructor who let me just sit and listen rather than actively engaging me” (see here)). I blogged previously about a couple of articles on the subject of biases in student evaluations, which were basically a collection of all the scary things I had read, but in no way a comprehensive overview. Therefore I was super excited when I came across a systematic review of the literature this morning. And let me tell you, looking at the literature systematically did not improve things!

In the article “Sexism, racism, prejudice, and bias: a literature review and synthesis of research surrounding student evaluations of courses and teaching” (2021), Troy Heffernan reports on a systematic analysis of the literature of the last 30 years that is represented in the major databases, published in peer-reviewed English journals or books, and contains relevant terms like “student evaluations” in titles, abstracts or keywords. This resulted in 136 publications being included in the study, plus a further 47 that were found in the references of those articles and deemed relevant.

The conclusion of the article is clear: Student evaluations of teaching are biased depending on who the evaluating students are, on who the instructor is and on prejudices related to characteristics they display, on the actual course being evaluated, and on many more factors not related to the instructor or to what is going on in their class. Student evaluations of teaching are therefore not a tool that should be used to determine teaching quality, or to base hiring or promotion decisions on. Additionally, the groups that are already disadvantaged in their evaluation results because of personal characteristics that students are biased against also receive abusive comments in student evaluations that are harmful to their mental health and wellbeing, which should be reason enough to change the system.

Here is a brief overview over what I consider the main points of the article:

It matters who the evaluating students are, what course you teach and what setting you are teaching in.

According to the studies compiled in the article, your course is evaluated differently depending on who the students are that are evaluating it. Female students evaluate on average 2% more positively than male students. The average evaluation improves by up to 6% when given by international students, older students, external students or students with better grades.

It also depends on what course you are teaching: STEM courses are on average evaluated less positively than courses in the social sciences and humanities. And comparing quantitative and qualitative subjects, it turns out that subjects that have a right or wrong answer are also evaluated less positively than courses where the grades are more subjective, e.g. using essays for assessment.

Additionally, student evaluations of teaching depend on even more factors besides course content and effectiveness, for example class size and general campus-related things like how clean the university is, whether there are good food options available to students, what the room setup is like, and how easy to use course websites and admission processes are.

It matters who you are as a person

Many studies show that gender, ethnicity, sexual identity, and other factors have a large influence on student evaluations of teaching.

Women (or instructors wrongly perceived as female, for example because of a name or avatar) are rated more negatively than men and, no matter the factual basis, receive worse ratings on objective measures like the turnaround time of essays. The way students react to their grades also depends on their instructor’s gender: When students get the grades they expected, male instructors get rewarded with better scores; when expectations are not met, men get punished less than women. The bias is so strong that for young (under 35 years old) women teaching in male-dominated subjects, effects of up to 37% lower ratings have been shown.

These biases in student evaluations strengthen the position of an already privileged group: white, able-bodied men of a certain age (ca. 35-50 years old), whom the students believe to be heterosexual and who are teaching in their (and their students’) first language, get evaluated a lot more favourably than anybody who does not meet one or several of these criteria.

Abuse disguised as “evaluation”

Sometimes evaluations are also used by students to express anger or frustration, and this can lead to abusive comments. Those comments are not distributed equally between all instructors, though: they are a lot more likely to be directed at women and other minorities, and they are cumulative. The more minority characteristics an instructor displays, the more abusive comments they will receive. This racist, sexist, ageist, homophobic abuse is obviously hurtful and harmful to an already disadvantaged population.

My 2 cents

Reading the article, I can’t say I was surprised by the findings — unfortunately, my impression of the general literature landscape on the matter was only confirmed by this systematic analysis. However, I was positively surprised by the very direct way in which problematic aspects are called out in many places: “For example, women receive abusive comments, and academics of colour receive abusive comments, thus, a woman of colour is more likely to receive abuse because of her gender and her skin colour”. On the one hand, this is really disheartening to read because it becomes so tangible and real, especially since student evaluations are not only harmful to instructors’ mental health and well-being when they contain abuse, but are also still an important tool in determining people’s careers via hiring and promotion decisions. But on the other hand, it really drives home the message and call to action to change these practices, which I appreciate very much: “These practices not only harm the sector’s women and most underrepresented and vulnerable, it cannot be denied that [student evaluations of teaching] also actively contribute to further marginalising the groups universities declare to protect and value in their workforces.”

So let’s get going and change evaluation practices!


Heffernan, T. (2021). Sexism, racism, prejudice, and bias: a literature review and synthesis of research surrounding student evaluations of courses and teaching. Assessment & Evaluation in Higher Education, 1-11.

#TeachingTuesday: Student feedback and how to interpret it in order to improve teaching

Student feedback has become a fixture in higher education. But even though it is important to hear student voices when evaluating teaching and thinking of ways to improve it, students aren’t perfect judges of what type of teaching leads to the most learning, so their feedback should not be taken on board without critical reflection. In fact, there are many studies that investigate specific biases that show up in student evaluations of teaching. So in order to use student feedback to improve teaching (both on the individual level, when we consider changing aspects of our classes based on student feedback, and at the institutional level, when evaluating teachers for personnel decisions), we need to be aware of the biases that student evaluations of teaching come with.

While student satisfaction may contribute to teaching effectiveness, it is not itself teaching effectiveness. Students may be satisfied or dissatisfied with courses for reasons unrelated to learning outcomes – and not in the instructor’s control (e.g., the instructor’s gender).
Boring et al. (2016)

What student evaluations of teaching tell us

In the following, I am not presenting a coherent theory (and if you know of one, please point me to it!); these are snippets of current literature on student evaluations of teaching, many of which I found referenced in this annotated literature review on student evaluations of teaching by Eva (2018). The aim of my blogpost is not to provide a comprehensive literature review, but rather to point out that there is a huge body of literature out there that teachers and higher ed administrators should know exists, and that they can draw upon when in doubt (and ideally even when not in doubt ;-)).

6 second videos are enough to predict teacher evaluations

This is quite scary, so I thought it made sense to start out with this study. Ambady and Rosenthal (1993) found that silent videos shorter than 30 seconds, in some cases as short as 6 seconds, significantly predicted global end-of-semester student evaluations of teachers. These are videos that do not even include a sound track. Let this sink in…

Student responses to questions of “effectiveness” do not measure teaching effectiveness

And let’s get this out of the way right away: When students are asked to judge teaching effectiveness, that answer does not measure actual teaching effectiveness.

Stark and Freishtat (2014) give “an evaluation of course evaluations”. They conclude that student evaluations of teaching, though providing valuable information about students’ experiences, do not measure teaching effectiveness. Instead, ratings are even negatively associated with direct measures of teaching effectiveness and are influenced by the gender, ethnicity and attractiveness of the instructor.

Uttl et al. (2017) conducted a meta-analysis of faculty’s teaching effectiveness and found that “student evaluation of teaching ratings and student learning are not related”. They state that “institutions focused on student learning and career success may want to abandon [student evaluation of teaching] ratings as a measure of faculty’s teaching effectiveness”.

Students have their own ideas of what constitutes good teaching

Nasser-Abu Alhija (2017) showed that out of five dimensions of teaching (goals to be achieved, long-term student development, teaching methods and characteristics, relationships with students, and assessment), students viewed the assessment dimension as most important and the long-term student development dimension as least important. To students, the grades that instructors assigned and the methods they used to do this were the main aspects in judging good teaching and good instructors. Which is fair enough — after all, good grades help students in the short term — but that’s also not what we usually think of when we think of “good teaching”.

Students learn less from teachers they rate highly

Kornell and Hausman (2016) review recent studies and report that when learning is measured at the end of the respective course, the “best” teachers got the highest ratings, i.e. the ones where the students felt that they had learned the most (which is congruent with Nasser-Abu Alhija (2017)’s findings of what students value in teaching). But when learning was measured during later courses, i.e. when meaningful deep learning was considered, other teachers seem to have been more effective. Introducing desirable difficulties is thus good for learning, but bad for student ratings.

Appearances can be deceiving

Carpenter et al. (2013) compared a fluent video (instructor standing upright, maintaining eye contact, speaking fluidly without notes) and a disfluent video (instructor slumping, looking away, speaking haltingly with notes). They found that even though the amount of learning that took place when students watched either of the videos wasn’t influenced by the lecturer’s fluency or lack thereof, the disfluent lecturer was rated lower than the fluent lecturer.

The authors note that “Although fluency did not significantly affect test performance in the present study, it is possible that fluent presentations usually accompany high-quality content. Furthermore, disfluent presentations might indirectly impair learning by encouraging mind wandering, reduced class attendance, and a decrease in the perceived importance of the topic.”

Students expect more support from their female professors

When students rate teachers’ effectiveness, they do so based on their assumptions of how effective a teacher should be, and it turns out that they have different expectations depending on the gender of their teachers. El-Alayli et al. (2018) found that “female professors experience more work demands and special favour requests, particularly from academically entitled students”. This was true both when male and female faculty reported on their experiences and when students were asked what their expectations of fictional male and female teachers were.

Student teaching evaluations punish female teachers

Boring (2017) found that even when learning outcomes were the same for students in courses taught by male and female teachers, female teachers received worse ratings than male teachers. This got even worse when teachers didn’t act in accordance with the stereotypes associated with their gender.

MacNell et al. (2015) found that believing that an instructor was female (in a study of online teaching where male and female names were sometimes assigned according to the actual gender of the teacher and sometimes not) was sufficient to rate that person lower than an instructor that was believed (correctly or not) to be male.

White male students challenge women of color’s authority, teaching competency, and scholarly expertise, as well as offering subtle and not so subtle threats to their persons and their careers

This title was drawn from the abstract of Pittman (2010)’s article, which I unfortunately didn’t have access to, but thought an important enough point to include anyway.

There are many more studies on race, and especially on women of color, in teaching contexts, which all show that they are facing a really unfair uphill battle.

Students will punish a perceived accent

Rubin and Smith (1990) investigated “effects of accent, ethnicity, and lecture topic on undergraduates’ perceptions of nonnative English-speaking teaching assistants” in North America and found that 40% of undergraduates avoid classes instructed by nonnative English-speaking teaching assistants, even though the actual accentedness of teaching assistants did not influence student learning outcomes. Nevertheless, students judged teaching assistants they perceived as speaking with a strong accent as poorer teachers.

Similarly, Sanchez and Khan (2016) found that “presence of an instructor accent […] does not impact learning, but does cause learners to rate the instructor as less effective”.

Students will rate minorities differently

Ewing et al. (2003) report that lecturers who were identified as gay or lesbian received lower teaching ratings than lecturers with undisclosed sexual orientation when they were, according to other measures, performing very well. Poor teaching performance was, however, rated more positively, possibly to avoid discriminating against openly gay or lesbian lecturers.

Students will punish age

Stonebraker and Stone (2015) find that “age does affect teaching effectiveness, at least as perceived by students. Age has a negative impact on student ratings of faculty members that is robust across genders, groups of academic disciplines and types of institutions”. Apparently, as far as students are concerned, from your mid-40s on you aren’t an effective teacher any more (unless you are still “hot” and “easy”).

Student evaluations are sensitive to students’ gender and grade expectations

Boring et al. (2016) find that “[student evaluation of teaching] are more sensitive to students’ gender bias and grade expectations than they are to teaching effectiveness”.

What can we learn from student evaluations then?

Pay attention to student comments but understand their limitations. Students typically are not well situated to evaluate pedagogy.
Stark and Freishtat (2014)

Does all of the above mean that student evaluations are biased in so many ways that we can’t actually learn anything from them? I do think that there are things that should not be done on the basis of student evaluations (e.g. rank teacher performance), and I do think that most times, student evaluations of teaching should be taken with a pinch of salt. But there are still ways in which the information gathered is useful.

Even though student satisfaction is not the same as teaching effectiveness, it might still be desirable to know how satisfied students are with specific aspects of a course. And especially open formats like for example the “continue, start, stop” method are great for gaining a new perspective on the classes we teach and potentially gaining fresh ideas of how to change things up.

Tracking one’s own evaluations over time is also helpful, since — apart from aging — other changes are hopefully intentional and can thus tell us something about our own development, at least assuming that different student cohorts evaluate teaching performance in a similar way. Getting student feedback at a later date might also be helpful; sometimes students only realize later which teachers they learnt from the most, or which methods were actually helpful rather than just annoying.

A measure that doesn’t come directly from student evaluations of teaching but that I find very important to track is student success in later courses. Especially when that isn’t measured in a single grade, but when instructors come together and discuss how students are doing in tasks that build on previous courses. Having a well-designed curriculum and a very good idea of what ideas translate from one class to the next is obviously very important.

It is also important to keep in mind that, as Stark and Freishtat (2014) point out, statistical methods are only valid if there are enough responses to actually do statistics on. So don’t take a few horrible comments to heart while ignoring the whole bunch of people who are gushing about how awesome your teaching is!

P.S.: If you are an administrator or on an evaluation committee and would like to use student evaluations of teaching, the article by Linse (2017) might be helpful. They give specific advice on how to use student evaluations both in decision making as well as when talking to the teachers whose evaluations ended up on your desk.

Literature:

Ambady, N., & Rosenthal, R. (1993). Half a minute: Predicting teacher evaluations from thin slices of nonverbal behavior and physical attractiveness. Journal of Personality and Social Psychology, 64(3), 431–441. https://doi.org/10.1037/0022-3514.64.3.431

Boring, A. (2017). Gender biases in student evaluations of teachers. Journal of Public Economics, 145(13), 27–41. https://doi.org/10.1016/j.jpubeco.2016.11.006

Boring, A., Ottoboni, K., & Stark, P. B. (2016). Student evaluations of teaching (mostly) do not measure teaching effectiveness. ScienceOpen Research, 1–36. https://doi.org/10.14293/S2199-1006.1.SOR-EDU.AETBZC.v1

Carpenter, S. K., Wilford, M. M., Kornell, N., & Mullaney, K. M. (2013). Appearances can be deceiving: Instructor fluency increases perceptions of learning without increasing actual learning. Psychonomic Bulletin & Review, 20(6), 1350–1356. https://doi.org/10.3758/s13423-013-0442-z

El-Alayli, A., Hansen-Brown, A. A., & Ceynar, M. (2018). Dancing backward in high heels: Female professors experience more work demands and special favour requests, particularly from academically entitled students. Sex Roles. https://doi.org/10.1007/s11199-017-0872-6

Eva, N. (2018). Annotated literature review: Student evaluations of teaching (SET). https://hdl.handle.net/10133/5089

Ewing, V. L., Stukas, A. A. J., & Sheehan, E. P. (2003). Student prejudice against gay male and lesbian lecturers. Journal of Social Psychology, 143(5), 569–579. http://web.csulb.edu/~djorgens/ewing.pdf

Kornell, N. & Hausman, H. (2016). Do the Best Teachers Get the Best Ratings? Front. Psychol. 7:570. https://doi.org/10.3389/fpsyg.2016.00570

Linse, A. R. (2017). Interpreting and using student ratings data: Guidance for faculty serving as administrators and on evaluation committees. Studies in Educational Evaluation, 54, 94-106. https://doi.org/10.1016/j.stueduc.2016.12.004

MacNell, L., Driscoll, A., & Hunt, A. N. (2015). What’s in a name: Exposing gender bias in student ratings of teaching. Innovative Higher Education, 40(4), 291–303. https://doi.org/10.1007/s10755-014-9313-4

Nasser-Abu Alhija, F. (2017). Teaching in higher education: Good teaching through students’ lens. Studies in Educational Evaluation, 54, 4-12. https://doi.org/10.1016/j.stueduc.2016.10.006

Pittman, C. T. (2010). Race and Gender Oppression in the Classroom: The Experiences of Women Faculty of Color with White Male Students. Teaching Sociology, 38(3), 183–196. https://doi.org/10.1177/0092055X10370120

Rubin, D. L., & Smith, K. A. (1990). Effects of accent, ethnicity, and lecture topic on undergraduates’ perceptions of nonnative English-speaking teaching assistants. International Journal of Intercultural Relations, 14, 337–353. https://doi.org/10.1016/0147-1767(90)90019-S

Sanchez, C. A., & Khan, S. (2016). Instructor accents in online education and their effect on learning and attitudes. Journal of Computer Assisted Learning, 32, 494–502. https://doi.org/10.1111/jcal.12149

Stark, P. B., & Freishtat, R. (2014). An Evaluation of Course Evaluations. ScienceOpen, 1–26. https://doi.org/10.14293/S2199-1006.1.SOR-EDU.AOFRQA.v1

Stonebraker, R. J., & Stone, G. S. (2015). Too old to teach? The effect of age on college and university professors. Research in Higher Education, 56(8), 793–812. https://doi.org/10.1007/s11162-015-9374-y

Uttl, B., White, C. A., & Gonzalez, D. W. (2017). Meta-analysis of faculty’s teaching effectiveness: Student evaluation of teaching ratings and student learning are not related. Studies in Educational Evaluation, 54, 22-42. http://dx.doi.org/10.1016/j.stueduc.2016.08.007

“Continue. Start. Stop.”. An article supporting the usefulness of my favourite method of asking for student feedback on a course!

I’ve been recommending the “Continue. Start. Stop.” feedback method for years and years (at least since my 2013 blog post), though not as a research-backed method but mostly based on my positive personal experience with it. I have used this method a couple of weeks into courses I’ve been teaching, in order to improve my teaching both within the course and over the years. If there was anything that students thought would improve their learning, I wanted to be able to adapt my teaching (and also, in a follow-up discussion of the feedback, be able to address student expectations that might not have been explicit before, and that I might or might not want to meet). I like that even though it’s a qualitative method and thus fairly open, it gives students a structure along which they can write their feedback. And by asking what should be continued as well as what should be stopped and started, it’s a nice way to also get feedback on what’s already working well! But when I was asked for a reference for the method today, I didn’t really have a good answer. Then I found one: an article by Hoon et al. (2015)!

Studies on the “continue. start. stop.” feedback vs. open feedback

In the first study in the article, two different feedback methods are compared across three different courses: a free-form format and a structured format similar to “continue. start. stop.”. From this study, the authors draw pointers for changing the feedback method in the free-form course to a more structured format, and they investigate the influence of this change in a second study.

In that second study, the authors find that using structured feedback increased the depth of the feedback, and that the students liked the new form of giving feedback. They also find indications that the more specific the questions are, the more constructive the feedback is (as compared to the more descriptive texts in the open form; not necessarily more positive or negative!).

My recommendations for how to use the “continue. start. stop.” feedback

If anything, this article makes me like this feedback method even more than I did before. It’s easy and straightforward, and actually super helpful!

Use this as formative feedback!

Ask for this feedback early on in the course (maybe after a couple of weeks, when students know what to expect in your course, but with plenty of the course left to actually react to the feedback) and use the student replies to help you improve your teaching. While this method can of course also be used as summative feedback at the end of the course, how much cooler is it if students can benefit from the feedback they gave you?

Ask full questions

One thing that I might not have been clear about before when talking about the “continue. start. stop.” feedback method is that it is important to actually use the whole phrases (“In order to improve your learning in this course, please give me feedback on the following points:

  1. Continue: What is working well in this course that you would like to continue?
  2. Start: What suggestions do you have for things that could improve the course?
  3. Stop: What would you like us to stop doing?”

or similar) rather than just saying “continue. start. stop.” and assuming the students know what that means.

Leave room for additional comments

It is also helpful to provide an additional field for other comments the students might have; you never know what else they’d like to tell you if only they knew how and when to do it.

Use the feedback for several purposes at once!

In the article’s second study, a fourth question is added to the “continue. start. stop.” method, and that is asking for examples of good practice and highlights. The authors say this question was mainly included for the benefit of “external speakers who may value course feedback as evidence of their own professional development and engagement with education”, and I think that’s actually a fairly important point. While the “continue. start. stop.” feedback itself is a nice addition to any teaching portfolio, why not think specifically about the kind of things you would like to include there, and explicitly ask for them?

Give feedback on the feedback

It’s super important that you address the feedback you got with your class! Both so that they feel heard and can see whether their own perception and feedback agrees with that of their peers, and so that you have the opportunity to discuss which parts of their suggestions you are taking on, what will change as a result, and what you might not want to change (and why!). If this does not happen, students might not give you good feedback the next time you ask for it: if it didn’t have an effect last time, why would they bother doing it again?

Now it’s your turn!

Have you used the “continue. start. stop.” method? How did it work for you? Will you continue using it or how did you modify it to make it suit you better? Let me know in the comments below! :-)

Reference:

Hoon, A., Oliver, E. J., Szpakowska, K., & Newton, P. (2015). Use of the ‘Stop, Start, Continue’ method is associated with the production of constructive qualitative feedback by students in higher education. Assessment & Evaluation in Higher Education, 40(5), 755-767. [link]

Giving – and receiving – helpful feedback

For a course we recently needed to come up with guidelines for feedback on work products. This is what I suggested. Discuss! ;-)

When giving feedback, there are a few pointers that make it easier for you to give, and for the other person to receive, the feedback:

  • Use the sandwich principle: Start and end with positive remarks*
  • Be descriptive: Make sure both of you know exactly what you are talking about.
  • Be concrete: Point out exactly what you like and where you see potential for improvement.
  • Be constructive: Show options of how you might improve upon what is there.
  • Be realistic: If you are working on a tight timeline, do consider whether pointing out all issues is necessary or whether there are points that are more essential than others.
  • Don’t overdo it: Point out a pattern rather than criticizing every single occurrence of a systematic problem.
  • Point out your subjectivity: You are not an objective judge. Make sure the recipient of your feedback knows that you are giving a subjective opinion.
  • Don’t discuss: State your point, and clarify only if you are asked for clarification.
  • Don’t insist: It’s the recipient’s choice whether to accept feedback.

When receiving feedback, there are also a couple of behaviors that make it easier for the other person to give you feedback:

  • Don’t interrupt: Let them finish explaining the point they are trying to make.
  • Don’t justify: Accept their feedback on your choices or actions without trying to make them understand why you chose what you chose.
  • Ask for clarification: If in doubt, ask what they meant by what they said.
  • Take notes: Write down the important points and review them later.
  • Be appreciative: Let them know you value their feedback and are grateful they took the time to give it to you.

*edit 2.9.2022: These days, I tend to not recommend the sandwich principle any more. Instead, I really like this structure:

1: neutral acknowledgement (“I see you put a lot of effort into bringing together a lot of information!”)

2: warning of a problematic aspect (“with so many different ideas, it is not easy to find a red thread”)

3: suggesting a solution (“provide the reader with a structure by …”)

Assessing participation

One example of how to give grades for participation.

One of the most difficult tasks as a teacher is to actually assess how much people have learned, along with giving them a grade – a single number or letter (depending on where you are) that supposedly tells you all about how much they have learnt.

Ultimately, what assessment makes sense depends on your learning goals. But still it is sometimes useful to have a couple of methods at hand for when you might need them.

Today I want to talk about a pet peeve of mine: assessing participation. I don’t think it is necessarily a useful measure, but I’ve taught courses where it was a required part of the final grade.

I’ve been through all the classical ways of assessing participation. Giving a grade for participation from memory (even if you take notes right after class) opens you up to all kinds of problems. Your memory might not be as good as you thought it was. Some people say more memorable stuff than others, or say it in a more memorable way. Some people are just louder and more forward than others. No matter how objective you are (or attempt to be) – you always end up with complaints, and there is just no way to convince people (including yourself) that the grades you end up giving are fair.

An alternative approach.

So what could you do instead? One method I have read about somewhere (but cannot find the original paper any more! Similar ideas are, however, described in Maryellen Weimer’s article “Is it time to rethink how we grade participation“) is to set a number of “good” comments or questions that students should ask per day or week. Say, if a student asks 3 good questions or makes 3 good comments, this translates to a very good grade (or a maximum number of bonus points, depending on your system). 2 comments or questions still give a good grade (or some bonus points), 1 or fewer are worth less. But here is the deal: Students keep track of what they say and write it down after they’ve said it. At the end of the lesson, the day, the week or whatever period you chose, they hand you a list of their three very best questions or comments. So people who said more than three things are required to limit themselves to what they think were their three best remarks.
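
To make the bookkeeping concrete, here is a minimal sketch in Python of how such self-reported lists could translate into points. Everything in it (the cap of three contributions, the one-point-per-contribution mapping, all the names) is an illustrative assumption of mine to show the mechanics, not a prescribed system:

    # Minimal sketch of the scheme above: students hand in a short list of
    # their best contributions, and only a capped number of them count.
    MAX_COUNTED = 3  # "three very best questions or comments"

    def participation_points(contributions: list[str]) -> int:
        """Map a student's self-reported contributions to bonus points:
        3 substantial contributions -> 3 points (very good grade),
        2 -> 2 points (good grade), 1 or 0 -> fewer points."""
        return min(len(contributions), MAX_COUNTED)

    # Example: a student's list handed in at the end of the week.
    weekly_list = [
        "Asked how the method scales to larger classes",
        "Pointed out the unit error in exercise 2",
    ]
    print(participation_points(weekly_list))  # -> 2

Of course, judging whether a contribution is actually “good” still happens when you read the lists; the sketch only shows how the cap shifts the incentive from air time to quality.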

The very clear advantages are that

  • you are now looking for quality over quantity (depending on the class size, you will need to adjust the number of comments/questions you ideally want per person). This means people who always talk but don’t really say anything might not stop, but at least they aren’t encouraged to talk even more, since in the end they have to find a certain number of substantial contributions to write down, rather than making sure they have the most air time.
  • you don’t have to rely on your memory alone. Sure, when you read the comments and questions you will still need to recall whether that was actually said during class or made up afterwards, but at least you have a written document to jog your memory.
  • you have written documentation of what they contributed, so if someone wants to argue about the quality of their remarks, you can do that based on what they wrote down rather than what they think they might have meant when they said something that they recall differently from you.
  • you can choose (and then, of course, announce!) to let people also include other contributions on their lists, like very good questions they asked you in private or emailed you about, or extra projects they did on the side.

I guess in the end we need to remember that the main motive for grading participation is to enhance student engagement with the course content. And the more different ways we give them to engage – and receive credit for it – the more they are actually going to do it. Plus maybe they are already doing it and we just never knew?