Category Archives: Feedback

Assessing participation

One example of how to give grades for participation.

One of the most difficult tasks as a teacher is to actually assess how much people have learned, and then to give them a grade – a single number or letter (depending on where you are) that supposedly tells you everything about how much they have learned.

Ultimately, what assessment makes sense depends on your learning goals. Still, it is useful to have a couple of methods at hand for when you need them.

Today I want to talk about a pet peeve of mine: Assessing participation. I don’t think this is necessarily a useful measure at all, but I’ve taught courses where it was a required part of the final grade.

I’ve been through all the classical ways of assessing participation. Giving a grade for participation from memory (even if you take notes right after class) opens you up to all kinds of problems. Your memory might not be as good as you thought it was. Some people say more memorable things than others, or say them in a more memorable way. Some people are simply louder and more forward than others. No matter how objective you are (or try to be), you always end up with complaints, and there is just no way to convince people (including yourself) that the grades you end up giving are fair.

An alternative approach.

So what could you do instead? One method I have read about somewhere (I cannot find the original paper any more, but similar ideas are described in Maryellen Weimer’s article “Is it time to rethink how we grade participation”) is to set a number of “good” comments or questions that students should contribute per day or week. Say a student asking 3 good questions or making 3 good comments translates to a very good grade (or the maximum number of bonus points, depending on your system), 2 comments or questions still give a good grade (or some bonus points), and 1 or fewer are worth less. But here is the deal: students keep track of what they say and write it down right after they’ve said it. At the end of the lesson, the day, the week, or whatever period you chose, they hand you a list of their three very best questions or comments. So people who said more than three things are required to limit themselves to what they think were their three best remarks.
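The mapping above is simple enough to sketch in a few lines of code. This is just a hypothetical illustration: the function name and the verbal labels are my own, only the thresholds (3 contributions for the top grade, 2 for a good grade, 1 or fewer for less) come from the scheme described here.

```python
def participation_grade(contributions):
    """Translate a student's handed-in list of their best
    comments/questions into bonus points and a verbal label.

    Hypothetical labels; the thresholds follow the scheme
    described in the text.
    """
    # Only the three best remarks are accepted per period,
    # even if the student said more than three things.
    counted = contributions[:3]
    points = len(counted)
    labels = {3: "very good", 2: "good", 1: "some credit", 0: "no credit"}
    return points, labels[points]
```

Capping the list at three is the whole point of the method: it rewards picking out substantial contributions rather than maximizing air time.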

The clear advantages are that

  • you are now looking for quality over quantity (depending on the class size, you will need to adjust the number of comments / questions you ideally want per person). This means people who always talk but don’t really say anything might not stop, but at least they aren’t encouraged to talk even more since they will have to find a certain number of substantial contributions to write down in the end rather than make sure they have the most air time.
  • you don’t have to rely on your memory alone. Sure, when you read the comments and questions you will still need to recall whether that was actually said during class or made up afterwards, but at least you have a written document to jog your memory.
  • you have written documentation of what they contributed, so if someone wants to argue about the quality of their remarks, you can do that based on what they wrote down rather than what they think they might have meant when they said something that they recall differently from you.
  • you can choose (and then, of course, announce!) to let people also include other contributions on their lists, like very good questions they asked you in private or emailed you about, or extra projects they did on the side.

I guess in the end we need to remember that the main motive for grading participation is to enhance student engagement with the course content. And the more different ways we give them to engage – and receive credit for it – the more they are actually going to do it. Plus, maybe they are already engaging and we just never knew?

Giving feedback on student writing

When feedback is more confusing than helpful.

The other day I came across a blog post on Teaching & Learning in Higher Ed. by P. T. Corrigan on responding to student writing/writers. One point of that post hit home: contradictory teacher feedback.

When I am asked to provide feedback on my peers’ writing, I always ask them what stage of the writing process they are in and what kind of feedback they want. Are they in the copy-editing stage and want me to check for spelling and commas, or is this a first draft and they are still open to input on the way their thoughts are organized, or even on the arguments they are making? If a thesis is to be printed that same evening, I am not going to suggest major restructuring of the document. If we are talking about a first draft, I might mark a typo that catches my eye, but I won’t focus on finding every single typo in the document.

But when we give feedback to students, we often give them all the different kinds of feedback at once, leaving them to sort through it and likely sending contradictory messages in the process. Marking all the tiny details that could, and maybe should, be modified suggests that changes to the text are on a polishing level. When we suggest a completely different structure at the same time, chances are that rather than re-writing, students will just move existing blocks of text around, assuming that since we provided feedback on the typo level, those blocks are already in their final, polished form – when that might not be how we perceive the text at all.

Thinking about this now, I realize that the feedback I give on student writing not only needs to be tailored much better to its specific purpose, it also needs to come with more meta-information about which aspect of the writing my focus is on at that point in time. Giving feedback only on the structure without pointing out grammatical mistakes sends the right message only when it is made clear that the focus, right now, is solely on the structure of the document. Similarly, students need to understand that copy-editing will usually not improve the bigger framing of the document and focuses only on layout and typo-type corrections.

We’ve intuitively been doing a lot of this pretty well already. But go read Corrigan’s blog post and the literature he links to – it’s certainly worth a read!

Five finger feedback

At my new job, the quality management team regularly offers workshops that the whole team attends. One detail has come up repeatedly, and I want to present it here, too. It is a new-to-me method for asking for specific feedback: the five finger method.

For each finger of the hand, a specific question needs to be addressed. Most of the fingers are easy to remember if you imagine gestures that include that finger, and/or the meaning that finger carries in our culture.
1) The thumb. What went well?
2) The index finger. What could be improved?
3) The middle finger. What went wrong? Negative feedback.
4) The ring finger. What would we like to keep?
5) The pinkie finger. What did not get enough attention?
This method is certainly not suited for groups much larger than a dozen or so participants, especially not if everybody is asked to say something for every single finger (which we didn’t have to). But for a small group, I found it really helpful to have the visual reminder of the kind of feedback we were being asked to give, and to go through it in the presented order by just counting down the fingers on one hand.
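Because the five prompts form a fixed, ordered list, they are easy to capture as a small data structure – handy if you want to collect written answers after a workshop. A minimal sketch; the structure and the helper function are my own invention, only the five questions come from the list above.

```python
# The five prompts in finger order, as listed above.
FIVE_FINGER_PROMPTS = [
    ("thumb", "What went well?"),
    ("index finger", "What could be improved?"),
    ("middle finger", "What went wrong? (negative feedback)"),
    ("ring finger", "What would we like to keep?"),
    ("pinkie finger", "What did not get enough attention?"),
]


def feedback_round(answers):
    """Pair each finger's question with one participant's answer.

    `answers` is a list of five strings, one per finger,
    in thumb-to-pinkie order.
    """
    return [
        {"finger": finger, "question": question, "answer": answer}
        for (finger, question), answer in zip(FIVE_FINGER_PROMPTS, answers)
    ]
```

Keeping the prompts in one list also makes it trivial to print a handout or to tally answers per finger across participants.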

Continue. Stop. Start.

Quick feedback tool for your teaching, giving you concrete examples of what students would like you to continue, start or stop

This is another great tool to get feedback on your classes. In contrast to the “fun” vs “learning” graph, which gives you a cloud of “generally people seem to be happy and to have learned something”, this tool gives you much more concrete ideas of what you should continue, stop and start doing. Basically, what you do is this: you hand out sheets of paper with the three columns and ask students to give you as many details as possible for each.

“Continue” is where students list everything you do during your lectures that helps them learn and understand and that they think you should continue doing. Here, students (of classes I teach! Obviously all these examples are highly dependent on the course) typically list things like giving good presentations, asking whether they have questions, being available for questions outside of the lecture, being approachable, doing fun experiments, or letting them discuss in class.

“Stop” is for things that hinder students’ learning (or sometimes things that they find annoying, like homework or being asked to present something in class – but usually students are pretty good at realizing that, even though annoying, those things might actually be helpful). Here students might list an annoying habit of yours, or that you always say things like “as everybody knows, …” when they don’t actually know but are now too shy to say so. Students will also give you feedback on techniques that you like using but that they don’t think are appropriate for their level or group, or anything else they think is counterproductive.

“Start” is for suggestions of what you might want to add to your repertoire. I have recently been asked to give a quick overview of the next lesson’s topics at the end of the lecture, which makes perfect sense! But again, depending on what you already do in your course, you might be asked to start very different things.

In addition to helping you teach better, this feedback is also really important for students, because it makes them reflect on how they learn as individuals and how their learning might be improved. And if they realize that they aren’t getting what they need from the instructor, at least they now know what they need and can go find it somewhere else if the instructor doesn’t change his or her teaching to meet that need.

When designing the questionnaire for this, you could also make very broad suggestions of topics that might be mentioned, if you feel that might spark students’ ideas (for example presentations, textbooks, assignments, activities, social interactions, methods, discussions, quizzes, …). But be aware that giving these examples makes it more likely that you get feedback on the suggested topics, and less likely that students bring up topics you yourself had not considered.

On “fun” vs “learning”

Quick feedback tool, giving you an impression of the students’ perception of fun vs learning of a specific part of your course.

Getting feedback from a group of students on your teaching and their learning is very hard. There are tons of elaborate methods out there, but there is one very simple tool that I find gives me a quick overview: the “fun” vs “learning” graph.

This particular example is from last year’s GEOF130 “introduction to oceanography” course, when we did the first in-class experiment (which I will do with this year’s class next week, so stay tuned!). Since the group was quite big for an oceanography class at my university (36 students) and I wanted a better feel for how each of them perceived their learning through experiments than I would have gotten by just observing and asking a couple of questions, I asked them to anonymously put a cross on the graph where they felt they were located in the “fun” vs “learning” space after this experiment. And this is the result:


A “fun” vs “learning” graph filled in by students of the GEOF130 course in 2012 in response to an experiment that they conducted in pairs during a lecture.

Of course this is not a sufficient tool to evaluate a whole semester or course, but I can really recommend it for a quick overview!