Tag Archives: feedback

“Continue. Start. Stop.”: an article supporting the usefulness of my favourite method of asking for student feedback on a course!

I’ve been recommending the “Continue. Start. Stop.” feedback method for years and years (at least since my 2013 blog post), though not as a research-backed method but mostly based on my positive personal experience with it. I have used this method to get feedback on courses I’ve been teaching a couple of weeks into the course, in order to improve my teaching both within the course and over the years. If there was anything that students thought would improve their learning, I wanted to be able to adapt my teaching (and also, in a follow-up discussion of the feedback, to address student expectations that might not have been explicit before, which I might or might not want to follow). I like that even though it’s a qualitative and thus fairly open method, it gives students a structure along which they can write their feedback. And by asking what should be continued as well as stopped and started, it’s a nice way to get feedback on what’s already working well, too! But when I was asked for a reference for the method today, I didn’t really have a good answer. Then I found one: an article by Hoon et al. (2015)!

Studies comparing “continue. start. stop.” feedback with open feedback

In the first study in the article, two different feedback methods are compared across three different courses: free-form feedback and a structured format similar to “continue. start. stop.”. From this study, the authors draw pointers for switching the free-form course to a more structured feedback format. They investigate the influence of this change in a second study.

In that second study, the authors find that using structured feedback increased the depth of feedback, and that the students liked the new form of giving feedback. They also find indications that the more specific the questions are, the more constructive the feedback is (as compared to the more descriptive texts in the open form; not necessarily more positive or negative!).

My recommendations for how to use the “continue. start. stop.” feedback

If anything, this article makes me like this feedback method even more than I did before. It’s easy and straightforward and actually super helpful!

Use this as formative feedback!

Ask for this feedback early on in the course (maybe after a couple of weeks, when students know what to expect in your course, but with plenty of the course left to actually react to the feedback) and use the student replies to help you improve your teaching. While this method can of course also be used as summative feedback at the end of the course, how much cooler is it if students can benefit from the feedback they gave you?

Ask full questions

One thing that I might not have been clear about before when talking about the “continue. start. stop.” feedback method is that it is important to actually spell out the full questions (“In order to improve your learning in this course, please give me feedback on the following points:

  1. Continue: What is working well in this course that you would like to continue?
  2. Start: What suggestions do you have for things that could improve the course?
  3. Stop: What would you like us to stop doing?”

or similar) rather than just saying “continue. start. stop.” and assuming the students know what that means.

Leave room for additional comments

It is also helpful to provide an additional field for any other comments the students might have; you never know what else they’d like to tell you if only they knew how and when to do it.

Use the feedback for several purposes at once!

In the article’s second study, a fourth question is added to the “continue. start. stop.” method, and that is asking for examples of good practice and highlights. The authors say this question was mainly included for the benefit of “external speakers who may value course feedback as evidence of their own professional development and engagement with education”, and I think that’s actually a fairly important point. While the “continue. start. stop.” feedback itself is a nice addition to any teaching portfolio, why not think specifically about the kind of things you would like to include there, and explicitly ask for them?

Give feedback on the feedback

It’s super important that you address the feedback you got with your class! Both so that they feel heard and can see whether their own perception and feedback agrees with that of their peers, and so that you have the opportunity to discuss which of their suggestions you are taking on, what will change as a result, and what you might not want to change (and why!). If this doesn’t happen, students might not give you good feedback the next time you ask: since it didn’t have an effect last time, why would they bother again?

Now it’s your turn!

Have you used the “continue. start. stop.” method? How did it work for you? Will you continue using it or how did you modify it to make it suit you better? Let me know in the comments below! :-)


Hoon, A., Oliver, E. J., Szpakowska, K., & Newton, P. (2015). ‘Use of the “Stop, Start, Continue” method is associated with the production of constructive qualitative feedback by students in higher education.’ Assessment and Evaluation in Higher Education, 40(5), pp. 755–767. [link]

Giving – and receiving – helpful feedback

For a course we recently needed to come up with guidelines for feedback on work products. This is what I suggested. Discuss! ;-)


When giving feedback, there are a few pointers that make it easier for you to give, and for the other person to receive, feedback:

  • Use the sandwich-principle: Start and end with positive remarks.
  • Be descriptive: Make sure both of you know exactly what you are talking about.
  • Be concrete: Point out exactly what you like and where you see potential for improvement.
  • Be constructive: Show options for how to improve upon what is there.
  • Be realistic: If you are working on a tight timeline, do consider whether pointing out all issues is necessary or whether there are points that are more essential than others.
  • Don’t overdo it: Point out a pattern rather than criticizing every single occurrence of a systematic problem.
  • Point out your subjectivity: You are not an objective judge. Make sure the recipient of your feedback knows that you are giving a subjective opinion.
  • Don’t discuss: State your point, and elaborate only if you are asked for clarification.
  • Don’t insist: It’s the recipient’s choice whether to accept feedback.


When receiving feedback, there are also a couple of behaviors that make it easier for the other person to give you feedback:

  • Don’t interrupt: Let them finish explaining the point they are trying to make.
  • Don’t justify: Accept their feedback on your choices or actions without trying to make them understand why you chose what you chose.
  • Ask for clarification: If in doubt, ask what they meant by what they said.
  • Take notes: Write down the important points and review them later.
  • Be appreciative: Let them know you value their feedback and are grateful they took the time to give it to you.




Assessing participation

One example of how to give grades for participation.

One of the most difficult tasks as a teacher is to assess how much people have learned, let alone give them a grade – a single number or letter (depending on where you are) that supposedly tells you all about how much they have learned.

Ultimately, what assessment makes sense depends on your learning goals. But still it is sometimes useful to have a couple of methods at hand for when you might need them.

Today I want to talk about a pet peeve of mine: assessing participation. I don’t think it is a particularly useful measure, but I’ve taught courses where it was a required part of the final grade.

I’ve been through all the classical ways of assessing participation. Giving a grade for participation from memory (even if you take notes right after class) opens you up to all kinds of problems. Your memory might not be as good as you thought it was. Some people say more memorable things than others, or say them in a more memorable way. Some people are just louder and more forward than others. No matter how objective you are (or attempt to be), you always end up with complaints, and there is just no way to convince people (including yourself) that the grades you end up giving are fair.

An alternative approach

So what could you do instead? One method I have read about somewhere (but cannot find the original paper any more! Similar ideas are described in Maryellen Weimer’s article “Is it time to rethink how we grade participation”) is to set a number of “good” comments or questions that students should contribute per day or week. Say, if a student asks 3 good questions or makes 3 good comments, this translates to a very good grade (or the maximum number of bonus points, depending on your system). 2 comments or questions still give a good grade (or some bonus points), 1 or fewer are worth less. But here is the deal: students keep track of what they say and write it down after they’ve said it. At the end of the lesson, the day, the week, or whatever period you chose, they hand you a list of their three very best questions or comments. So people who said more than three things are required to limit themselves to what they think were their three best remarks.

The very clear advantage is that

  • you are now looking for quality over quantity (depending on the class size, you will need to adjust the number of comments / questions you ideally want per person). This means people who always talk but don’t really say anything might not stop, but at least they aren’t encouraged to talk even more since they will have to find a certain number of substantial contributions to write down in the end rather than make sure they have the most air time.
  • you don’t have to rely on your memory alone. Sure, when you read the comments and questions you will still need to recall whether that was actually said during class or made up afterwards, but at least you have a written document to jog your memory.
  • you have written documentation of what they contributed, so if someone wants to argue about the quality of their remarks, you can do that based on what they wrote down rather than what they think they might have meant when they said something that they recall differently from you.
  • you can choose (and then, of course, announce!) to let people also include other contributions on their lists, like very good questions they asked you in private or emailed you about, or extra projects they did on the side.

I guess in the end we need to remember that the main motive for grading participation is to enhance student engagement with the course content. And the more ways we give them to engage – and receive credit for it – the more they are actually going to do it. Plus maybe they are already engaging and we just never knew?
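For those who like to tally this in a script or spreadsheet, the counting scheme described above could be sketched like this (the concrete grade labels and the target of three contributions are my own illustrative assumptions, not part of the original method):

```python
def participation_grade(num_good_contributions: int, target: int = 3) -> str:
    """Translate the number of substantial contributions a student
    handed in, capped at the target, into a grade label.

    The target of 3 follows the example in the text; the grade labels
    themselves are illustrative assumptions.
    """
    labels = {3: "very good", 2: "good", 1: "okay", 0: "no credit"}
    # Students who contributed more than the target only hand in
    # their best `target` remarks, so the count is capped.
    return labels[min(num_good_contributions, target)]

# A student who said five things is still capped at their three best:
print(participation_grade(5))  # very good
print(participation_grade(2))  # good
```

The cap is the point of the method: extra air time beyond the target earns nothing, so quality matters more than quantity.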

Giving feedback on student writing

When feedback is more confusing than helpful.

The other day I came across a blog post on Teaching & Learning in Higher Ed. on responding to student writing/writers by P. T. Corrigan. And one point of that post struck home, and that point is on contradictory teacher feedback.

When I am asked to provide feedback on my peers’ writing, I always ask them what stage of the writing process they are in and what kind of feedback they want. Are they in the copy-editing stage and want me to check for spelling and commas, or is this a first draft and they are still open to input on the way their thoughts are organized, or even on the arguments they are making? If a thesis is to be printed that same evening, I am not going to suggest major restructuring of the document. If we are talking about a first draft, I might mark a typo that catches my eye, but I won’t focus on finding every single typo in the document.

But when we give feedback to students, we often give them all the different kinds of feedback at once, leaving them to sort through the feedback and likely sending contradictory messages in the process. Marking all the tiny details that could, and maybe should, be modified suggests that changes to the text are on a polishing level. When we suggest a completely different structure at the same time, chances are that rather than re-writing, students will just move existing blocks of text, assuming that since we provided feedback on a typo-level, those blocks of text are in their final, polished form already when that might not be how we perceive the text.

Thinking about this now, I realize that the feedback I give on student writing not only needs to be tailored much better to its specific purpose, it also needs to come with more meta-information about which aspect of the writing my focus is on at that point in time. Giving feedback only on the structure without pointing out grammatical mistakes sends the right message only when it is made clear that the focus, right now, is solely on the structure of the document. Similarly, students need to understand that copy-editing focuses only on layout and typo-type corrections and will usually not improve the bigger framing of the document.

We’ve intuitively been doing a lot of this pretty well already. But go read Corrigan’s blog post and the literature he links to – it’s certainly worth a read!

Five finger feedback

At my new job the quality management team regularly offers workshops that the whole team attends. One detail has repeatedly come up and I want to present it here, too. It is a new-to-me method to ask for specific feedback: The five finger method.

For each finger of the hand, a specific question needs to be addressed. Many of the fingers are easy to remember if you imagine gestures that would include that finger, and/or the meaning that that finger carries in our culture.
1) The thumb. What went well?
2) The index finger. What could be improved?
3) The middle finger. What went wrong? Negative feedback.
4) The ring finger. What would we like to keep?
5) The pinkie finger. What did not get enough attention?
This method is certainly not suited for groups much larger than a dozen or so participants, especially if everybody were asked to say something for every single finger (which we didn’t have to). But for a small group, I found it really helpful to have the visual reminder of the kind of feedback we were being asked to give, and to go through it in the order presented by simply counting down the fingers on one hand.

Continue. Stop. Start.

Quick feedback tool for your teaching, giving you concrete examples of what students would like you to continue, start or stop

This is another great tool to get feedback on your classes. In contrast to the “fun” vs “learning” graph, which gives you a cloud of “generally people seem to be happy and to have learned something”, this tool gives you much more concrete ideas of what you should continue, stop and start doing. Basically what you do is this: you hand out sheets of paper with the three columns and ask students to give you as many details as possible for each.

“Continue” is where students list everything that you do during your lectures that helps them learn and understand, and that they think you should continue doing. Here students (of classes I teach! Obviously all these examples are highly dependent on the course) typically list things like: you give good presentations, ask whether they have questions, are available for questions outside of the lecture, are approachable, do fun experiments, let them discuss in class – that kind of thing.

“Stop” is for things that hinder students’ learning (or sometimes things that they find annoying, like homework or being asked to present something in class, but usually students are pretty good about realizing that, even though annoying, those things might actually be helpful). Here students might list an annoying habit of yours, or that you always say things like “as everybody knows, …” when they don’t actually know but are now too shy to say so. Students will also give you feedback on techniques that you like using but that they don’t think are appropriate for their level or group, or anything else they think is counterproductive.

“Start” is for suggestions of what you might want to add to your repertoire. I have recently been asked to give a quick overview of the next lesson’s topics at the end of the lecture, which makes perfect sense! But again, depending on what you do in your course already, you might be asked to start very different things.

In addition to helping you teach better, this feedback is also really important for students, because it makes them reflect on how they learn as individuals and how their learning might be improved. And if they realize that they aren’t getting what they need from the instructor, at least they now know what they need and can go find it somewhere else if the instructor doesn’t change his/her teaching to meet that need.

When designing the questionnaire for this, you could also make very broad suggestions of topics that might be mentioned (for example presentations, textbooks, assignments, activities, social interactions, methods, discussions, quizzes, …) if you feel that might spark students’ ideas. But be aware that giving these examples makes it more likely that you get feedback on the suggested topics, and less likely that students will bring up topics that you yourself had not considered.

On “fun” vs “learning”

Quick feedback tool, giving you an impression of the students’ perception of fun vs learning of a specific part of your course.

Getting feedback on your teaching and their learning from a group of students is very hard. There are tons of elaborate methods out there, but there is one very simple tool that I find gives me a quick overview: The “fun” vs “learning” graph.

This particular example is from last year’s GEOF130 “introduction to oceanography”, when we did the first in-class experiment (which I will do with this year’s class next week, so stay tuned!). Since the group was quite big for an oceanography class at my university (36 students), and I wanted to get a better feel for how each of them perceived their learning through experiments than I would have gotten by just observing and asking a couple of questions, I asked them to anonymously put a cross on the graph where they felt they were located in the “fun” vs “learning” space after this experiment. And this is the result:


A “fun” vs “learning” graph filled in by students of the GEOF130 course in 2012 in response to an experiment that they conducted in pairs during a lecture.

Of course this is not a sufficient tool to evaluate a whole semester or course, but I can really recommend it for a quick overview!