Tag Archives: rubric

Currently reading: “Teaching with rubrics: the good, the bad, and the ugly” (Andrade, 2005)

Doing my reading for the monthly iEarth journal club… Thanks for suggesting yet another interesting article, Kirsty! This one is “Teaching with rubrics: the good, the bad, and the ugly” (Andrade, 2005) — a great introduction on how to work with rubrics (and only 2.5 pages of entertaining, easy-to-read text, plus an example rubric). My summary of the article:


Using peer feedback to improve students’ writing (Currently reading Huisman et al., 2019)

I wrote about involving students in creating assessment criteria and quality definitions for their own learning on Thursday, and today I want to think a bit about involving students in the feedback process as well, based on an article by Huisman et al. (2019) on “The impact of formative peer feedback on higher education students’ academic writing: a Meta-Analysis”. The article brings together the available literature on peer feedback specifically on academic writing, and it turns out that across all studies, peer feedback does improve student writing. Here is what that might mean for our own teaching:

Peer feedback is as good as teacher feedback

Great news (actually, not so new; there are many studies showing this!): Students can give each other feedback of comparable quality to what teachers give them!

Even though a teacher is likely to have more expert knowledge, which might make their feedback more credible to some students (those with a strong trust in authorities), peer feedback might be more relevant to other students, and there is no systematic difference between improvement after peer feedback and improvement after feedback from teaching staff. One way to alleviate fears about the quality of peer feedback is to use it purely (or mostly) formatively, while the teacher still does the summative assessment themselves.

Peer feedback is good for both giver and receiver

If we as teachers “use” students to provide feedback to other students, it might seem like we are pushing part of our job onto the students. But: Peer feedback improves writing both for the students giving it and for the ones receiving it! Giving feedback means actively engaging with the quality criteria, which might improve the giver’s own future writing, and doing peer feedback actually improves future writing more than just doing self-assessment. This might be, for example, because students, both as feedback givers and receivers, are exposed to different perspectives on and approaches towards the content. So there is actual benefit to student learning in giving peer feedback!

It doesn’t hurt to get feedback from more than one peer

Thinking about the logistics in a classroom, one question is whether students should receive feedback from one or multiple peers. It turns out that the literature shows no significant difference. But my gut feeling says that getting feedback from multiple peers creates redundancy in case the quality of one piece of feedback is really low, or the feedback isn’t given at all. And since students also benefit from giving peer feedback, I see no harm in having students give feedback to multiple peers.

A combination of grading and free-text feedback is best

So what kind of feedback should students give? For students receiving peer feedback, a combination of grading/ranking and free-text comments has the maximum effect, probably because it shows how current performance relates to ideal performance, and also gives concrete advice on how to close the gap. For students giving feedback, I would speculate that a combination of both is also the most useful, because they then need to commit to a quality assessment, give reasons for their assessment, and think about what would actually improve the piece they read.

So based on the Huisman et al. (2019) study, let’s have students do a lot of formative assessment on each other*, both rating and commenting on each other’s work! And to make it easier for the students, remember to give them good rubrics (or let them create those rubrics themselves)!

Are you using student peer feedback already? What are your experiences?

*The Huisman et al. (2019) study was actually only on peer feedback on academic writing, but I’ve seen studies using peer feedback on other types of tasks with similar results, and I also don’t see why there would be other mechanisms at play when students give each other feedback on things other than their academic writing…


Huisman, B., Saab, N., van den Broek, P., & van Driel, J. (2019). The impact of formative peer feedback on higher education students’ academic writing: a meta-analysis. Assessment & Evaluation in Higher Education, 44(6), 863–880. DOI: 10.1080/02602938.2018.1545896

Co-creating rubrics? Currently reading Fraile et al. (2017)

I’ve been a fan of using rubrics — tables that contain assessment criteria and a scale of quality definitions for each — not just in a summative way to determine grades, but in a formative way to engage students in thinking about learning outcomes and how they would know when they’ve reached them. Kjersti has even negotiated rubrics with her class, which she describes and discusses here. And now I read an article on “Co-creating rubrics: The effects on self-regulated learning, self-efficacy and performance of establishing assessment criteria with students” by Fraile et al. (2017), which I will summarise below.

Fraile et al. (2017) make the argument that, while rubrics are great for (inter-)rater reliability and many other reasons, students easily perceive them as external constraints that dampen their motivation and might lead to shallow approaches to learning, rather than as help for self-regulated deep learning. But if students were involved in creating the rubric, they might feel empowered and more autonomous, because they are now setting their own goals and monitoring their performance against those, and thus use the rubric in ways that actually support their learning.

This argument is then tested in a study on sports students, where a treatment group co-creates the rubrics, whereas a control group uses those same rubrics afterwards. Co-creating the rubric meant that after an introduction to the content by the teacher, students listed criteria for the activity and then discussed them in small groups. The criteria were then collected, clustered, and reduced down to about eight. For each of those, students, in changing groups, produced two extreme quality definitions. Finally, the teacher compiled everything into a rubric and got final approval from the class.

So what happened? All the arguments above sounded convincing; however, the results of the study are not as clear-cut as one might have hoped. Maybe the intervention wasn’t long enough, or the group of students was too small for results to reach significance? But what does come out is that in think-aloud protocols, the students who co-created the rubrics reported more self-regulated learning. They also performed better on some of the assessed tasks. And they reported more positive perceptions of rubrics, especially regarding transparency and understanding of criteria.

What do we learn from this study? At least that all indications are that co-creating rubrics might be beneficial to student learning, and that no drawbacks came to light. So it seems to be a good practice to adopt, especially when we are hoping for benefits beyond what was measured here, for example in terms of students feeling ownership of their own learning.


Fraile, J., Panadero, E., & Pardo, R. (2017). Co-creating rubrics: The effects on self-regulated learning, self-efficacy and performance of establishing assessment criteria with students. Studies in Educational Evaluation, 53, 69-76.

Using rubrics

I’ve been a fan of working with rubrics for a long time, but somehow I don’t seem to have blogged about it. So here we go!

Rubrics are basically tables of learning outcomes: the rows give the different criteria that are to be assessed, and the columns describe performance at (typically three) different levels. Below, I’ll talk about the benefits that working with rubrics has for both teachers and students, and give two concrete examples of how we used them and why that was helpful.

Rubrics are a great tool for teachers

  1. Designing a rubric makes you really think long and hard about what it is that you want students to be able to demonstrate for the different criteria, and how you would distinguish an ok performance from a good performance for each criterion.
  2. Once the rubric is set up, grading becomes a lot easier. Instead of having to think about how well any given response answers your question, it’s now basically about putting crosses in the relevant cells matching the performance you see in front of you (a toy sketch of this follows after this list).
  3. This makes it a lot easier when many people are involved in grading: the dreaded “but x got a point for y and I didn’t!” discussions become much rarer, because grading is now a lot more objective.
  4. Giving feedback also becomes a lot easier, since all the performance descriptions are already there and it’s now basically about copy&paste (or even sharing the crossed-through rubric) to show “this is where you are at” and “this is what I was expecting”.
  5. It also helps in course planning…
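
To make points 2 and 4 above concrete, here is a toy sketch of how a rubric, once written down, turns grading into ticking cells, and turns those ticks directly into feedback text. The criteria and level descriptions are invented for illustration, not taken from any real rubric:

```python
# Toy rubric: for each criterion, descriptions of three performance levels
# (0 = weakest, 2 = strongest). All wording invented for illustration.
rubric = {
    "observations": [
        "no observations described",
        "observations described, but mixed with interpretation",
        "observations clearly described and separated from interpretation",
    ],
    "method": [
        "method not stated",
        "method stated, but without rationale",
        "method stated with a clear rationale",
    ],
}

def feedback(ticks):
    """Turn the level ticked for each criterion into ready-made feedback lines."""
    return "\n".join(
        f"{criterion}: {rubric[criterion][level]}" for criterion, level in ticks.items()
    )

# Grading one submission is now just a matter of which cell was ticked per criterion.
print(feedback({"observations": 2, "method": 1}))
```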

One example of where I was really glad we had a rubric is the project that Torge and I collaborated on: We bought four cheap setups for rotating tank experiments and designed a course around making otherwise really unintuitive and difficult-to-observe concepts not only visible, but manipulable, in order to gain a deeper understanding.

We had written down a rubric pre-corona, but when we went into lockdown in March 2020, having the rubric helped us a lot in quickly figuring out how to transfer a very hands-on course online. Since we had clearly identified the learning outcomes, it became very easy to think of alternative ways to teach them virtually. The figure above shows part of the rubric, and circled in red is the only learning outcome in that selection (of a lesson that we thought was all about the hands-on experience!) that could not be taught just as well virtually. But looking closely at the rubric, we realised that the students did not necessarily need to do the rotating experiments themselves, as long as they were doing some kind of experiment to practice conducting experiments following lab instructions.

With the rubric, we had a checklist of “this is what they need to be able to do at the end of class” to directly convert into activities. We ended up with me showing the rotating experiments from my kitchen, while the students were doing non-rotating experiments, using only readily available household items, from their homes. Without the very explicit learning outcomes in our rubric, converting the course would probably have been a lot more difficult.

Rubrics are also great for students

  1. They get a comprehensive overview of what the instructor actually expects from them.
  2. They can use the rubric to make sure they “tick all the boxes”, or strategically decide where to put their time and effort
  3. Instructor feedback is now a lot more helpful than “2 out of 5 points”.

Kjersti shares an example of how she “negotiated” a rubric in her GEOF105 class, co-creating it with her students:

The goal is to invite students to negotiate an assessment rubric for written assignments. We have tested this out in the following way:

  • The teacher drafted a rubric and assigned an equal weighting of 5 points to each assessment criterion (15 criteria gave a total score of 75 points).
  • The students voted anonymously for which criteria they wanted to assign a stronger weighting. We set no limit on how many criteria each student could vote for.
  • The votes were counted up, and the remaining 25 points in the assessment were distributed based on the number of votes for each criterion (a small sketch of this calculation follows below).
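
To make the arithmetic concrete: the post does not say exactly how votes were converted into points, so this is a minimal sketch of one plausible method, a proportional split with largest-remainder rounding so that the 25 points add up exactly. The vote counts and criterion names are invented for illustration:

```python
# Invented vote counts per criterion (for illustration only).
votes = {"structure": 12, "reflection": 10, "hypotheses": 7, "illustrations": 5}

def distribute(votes, pool=25):
    """Split `pool` extra points proportionally to votes, rounding so they sum exactly."""
    total = sum(votes.values())
    shares = {c: pool * v / total for c, v in votes.items()}  # exact proportional shares
    points = {c: int(s) for c, s in shares.items()}           # floor each share
    leftover = pool - sum(points.values())
    # Hand the leftover points to the criteria with the largest remainders.
    for c in sorted(shares, key=lambda c: shares[c] - points[c], reverse=True)[:leftover]:
        points[c] += 1
    return points

print(distribute(votes))
# {'structure': 9, 'reflection': 7, 'hypotheses': 5, 'illustrations': 4}
```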

The two criteria most students voted to weight more strongly were the structure of the lab report and the reflection part. I suspect they wanted more points for the structure partly because it is not too difficult, but also because they spend much time figuring out how a lab report should look. I also found it interesting that they wanted more points for reflection. Last year we asked the students to write a reflection paragraph that would not be assessed, because we thought it would be stressful for the students to write the reflection knowing it would be evaluated. But I guess we were wrong!

They also wanted more points for making/discussing hypotheses, using good illustrations, and relating the experiment tank to the Earth’s geometry, all of which are objectively difficult parts of the lab report.

We found two main results after using the negotiated rubric:

  1. The students (on average) achieved higher scores than the previous year (where the rubric was fixed)
  2. The students made fewer complaints about the assignment score

We think the students achieved higher scores because they spent more time getting acquainted with the rubric before writing their assignments and could use it more constructively as a checklist.

So those are our experiences with using rubrics. How about you? We’d love to hear from you!

Letting students choose the format of their assessment

I just love giving students choice: It instantly makes them more motivated and engaged! Especially when it comes to big and important tasks like assessments. One thing that I have great experience with is letting students choose the format of a cruise or lab report. After all, if writing a classical lab report isn’t a learning outcome in itself, why restrict their creativity and have them create in a format that is — let’s be honest — super boring to read for the instructor?

I have previously given students the choice between a blog post, an Instagram post, and tweets, but would next time open it up to include other formats like TikTok videos or podcasts, or even any social media format they like. What I did was give them the choice of format, and then also the choice of actually publishing it (on either a platform that I provided, or on one they organized themselves), or “just” submitting something that could have been posted on one of those platforms but ended up only visible to me and the class.

So how do we then make sure that the different formats all have the same level of “difficulty”, so that it is a fair assignment? This is where rubrics come in. Your rubric might assess several categories, first and foremost the ones directly related to your learning outcomes. In the case of a lab report, that means things like: Is the experimental setup described correctly? Does it become clear why an experiment is being performed and how it is done? Are observations clearly described and results discussed? All of these things can be done equally well in a Twitter thread and in a blog post.

If you are so inclined and it is part of your learning outcomes, you might also evaluate whether the social media channel is used well (see the example evaluation rubric for Instagram posts further down).

And lastly, you could require a reflection document in which students discuss whether they addressed the different points from the rubric, and where they have the chance to justify, for example, why they did not include certain aspects in the social media post itself but provided the additional information in that document instead (for example, if you would like to see the data in a table, that might not be easy to include in a podcast). Requiring this document has at least two positive effects: making sure the students actually engage with the rubric, and levelling the playing field by giving everybody the opportunity to elaborate on things that weren’t so easily implemented in their chosen format.

If you want to make sure that students really feel it’s all fair, you could even negotiate the rubric with them, so they can up- or downvote whichever aspects they feel should count for more or less.

What do you think, would you give your students such choices? Or do you even have experience with it? We’d love to hear from you!

Negotiating a rubric of learning outcomes and letting students pick the format in which they show they’ve mastered the learning outcomes

I’m still inspired by Cathy’s work on “co-creation”, and an episode of “Lecture Breakers” (I think the first one on student engagement techniques where they talked about letting students choose the format of the artefact they do for assessment purposes; but I binge-listened, and honestly, they are all inspiring!). And something that Sam recently said stuck with me — sometimes the teacher and the students just have “to play the game”. Assessment is something that needs to happen, and there are certain rules around it that need to be followed, but there are also a lot of things that can be negotiated to come to a consensus that works for everybody. So, as a teacher, just be open about your role in the game and the rules you yourself are bound by and the ones you are open to negotiate, and then start discussing! Anyway, the combination of those three inputs gave me an idea that I would like your feedback on.

Consider you want to teach a certain topic. Traditionally, you would ask students to do a certain activity. You have specific learning outcomes you want your students to reach, and you would evaluate whether or not they reach those outcomes by asking a certain set of questions to see whether they answer them correctly, or maybe by asking them to produce an artefact like an essay or a lab report. And that would be it.

But now consider you tell students that there is this specific topic you want to teach (and why you want to teach it, how it relates to the bigger picture of the discipline, and what makes it relevant. Or you could even ask them to figure that out themselves!) and that they will be free to produce any kind of artefact or performance they want for the assessment. Now you could share your learning outcomes and tell them which learning outcomes matter most to you, and why. And then you could start discussing. Do students agree with the relative importance of the learning outcomes that you show in the way you are weighting them? Are there other learning outcomes that they see as relevant that you did not include (yet)?

Once that is settled (possibly by voting, or maybe by coming to a consensus in a discussion, depending on your group and your relationship with them; and of course you can set boundary conditions, for example that some learning outcomes need to count for at least, or at most, a certain share), you are ready for the next important discussion: How could students show that they have mastered a learning outcome? What kind of evidence would they have to produce? What would count as having met the outcome, and what would still count as “good enough”?

Now that it’s clear what the learning outcomes are and what they mean in terms of specific skills that will need to be demonstrated, you could let students add one learning outcome that they define themselves and that is related to the format of the artefact that they want to produce (possibly public speaking with confidence when presenting the product, learning to use some software for visualisation, or analysing a dataset they found themselves rather than one you provided, …). You could have already reserved 10% (or however much you think that skill should “be worth”) for this in the rubric, or negotiate that share with the students.

While negotiating learning outcomes, students will already have needed to think about how each learning outcome will become visible in their chosen way of presentation, and this should be talked through with you beforehand and/or documented in a meta document, so that a very artistic presentation does not obscure that actual learning has taken place.

How much fun would it be if people could choose to give a talk, do a short video, present a poster, design an infographic, rhyme a science poem, or whatever else they might like? I imagine it would be super motivating. Plus it would help students build a portfolio that shows the subject-specific skills acquired in our class alongside other skills that they think are fun or important to develop. And maybe some artefacts could be used in science communication, engaging other people by hooking them via a format they are interested in, so that maybe they also get interested in the content? I’ve seen hugely creative ideas when we asked students to write blog posts about phenomena we had investigated in the rotating DIYnamics tanks, like a Romeo-and-Juliet-type short novel on two water drops, or an amazing comic, and there they were confined to writing. What if they could also choose to make objects like my pocket wave watching guide, or to perform a play?

I guess it could be overwhelming when the content is very difficult, the task is very big, and students then also have to consider how to show that they learned it, in a way that isn’t pre-determined. Also timing might be important here so this task does not happen at the same time as other deadlines or exams. And obviously when you suggest this to your students, they might still all want to pick the same, or at least a traditional, format, and you would have to be ok with this if you take them seriously in these negotiations. What do you think? What should we consider and look out for when trying to implement something like this?

Evaluation rubric for Instagram posts (in scicomm and/or science classes!)

Social media is a great tool in science communication, so learning how to use it well is helpful not only for people who self-identify as science communicators, but also for scientists and scientists-to-be.

Teaching social media science communication skills

I’ve explained why I think that this is generally a good idea in our recent virtual poster, but here is an even more recent example of how well it can work: In early April, Prof. Tessa M Hill encouraged her class at UC Davis to do kitchen oceanography experiments and post pictures or videos on the internet. Her student Robert Dellinger posted a video of an overturning circulation on Twitter that got me super excited (and he kindly agreed to write this guest post on it), and as of now, April 16th, it has 70 retweets and 309 likes. That’s an incredible reach! And if you think it’s just a lucky one-off, another student from that class, Linnea Byrd, posted pictures on Instagram which got 276 likes. This might be partly due to a beautiful cover picture and an account with a high following in the first place, but that’s still a lot of people exposed to kitchen oceanography. Both are definitely examples of very successful scicomm!

Talking with Prof. Kerstin Kremer in preparation for a recent science communication course I taught at her university, I decided that I wanted to set up an “evaluation rubric” that can be used for two purposes: as a tool in teaching, and to evaluate social media posts.

Making expectations transparent to students

When teaching about the use of social media in science communication, there is a fine balance between, on the one hand, a lot of information on what works and what doesn’t (aka “the rules”), and, on the other hand, the fact that what works best is often when those exact rules are purposefully and skillfully broken. But in order to do that, I believe one first needs to know “the rules”, and the rubric below gives a structured overview that can be used as guidelines when creating an Instagram post.

Grading the students’ Instagram posts

For some classes, Instagram posts are created as artefacts that contribute to the course grade. In those cases, it is very important to be very clear about what the learning outcomes are and how they will be evaluated; especially if the posts are evaluated by someone who did not teach the class themselves. For this, the rubric below might be helpful.

Evaluation rubric for the scicomm aspect of Instagram posts

Please note:

  • This rubric is an example only and needs to be adapted and/or expanded to match your class’s learning outcomes. Here, the focus is exclusively on the use of Instagram as a communication tool. For examples of how to expand this rubric for use in different contexts, see below.
  • If points are awarded for each category listed below, they should obviously not be weighted equally when calculating a grade, but prioritized according to the class’s learning outcomes (a small sketch of such a weighted calculation follows after these notes).
  • I’ve only formulated the end points; obviously this could be expanded to explicitly name intermediate quality levels if that makes grading easier for you; I just wanted to put up a general framework.
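
As a small sketch of the weighting mentioned in the second note: if each criterion is rated on a scale from “not good” to “very good” (say 0 to 2), a weighted total could be computed as below. The weights, criteria, and rating scale here are invented for illustration; in practice they should mirror your class’s learning outcomes.

```python
# Invented weights (summing to 1) and ratings on a 0-2 scale ("not good" to "very good").
weights = {"purpose": 0.3, "background": 0.2, "hashtags": 0.1, "picture": 0.4}
ratings = {"purpose": 2, "background": 1, "hashtags": 2, "picture": 1}

MAX_RATING = 2
score = sum(weights[c] * ratings[c] / MAX_RATING for c in weights)
print(f"Weighted score: {score:.0%}")  # Weighted score: 70%
```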

The basic rubric is structured into four categories: the caption and comments of a post, tags of other accounts, hashtags, and the picture.

Caption / Comments (from “not good” to “very good”)

  • Purpose. Not good: Post does not fit in the usual context of the account and its target group; no context is given for why it is posted on that account. Very good: It becomes clear why the post is published on a given account for its target group, either because it fits right in, or because contextual information is given.
  • Background. Not good: The post cannot be understood without pre-existing background knowledge. Very good: All relevant background information is supplied.
  • Structure. Not good: No structure obvious. Very good: The text follows an obvious structure (hero’s journey, chronological, pro/con, facts/discussion, …).
  • Comments. Not good: Caption breaks off in the middle of a sentence and continues in a comment without any explanation linking the two. Very good: If the text is too long for the main caption, there is a note at the end of the main caption pointing out that the text continues in a comment below.
  • Jargon. Not good: A lot of jargon in a text for kids, or too imprecise language for highly specialized/educated readers. Very good: Choice of terminology appropriate for the target group.
  • Sentence length. Not good: Only 3-word sentences, or one sentence for the whole paragraph. Very good: Good readability because of appropriate sentence length.
  • Spelling and grammar. Not good: Seems like the post has not been proofread. Very good: Correct spelling and grammar.
  • Outlook. Not good: Post “just ends”. Very good: The reader is given a “next step”: link to further reading, key word to google, invitation to follow, call to action, …
  • Emojis. Not good: Way too many, or unrelated to the topic. Very good: Appropriately used for the target audience and topic.

Tags of other accounts (from “not good” to “very good”)

  • Fit. Not good: Way too many, and for no apparent reason. Very good: Relevant accounts are tagged (e.g. photographer of the picture, institution that did the research, people that were involved in the project, people shown in the picture, …).

Hashtags (from “not good” to “very good”)

  • Number. Not good: None, or way too many. Very good: 3-11.
  • Fit. Not good: No relation of hashtags to content of post, or bad fit. Very good: Hashtags describe the content of the post well and enable potentially interested audiences to find it.
  • Language. Not good: Hashtags in random languages. Very good: Language matches the language of the post or complements it in a useful way (e.g. an English post with English hashtags additionally uses German technical terms as hashtags to point to scicomm at a German institution).

Picture (from “not good” to “very good”)

  • Best practice. Not good: Picture does not follow best-practice recommendations. Very good: Picture follows best-practice recommendations, e.g. no polar bears to raise awareness for climate change, careful with protest imagery, causes shown at scale, … (for climate communication practices, see climatevisuals.org).
  • Fit. Not good: Picture unrelated to content of post. Very good: The picture contributes information to the post.
  • Reference. Not good: Picture is not referred to in the post. Very good: Each picture is referenced in the text and has a clear purpose in the narrative.
  • Quality. Not good: Picture clearly not tailored for Instagram, and no explanation for why it was used anyway. Very good: The focus is on the relevant aspect, or it is explained why the focus is elsewhere.
  • Rights. Not good: Picture not credited to the rights holder. Very good: The author holds the rights and/or gives appropriate credit.

Evaluation rubric for other aspects of Instagram posts

Of course, you might also want students to break some of “the rules” I gave above if your focus is on other aspects. For example, if you are very interested in how well students work with literature: even though that is not something that is traditionally done well on Instagram, it is a very valid learning outcome that you might not want to give up, even if it breaks the traditional Instagram style. Then you could include criteria like these:

To practice citations (from “not good” to “very good”)

  • Citation number. Not good: No citations. Very good: Appropriate number of citations.
  • Citation quality. Not good: Cited literature not relevant for the topic discussed in the post, or list very incomplete. Very good: All literature relevant to the topic is cited.
  • Citation correctness. Not good: Incorrect use of citation style, or inappropriate citation style. Very good: Appropriate citation style, correctly used.

Or if you are using Instagram posts in place of more traditional lab reports, of course additional learning outcomes are to be evaluated. Categories might then include, for example, the ones below. But use any criteria that you would use to evaluate a lab report!

As a lab report (from “not good” to “very good”)

  • Question. Not good: It doesn’t become clear why experiments were done. Very good: It is clearly stated what research question is being investigated.
  • Context. Not good: It doesn’t become clear if anyone else has ever done work related to the experiment presented here. Very good: The experiment is placed in the context of existing research and theories.
  • Hypothesis. Not good: No hypothesis is stated. Very good: A hypothesis is clearly stated, and it is also justified on what basis it was formulated.
  • Plan. Not good: It is not clear which steps are being done, in which order, and why. Very good: A clear plan of steps is presented together with a rationale for the steps and their order.
  • Method. Not good: It is not clear what methods are being used, and why. Very good: It is clearly stated which methods are being used and for what reason they were chosen.
  • Observations. Not good: There are none. Very good: Observations are clearly described.
  • Interpretation. Not good: It is not clear how conclusions are formed from the observations, or there are no conclusions. Very good: There is a clear separation between observations and the conclusions that are being drawn on the basis of those observations.

Now let me know what you think. Was this blogpost useful for you? What other aspect of using social media in science teaching would you be interested in?