“Mandatory coursework assignments can be, and should be, eliminated!” (Currently reading Haugan, Lysebo & Lauvas, 2017)

The claim in this article’s title, “Mandatory coursework assignments can be, and should be, eliminated!”, is quite a strong one, and maybe not fully supported by the data presented in it. But the article is nevertheless worth a read (and the current reading in iEarth’s journal club!), because the arguments supporting that claim are nicely presented.

“Confident Assessment in Higher Education”, by Rachel Forsyth (2023)

I am so lucky to work with so many inspiring colleagues here at Lund University, and today I read my awesome colleague Rachel Forsyth’s new book, “Confident Assessment in Higher Education” (Forsyth, 2023). It is a really comprehensive treatment of assessment and totally worth a read, whether as a first introduction or just as a refresher of all the different aspects that need to be considered, with suggestions for how to think about them (while reading, I sent several photos of tables to a colleague because they are directly relevant to a course she is planning that we talked about the other day). For me, the most interesting part was the suggested questions to ask yourself about assignment tasks, and ways to answer them:

Eight criteria for authentic assessment; my takeaways from Ashford-Rowe, Herrington & Brown (2014)

“Authentic assessment” is a bit of a buzzword these days. Posing assessment tasks that resemble problems students will encounter in the real world later on sounds like a great idea. It would make learning, even learning “for the test”, so much more relevant and motivating, and it would prepare students so much better for their lives after university. So far, so good. But what does authentic assessment actually mean when we try to design it, and is it really always desirable in its fullest sense?

Ashford-Rowe, Herrington & Brown (2014) reviewed the literature and, through a process of discussion and field testing, came up with eight critical elements of authentic assessment, which I am listing as the headers below, together with my thoughts on what it would mean to implement them.

1. To what extent does the assessment activity challenge the student?

For assessment to be authentic, it needs to mirror real-world problems that are not solved by simply reproducing things students have learned by heart, but by creating new(-to-the-student) solutions, which includes analysing the task in order to choose the relevant skills and knowledge with which to approach it.

This clearly sounds great, but as an academic developer, my mind goes directly to how this is aligned with both learning outcomes and learning activities. My fear would be that it is easy to make the assessment way too challenging if the focus is on authentic assessment alone and students have not practised on very similar tasks beforehand.

2. Is a performance, or product, required as a final assessment outcome?

Real-world problems require actual solutions, often in the form of a product that addresses a certain need. In an assessment context, we need to balance the ideal of a functional product that solves a problem with wanting to check whether specific skills have been acquired. For example, what would happen if students found a solution that perfectly solved the problem they were tasked with, but did it in some other way, without demonstrating the skills we had planned on assessing? Would that be ok, or do we need to set boundary conditions that make it necessary, or explicit, that students are to use a specific skill in their solutions?

This facet of authentic assessment strongly implies that assessment cannot happen within a matter of hours in a written closed-book exam, but requires more time and likely different formats (even though many of the authentic products in my own job actually are written pieces).

3. Does the assessment activity require that transfer of learning has occurred, by means of demonstration of skill?

Transfer is a highest-level learning outcome in both Bloom’s taxonomy and the SOLO taxonomy, and definitely something that students should learn, and we should assess, at some point. How far the transfer should reach, i.e. whether skills and knowledge really need to be applied in a completely different domain or just to a different example, and where the boundary between those two lies, is open for debate though. And we can transfer skills that we learned in class to different contexts, or we can bring skills that we learned elsewhere into the context we focussed on in class. But again we need to keep in mind that we should only be assessing things that students actually had the chance to learn in our courses (or, arguably, in required courses before).

4. Does the assessment activity require that metacognition is demonstrated?

Being able to self-assess and self-direct one’s learning, and to put it into the bigger context of the real world and one’s own goals, is obviously an important skill to acquire. If assessment is to mirror the demands of real-world tasks after university, reflecting on one’s own performance might be a useful thing to include. But this, again, is something that clearly needs to be practised, with formative feedback, before it can be used in assessment.

5. Does the assessment require a product or performance that could be recognised as authentic by a client or stakeholder? (accuracy)

This point I find really interesting. How close is the assessment task to a real-world problem that people would actually encounter outside of the classroom setting? Before reading this article, that would have been my main criterion for what “authentic assessment” means.

6. Is fidelity required in the assessment environment? And the assessment tools (actual or simulated)?

Continuing the thought above: if we set a task that students might encounter in the real world, do we also provide them with the conditions they would encounter it in? For an authentic assessment situation, we should be putting students in an authentic(-ish) environment, where they have access to the same tools, the same impressions of their surroundings, and the same sources of information as one would have if confronted with the same problem in the real world. So we need to consider how, or even whether, we could justify, for example, not letting students use internet searches or conversations with colleagues while working on the task!

7. Does the assessment activity require discussion and feedback?

This follows nicely on my thoughts on the previous point. In the real world, we would discuss and receive feedback while working on solving a problem. If our assessment is to be authentic, this also needs to happen! But do we also assess this aspect (for example in the reflection that students write in order to demonstrate metacognition, or by assessing the quality of the discussion and the feedback they give each other), do we require it without actually assessing it, or do we just create conditions in which it is possible and beneficial to discuss and to give and receive feedback, without checking whether it actually occurred? Also, who should the students discuss with and get feedback from: us, their peers, actual authentic stakeholders? The literature shows that peer feedback is of comparable quality to teacher feedback, and that students learn both from giving and receiving it, so including a peer-feedback loop might be a good idea. Real stakeholders would surely be motivating, but depending on the context that might be a little difficult to arrange.

8. Does the assessment activity require that students collaborate?

Collaboration is of critical importance in the real world. (How) do we include it in assessment?

I really enjoyed thinking through these eight critical elements of authentic assessment, and it definitely broadened my understanding of authentic assessment considerably, both in terms of its potential and of the difficulties of implementing it. What are your thoughts?


Ashford-Rowe, K., Herrington, J., & Brown, C. (2014). Establishing the critical elements that determine authentic assessment. Assessment & Evaluation in Higher Education, 39(2), 205-222.

Using peer feedback to improve students’ writing (Currently reading Huisman et al., 2019)

I wrote about involving students in creating assessment criteria and quality definitions for their own learning on Thursday, and today I want to think a bit about also involving students in the feedback process, based on an article by Huisman et al. (2019) on “The impact of formative peer feedback on higher education students’ academic writing: a meta-analysis”. That article brings together the available literature on peer feedback specifically on academic writing, and it turns out that across all studies, peer feedback does improve student writing. So here is what that might mean for our own teaching:

Peer feedback is as good as teacher feedback

Great news (actually, not so new, there are many studies showing this!): students can give each other feedback that is of comparable quality to what teachers give them!

Even though a teacher is likely to have more expert knowledge, which might make their feedback more credible to some students (those with a strong trust in authorities) and more relevant to others, there is no systematic difference between the improvement after peer feedback and after feedback from teaching staff. One way to alleviate fears about the quality of peer feedback is to use it purely (or mostly) formatively, while the teacher still does the assessment themselves.

Peer feedback is good for both giver and receiver

If we as teachers “use” students to provide feedback to other students, it might seem like we are pushing part of our job onto the students. But: peer feedback improves writing both for the students giving it and for the ones receiving it! Giving feedback means actively engaging with the quality criteria, which might improve one’s own future writing, and giving peer feedback actually improves future writing more than just doing self-assessment. This might be, for example, because students, both as feedback givers and receivers, are exposed to different perspectives on and approaches towards the content. So there is an actual benefit to student learning in giving peer feedback!

It doesn’t hurt to get feedback from more than one peer

Thinking about the logistics in a classroom, one question is whether students should receive feedback from one or from multiple peers. It turns out that the literature shows no (significant) difference. But gut feeling says that getting feedback from multiple peers creates redundancy in case the quality of one piece of feedback is really low, or the feedback isn’t actually given. And since students also benefit from giving peer feedback, I see no harm in having students give feedback to multiple peers.

A combination of grading and free-text feedback is best

So what kind of feedback should students give? For students receiving peer feedback, a combination of grading/ranking and free-text comments has the largest effect, probably because it shows how current performance relates to ideal performance and also gives concrete advice on how to close the gap. For students giving feedback, I would speculate that the combination of both is also the most useful, because then they need to commit to a quality judgement, give reasons for it, and also think about what would actually improve the piece they read.

So based on the Huisman et al. (2019) study, let’s have students do a lot of formative assessment on each other*, both rating and commenting on each other’s work! And to make it easier for the students, remember to give them good rubrics (or let them create those rubrics themselves)!

Are you using student peer feedback already? What are your experiences?

*The Huisman et al. (2019) study was actually only about peer feedback on academic writing, but I’ve seen studies using peer feedback on other types of tasks with similar results, and I also don’t see why there would be different mechanisms at play when students give each other feedback on things other than their academic writing…


Huisman, B., Saab, N., van den Broek, P., & van Driel, J. (2019). The impact of formative peer feedback on higher education students’ academic writing: a meta-analysis. Assessment & Evaluation in Higher Education, 44(6), 863-880. DOI: 10.1080/02602938.2018.1545896

Co-creating rubrics? Currently reading Fraile et al. (2017)

I’ve been a fan of using rubrics — tables that contain assessment criteria and a scale of quality definitions for each — not just in a summative way to determine grades, but in a formative way to engage students in thinking about learning outcomes and how they would know when they’ve reached them. Kjersti has even negotiated rubrics with her class, which she describes and discusses here. And now I read an article on “Co-creating rubrics: The effects on self-regulated learning, self-efficacy and performance of establishing assessment criteria with students” by Fraile et al. (2017), which I will summarise below.

Fraile et al. (2017) make the argument that, while rubrics are great for (inter-)rater reliability and many other reasons, students easily perceive them as external constraints that dampen their motivation and might lead to shallow approaches to learning, rather than as a help for self-regulated deep learning. But if students were involved in creating the rubric, they might feel empowered and more autonomous, because they are now setting their own goals and monitoring their performance against them, and thus use the rubric in ways that actually support their learning.

This argument is then tested in a study on sports students, where a treatment group co-created rubrics, whereas a control group later used those same rubrics. Co-creating the rubric meant that, after an introduction to the content by the teacher, students listed criteria for the activity and then discussed them in small groups. The criteria were then collected, clustered and reduced to about eight, and students, in changing groups, produced two extreme quality definitions for each. Finally, the teacher compiled everything into a rubric and got final approval from the class.

So what happened? All the arguments above sounded convincing; however, the results of the study are not as clear-cut as one might have hoped. Maybe the intervention wasn’t long enough, or the group of students was too small to make results significant? But what does come out is that in think-aloud protocols, the students who co-created the rubrics reported more self-regulated learning. They also performed better on some of the assessed tasks. And they reported more positive perceptions of rubrics, especially regarding the transparency and understanding of criteria.

What do we learn from this study? At least that all indications are that co-creating rubrics might be beneficial to student learning, and that no drawbacks came to light. So it seems to be a good practice to adopt, especially when we are hoping for benefits beyond what was measured here, for example in terms of students feeling ownership of their own learning etc.


Fraile, J., Panadero, E., & Pardo, R. (2017). Co-creating rubrics: The effects on self-regulated learning, self-efficacy and performance of establishing assessment criteria with students. Studies in Educational Evaluation, 53, 69-76.

Letting students choose the format of their assessment

I just love giving students choice: it instantly makes them more motivated and engaged! Especially when it comes to big and important tasks like assessments. One thing that I have great experience with is letting students choose the format of a cruise or lab report. After all, if writing a classical lab report isn’t a learning outcome in itself, why restrict their creativity and have them create in a format that is, let’s be honest, super boring for the instructor to read?

I have previously given students the choice between a blog post, an Instagram post, and tweets, but would next time open it up to include other formats like TikTok or podcasts, or even any social media format they like. What I did was give them the choice of format, and then also the choice of actually publishing it (either on a platform that I provided, or on one they organized themselves), or “just” submitting something that could have been posted on one of those platforms but was only visible to me and the class.

So how do we then make sure that the different formats all have the same level of “difficulty”, so that it’s a fair assignment? This is where rubrics come in. Your rubric might assess several categories, first and foremost the one directly related to your learning outcome. In the case of a lab report, that means things like: is the experimental setup described correctly, does it become clear why an experiment is being performed and how it is done, are observations clearly described and results discussed, etc. All of these things can be done equally well in a Twitter thread and in a blog post.

If you are so inclined and it is part of your learning outcomes, you might also evaluate if the social media channel is used well (An example evaluation rubric for Instagram posts here).

And lastly, you could require a reflection document in which students discuss whether they addressed the different points from the rubric, and where they have the chance to justify, for example, why they did not include certain aspects in the social media post but provide additional information in that document instead (for example, if you would like to see the data in a table, that might not be easy to include in a podcast). Requiring this document has at least two positive effects: making sure the students actually engage with the rubric, and levelling the playing field by giving everybody the opportunity to elaborate on things that weren’t so easily implemented in their chosen format.

If you want to make sure that students really feel it’s all fair, you could even negotiate the rubric with them, so they can up- or downvote whichever aspects they feel should count for more or less.

What do you think, would you give your students such choices? Or do you even have experience with it? We’d love to hear from you!

Assessing participation

One example of how to give grades for participation.

One of the most difficult tasks as a teacher is to actually assess how much people have learned, and then to give them a grade: a single number or letter (depending on where you are) that supposedly tells you all about how much they have learnt.

Ultimately, what assessment makes sense depends on your learning goals. But still it is sometimes useful to have a couple of methods at hand for when you might need them.

Today I want to talk about a pet peeve of mine: Assessing participation. I don’t think this is necessarily a useful measure at all, but I’ve taught courses where it was a required part of the final grade.

I’ve been through all the classical ways of assessing participation. Giving a grade for participation from memory (even if you take notes right after class) opens you up to all kinds of problems. Your memory might not be as good as you thought it was. Some people say more memorable things than others, or say them in a more memorable way. Some people are just louder and more forward than others. No matter how objective you are (or attempt to be), you always end up with complaints, and there is just no way to convince people (including yourself) that the grades you end up giving are fair.

An alternative approach.

So what could you do instead? One method I have read about somewhere (but cannot find the original paper any more! Similar ideas are described in Maryellen Weimer’s article “Is it time to rethink how we grade participation”) is to set a number of “good” comments or questions that students should ask per day or week. Say, if a student asks 3 good questions or makes 3 good comments, this translates to a very good grade (or a maximum number of bonus points, depending on your system); 2 comments or questions still give a good grade (or some bonus points), and 1 or fewer are worth less. But here is the deal: students keep track of what they say and write it down after they’ve said it. At the end of the lesson, the day, the week, or whatever period you choose, they hand you a list of their three very best questions or comments. So people who said more than three things are required to limit themselves to what they think were their three best remarks.

The very clear advantages are that

  • you are now looking for quality over quantity (depending on the class size, you will need to adjust the number of comments / questions you ideally want per person). This means people who always talk but don’t really say anything might not stop, but at least they aren’t encouraged to talk even more since they will have to find a certain number of substantial contributions to write down in the end rather than make sure they have the most air time.
  • you don’t have to rely on your memory alone. Sure, when you read the comments and questions you will still need to recall whether that was actually said during class or made up afterwards, but at least you have a written document to jog your memory.
  • you have written documentation of what they contributed, so if someone wants to argue about the quality of their remarks, you can do that based on what they wrote down rather than what they think they might have meant when they said something that they recall differently from you.
  • you can choose (and then, of course, announce!) to let people also include other contributions on their lists, like very good questions they asked you in private or emailed you about, or extra projects they did on the side.
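
If you like keeping the bookkeeping for such a scheme out of your head entirely, a tiny script can do the tallying for you. Below is a minimal sketch in Python; the cap of three contributions and the one-point-per-contribution scale are hypothetical placeholders taken from the example numbers above, not a prescription, so adjust them to your own course and grading system.

```python
# Minimal sketch of tallying the participation scheme described above.
# The cap of 3 contributions and the 1-point-per-contribution scale are
# hypothetical choices for illustration; adapt them to your own system.

def participation_points(contributions_handed_in: int, cap: int = 3) -> int:
    """Turn the number of substantial contributions a student handed in
    (their best questions/comments for the period) into bonus points."""
    return min(max(contributions_handed_in, 0), cap)

if __name__ == "__main__":
    # Example weekly lists (hypothetical students and counts).
    weekly_lists = {"Student A": 3, "Student B": 2, "Student C": 1}
    for name, count in weekly_lists.items():
        print(f"{name}: {participation_points(count)} bonus point(s) this week")
```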

I guess in the end we need to remember that the main motive for grading participation is to enhance student engagement with the course content. And the more different ways we give them to engage – and receive credit for it – the more they are actually going to do it. Plus maybe they are already doing it and we just never knew?