Tag Archives: assessment

Operationalising and assessing sustainability competencies (some ideas from Wiek et al. (2016) and Redman et al. (2020))

We put a lot of effort into teaching for sustainability, but whether or not we are actually successful in doing so remains unclear until we figure out a way to operationalise learning outcomes and, obviously, ways to assess them. Below, I am summarising two articles to get a quick idea of how one might do this.

Prompt engineering and other stuff I never thought I would have to teach about

Last week, I taught a very intensive “Introduction to Teaching and Learning” course where we, like all other teachers everywhere, had to address that GAI has made many traditional assessment formats hard to justify. We had to come up both with guidelines for the participants on how to deal with GAI in the assessment of our own course, and with some kind of guidance for them as teachers.

Currently reading: “The impact of grades on student motivation” (Chamberlin et al., 2023)

An argument that I encounter a lot is that student assignments need to be graded in order for students to put in any effort at all. But is that true? In the literature, grades have been connected to stress and anxiety for students, more cheating, less cooperation, less thinking, less trust — so ultimately less learning. So what does grading student work do for student motivation? My summary of Chamberlin et al. (2023) below.

Currently reading: “Teaching with rubrics: the good, the bad, and the ugly” (Andrade, 2005)

Doing my reading for the monthly iEarth journal club… Thanks for suggesting yet another interesting article, Kirsty! This one is “Teaching with rubrics: the good, the bad, and the ugly” (Andrade, 2005) — a great introduction on how to work with rubrics (and only 2.5 pages of entertaining, easy-to-read text, plus an example rubric). My summary of the article:

Currently reading: “Beyond open book versus closed book: a taxonomy of restrictions in online examinations.” by Dawson, Nicola-Richmond, & Partridge (2023)

If we want to do a valid assessment of what a specific student can do, we need to know what information they had available when producing the artifact we are evaluating, who they could communicate with, and what tools they had access to. And we might want to restrict access to some or all of those, to some degree or completely. Dawson et al. (2023) develop a taxonomy of restrictions that I find really useful as an overview!

“Mandatory coursework assignments can be, and should be, eliminated!” (currently reading Haugan, Lysebo & Lauvas, 2017)

The claim in this article’s title, “Mandatory coursework assignments can be, and should be, eliminated!”, is quite a strong one, and maybe not fully supported by the data presented here. But the article is nevertheless worth a read (and the current reading in iEarth’s journal club!), because the arguments supporting that claim are nicely presented.

“Confident Assessment in Higher Education”, by Rachel Forsyth (2023)

I am so lucky to work with so many inspiring colleagues here at Lund University, and today I read my awesome colleague Rachel Forsyth’s new book on “confident assessment in higher education” (Forsyth, 2023). It is a really comprehensive introduction to assessment and totally worth a read, whether as a first introduction or just as a refresher on all the different aspects that need to be considered, with suggestions for how to think about them (while reading, I sent several photos of tables to a colleague because they are directly relevant to a course she is planning that we talked about the other day). For me, the most interesting part was the suggested questions to ask yourself about assignment tasks, and ways to answer them:

Eight criteria for authentic assessment; my takeaways from Ashford-Rowe, Herrington & Brown (2014)

“Authentic assessment” is a bit of a buzzword these days. Posing assessment tasks that resemble problems students would encounter in the real world later on sounds like a great idea. It would make learning, even learning “for the test”, so much more relevant and motivating, and it would prepare students so much better for their lives after university. So far, so good. But what does authentic assessment actually mean when we try to design it, and is it really always desirable in its fullest sense?

Ashford-Rowe, Herrington & Brown (2014) reviewed the literature and, through a process of discussion and field testing, came up with eight critical elements of authentic assessment, which I am listing as the headers below, together with my thoughts on what it would mean to implement them.

1. To what extent does the assessment activity challenge the student?

For assessment to be authentic, it needs to mirror real-world problems that are not solved by just reproducing things learned by heart, but by creating new(-to-the-student) solutions, which includes analysing the task in order to choose the relevant skills and knowledge to approach it with.

This clearly sounds great, but as an academic developer, my mind goes directly to how this is aligned with both learning outcomes and learning activities. My fear would be that it is easy to make the assessment way too challenging if the focus is on authentic assessment alone and students have not already practised on very similar tasks beforehand.

2. Is a performance, or product, required as a final assessment outcome?

Real-world problems require actual solutions, often in the form of a product that addresses a certain need. In an assessment context, we need to balance the ideal of a functional product that solves a problem with wanting to check whether specific skills have been acquired. That is, what would happen if students found a solution that perfectly solved the problem they were tasked with, but did so in some other way, without demonstrating the skills we had planned to assess? Would that be ok, or do we need to provide boundary conditions that make it necessary, or explicit, that students are to use a specific skill in their solutions?

This facet of authentic assessment strongly implies that assessment cannot happen in just a matter of hours in a written closed-book exam, but requires more time and likely different formats (even though many of the authentic products in my own job actually are written pieces).

3. Does the assessment activity require that transfer of learning has occurred, by means of demonstration of skill?

Transfer is a highest-level learning outcome in both Bloom’s taxonomy and the SOLO taxonomy, and definitely something that students should learn, and that we should assess, at some point. How far the transfer should go, i.e. whether skills and knowledge really need to be applied in a completely different domain or just to a different example, and where the boundary between those two lies, is open for debate, though. And we can transfer skills that we learned in class to different contexts, or we can bring skills that we learned elsewhere into the context we focussed on in class. But again, we need to keep in mind that we should only be assessing things that students actually had the chance to learn in our courses (or, arguably, in required prior courses).

4. Does the assessment activity require that metacognition is demonstrated?

It is obviously an important skill to be able to self-assess and self-direct one’s learning, and to put it into the bigger context of the real world and one’s own goals. If assessment is to mirror the demands of real-world tasks after university, reflecting on one’s own performance might be a useful thing to include. But this, again, is something that clearly needs to be practised, with formative feedback, before it can be used in assessment.

5. Does the assessment require a product or performance that could be recognised as authentic by a client or stakeholder? (accuracy)

This point I find really interesting. How close is the assessment task to a real-world problem that people would actually encounter outside of the classroom setting? Before reading this article, that would have been my main criterion for what “authentic assessment” means.

6. Is fidelity required in the assessment environment? And in the assessment tools (actual or simulated)?

Continuing the thought above: if we give students a task that they might encounter in the real world, do we also provide them with the conditions they would encounter it in? For an authentic assessment situation, we should be putting students in an authentic(-ish) environment, where they have access to the same tools, the same impressions of their surroundings, and the same sources of information as one would have if one were confronted with the same problem in the real world. So we need to consider how, or whether, we could even justify, for example, not letting students use internet searches or conversations with colleagues when working on the task!

7. Does the assessment activity require discussion and feedback?

This follows nicely on my thoughts on the previous point. In the real world, we would discuss and receive feedback while we are working on solving a problem. If our assessment is to be authentic, this also needs to happen! But do we also assess this aspect (for example in the reflection that students do in order to demonstrate metacognition, or by assessing the quality of the discussion and feedback they give each other), or do we require it without actually assessing it, or do we just create conditions in which it is possible and beneficial to discuss and to give and receive feedback, without checking whether it actually occurred? Also, who should the students discuss with and get feedback from: us, their peers, actual authentic stakeholders? The literature shows that peer feedback is of comparable quality to teacher feedback, and that students learn both from giving and receiving it, so including a peer-feedback loop is probably a good idea. Real stakeholders would surely be motivating, but depending on the context that might be a little difficult to arrange.

8. Does the assessment activity require that students collaborate?

Collaboration is of critical importance in the real world. (How) do we include it in assessment?

I really enjoyed thinking through these eight critical elements of authentic assessment, and it definitely broadened my understanding of authentic assessment considerably, both in terms of its potential and of the difficulties of implementing it. What are your thoughts?


Ashford-Rowe, K., Herrington, J., & Brown, C. (2014). Establishing the critical elements that determine authentic assessment. Assessment & Evaluation in Higher Education, 39(2), 205-222.

Using peer feedback to improve students’ writing (Currently reading Huisman et al., 2019)

I wrote about involving students in creating assessment criteria and quality definitions for their own learning on Thursday, and today I want to think a bit about also involving students in the feedback process, based on an article by Huisman et al. (2019) on “The impact of formative peer feedback on higher education students’ academic writing: a meta-analysis”. The article brings together the available literature on peer feedback specifically on academic writing, and it turns out that across all studies, peer feedback does improve student writing. So this is what it might mean for our own teaching:

Peer feedback is as good as teacher feedback

Great news (actually, not so new, there are many studies showing this!): students can give each other feedback that is of comparable quality to what teachers give them!

Even though a teacher is likely to have more expert knowledge, which might make their feedback more credible to some students (those with a strong trust in authorities), peer feedback might feel more relevant to other students, and there is no systematic difference between the improvement after peer feedback and after feedback from teaching staff. One way to alleviate fears about the quality of peer feedback is to use it purely (or mostly) formatively, while the teacher still does the summative assessment themselves.

Peer feedback is good for both giver and receiver

If we as teachers “use” students to provide feedback to other students, it might seem like we are pushing part of our job onto the students. But: peer feedback improves writing both for the students giving it and for the ones receiving it! Giving feedback means actively engaging with the quality criteria, which might improve the giver’s own future writing, and giving peer feedback actually improves future writing more than just doing self-assessment. This might be, for example, because students, both as feedback givers and receivers, are exposed to different perspectives on and approaches to the content. So there is an actual benefit to student learning in giving peer feedback!

It doesn’t hurt to get feedback from more than one peer

Thinking about the logistics in a classroom, one question is whether students should receive feedback from one or from multiple peers. It turns out that in the literature there is no clear (significant) difference. But gut feeling says that getting feedback from multiple peers creates redundancy in case the quality of one piece of feedback is really low, or that feedback isn’t actually given. And since students also benefit from giving peer feedback, I see no harm in having students give feedback to multiple peers.

A combination of grading and free-text feedback is best

So what kind of feedback should students give? For students receiving peer feedback, a combination of grading/ranking and free-text comments has the maximum effect, probably because it shows how current performance relates to ideal performance and also gives concrete advice on how to close the gap. For students giving feedback, I would speculate that a combination of both is also the most useful, because then they need to commit to a quality assessment, give reasons for that assessment, and also think about what would actually improve the piece they read.

So based on the Huisman et al. (2019) study, let’s have students do a lot of formative assessment on each other*, both rating and commenting on each other’s work! And to make it easier for the students, remember to give them good rubrics (or let them create those rubrics themselves)!

Are you using student peer feedback already? What are your experiences?

*The Huisman et al. (2019) study was actually only about peer feedback on academic writing, but I’ve seen studies using peer feedback on other types of tasks with similar results, and I also don’t see why there would be different mechanisms at play when students give each other feedback on things other than their academic writing…


Huisman, B., Saab, N., van den Broek, P., & van Driel, J. (2019). The impact of formative peer feedback on higher education students’ academic writing: a meta-analysis. Assessment & Evaluation in Higher Education, 44(6), 863-880. DOI: 10.1080/02602938.2018.1545896

Co-creating rubrics? Currently reading Fraile et al. (2017)

I’ve been a fan of using rubrics — tables that contain assessment criteria and a scale of quality definitions for each — not just in a summative way to determine grades, but in a formative way to engage students in thinking about learning outcomes and how they would know when they’ve reached them. Kjersti has even negotiated rubrics with her class, which she describes and discusses here. And now I read an article on “Co-creating rubrics: The effects on self-regulated learning, self-efficacy and performance of establishing assessment criteria with students” by Fraile et al. (2017), which I will summarise below.

Fraile et al. (2017) make the argument that, while rubrics are great for (inter-)rater reliability and for many other reasons, students easily perceive them as external constraints that dampen their motivation and might lead to shallow approaches to learning, rather than as help for self-regulated deep learning. But if students were involved in creating the rubric, they might feel empowered and more autonomous, because they are now setting their own goals and monitoring their performance against those, and thus might use the rubric in ways that actually support their learning.

This argument is then tested in a study on sports students, where a treatment group co-creates rubrics, whereas a control group uses those same rubrics afterwards. Co-creation of the rubric meant that, after an introduction to the content by the teacher, students listed criteria for the activity and then discussed them in small groups. The criteria were then collected, clustered, and reduced to about eight, for each of which students, in changing groups, produced two extreme quality definitions. Finally, the teacher compiled everything into a rubric and got final approval from the class.

So what happened? All the arguments above sounded convincing; however, the results of the study are not as clear-cut as one might have hoped. Maybe the intervention wasn’t long enough, or the group of students was too small to make results significant? But what does come out is that in think-aloud protocols, the students who co-created the rubrics reported more self-regulated learning. They also performed better on some of the assessed tasks. And they reported more positive perceptions of rubrics, especially regarding the transparency and understanding of criteria.

What do we learn from this study? At least that all indications are that co-creating rubrics might be beneficial to student learning, and that no drawbacks came to light. So it seems to be a good practice to adopt, especially when we are hoping for benefits beyond what was measured here, for example in terms of students feeling ownership of their own learning.


Fraile, J., Panadero, E., & Pardo, R. (2017). Co-creating rubrics: The effects on self-regulated learning, self-efficacy and performance of establishing assessment criteria with students. Studies in Educational Evaluation, 53, 69-76.