Operationalising and assessing sustainability competencies (some ideas from Wiek et al. (2016) and Redman et al. (2021))

We put a lot of effort into teaching for sustainability, but whether we are actually successful remains unclear until we find ways to operationalise learning outcomes and, of course, ways to assess them. Below, I summarise two articles to get a quick idea of how one might do this.

Wiek et al. (2016) use key competences that are commonly used in the literature (see the later framework by Redman & Wiek, 2021) and suggest learning objectives at different levels of education for each.

Taking, for example, “values thinking” (also sometimes called “normative competence”): In general it means, among other things, being able to describe how values matter when talking about sustainability and how they influence problem solving, as well as being able to articulate one’s own values, compare and negotiate them with others, and put them into the context of bigger frames like equity or responsibility. Concepts that students should be able to explain and apply include, e.g., liveability, harm, and “win-win” synergies; methods they can use include, e.g., risk analysis and visioning. Imagining this at different levels: at the novice level there is a lot of understanding and describing, at the intermediate level there is defining of steps, conducting assessments, and constructing values, and at the advanced level there are operationalisations, advanced assessments, and visioning (and there are a lot of examples given in the article!).

I think this article is really helpful in curriculum development and course planning. Not necessarily to take as gospel, but to get a better and more detailed understanding of what is meant with the different competencies, and to get inspiration for what they might mean in a specific disciplinary context, and how they can be broken down into pieces that can be taught and assessed.

As to assessment: Redman et al. (2021) present a review of tools commonly used in the assessment of sustainability competencies. They find three main categories:

Self-perceiving

  • Scaled self-assessment: On a 4-point Likert scale, students rate their agreement with statements relating to a sustainability competence, for example “I feel confident and competent to articulate a vision of a just and sustainable society” for normative competence. This can be done pre- and post-instruction. In the literature, scaled self-assessments are judged as easy to use, easy to integrate with other survey questions, and effective for formative assessment and also for students to practice self-awareness. But there are also problems: we don’t know exactly how students interpret prompts or scales, a pre-test might not yield useful data because students cannot possibly know what they don’t know yet, and there is poor alignment with other tools measuring the same thing. So it is important to be cautious when using scaled self-assessment, to build on other people’s experience in constructing scales, and not to over-interpret the responses.
  • Reflective writing: Typically post-instruction, students write a personal reflection; this can also be done as a journal throughout the course (which I would prefer, since (a) students then reflect more or less continuously and not just after the fact, and (b) they practice writing that type of text before it is used in assessment). In the literature, these are described as easy to do, great as a learning activity and not “just” assessment, and also useful for further development of the course. However, reflections are sometimes difficult to interpret and assess because they are subjective, they take a lot of time to read and interpret, students are, again, not reliable evaluators of their own competences, and grading a reflection might influence what students write based on what they think the teacher wants to hear. But maybe this is where we can learn from academic writing professionals? Emailing Lene… ;-)
  • Focus group/interview: Students (either in an individual interview, or in groups of 6-8) describe the development of their sustainability competencies over time (possibly on a provided timeline corresponding to the course), or (and I really like this idea!) in relation to pictures from their learning process. This can happen formatively during, or summatively after, a course. It helps students link their learning to the activities in which it happened, and might provoke further reflection. In contrast to reflective writing, the interviewer can react, and explore and follow interesting thoughts that they did not anticipate. However, this is time-consuming to do and to analyse, and there might also be elements of peer pressure influencing the responses.
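
Since scaled self-assessments are often run pre- and post-instruction, here is a minimal sketch of how one might summarise the shift in ratings per competency. The competency names and ratings are invented for illustration, and the mean shift is a crude formative indicator only, not a validated analysis.

```python
# Hypothetical pre/post analysis of a scaled self-assessment (1-4 Likert).
# Competency names and ratings are invented for illustration.

from statistics import mean

pre = {
    "systems thinking": [2, 3, 2, 1],
    "values thinking":  [1, 2, 2, 2],
}
post = {
    "systems thinking": [3, 4, 3, 3],
    "values thinking":  [2, 3, 3, 2],
}

def mean_shift(pre, post):
    """Average post-instruction rating minus average pre-instruction
    rating, per competency (a rough formative indicator only)."""
    return {c: round(mean(post[c]) - mean(pre[c]), 2) for c in pre}

print(mean_shift(pre, post))
# → {'systems thinking': 1.25, 'values thinking': 0.75}
```

Given the caveats above (students don’t know what they don’t know yet), such a shift says more about perceived growth than about actual competence.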

Observation

  • Performance observation: Based on simulations or on real work experiences in the community, observers or community “clients” provide feedback on scales regarding, e.g., how prepared a student was for the task. This is the most direct measure of whether students can perform a competence, and it can include experts from, e.g., industry (or other stakeholders) as assessors in real situations. But it requires external stakeholders to have a good grasp of the learning outcomes (though a rubric can help here), and it is difficult to scale to large student groups (even more so when, as recommended, several observers per student are involved).
  • Regular coursework: Using student theses etc. to assess competencies. In an ideal world, assignments are already designed in a way that competencies can be assessed, but that is not always the case.
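
Since several observers per student are recommended for performance observation, here is a small sketch of how their rubric ratings could be combined. The criteria and scores are hypothetical; taking the median per criterion keeps a single outlier rating from dominating the result.

```python
# Sketch: combining rubric scores from several observers of one student.
# Criteria names and ratings (1-5) are made up for illustration.

from statistics import median

observations = [
    {"preparedness": 4, "stakeholder communication": 3},
    {"preparedness": 5, "stakeholder communication": 3},
    {"preparedness": 4, "stakeholder communication": 4},
]

def aggregate(observations):
    """Median rating per criterion across all observers."""
    criteria = observations[0]
    return {c: median(obs[c] for obs in observations) for c in criteria}

print(aggregate(observations))
# → {'preparedness': 4, 'stakeholder communication': 3}
```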

Test-based

  • Concept mapping: Before and after instruction, students create a concept map for a sustainability issue, which is graded by the number of nodes, connections, and hierarchy levels, as well as content knowledge (I really like concept maps, but they are also extremely difficult to analyse in a useful way). Concept mapping works really well for assessing systems thinking, but probably not equally well for the other sustainability competences.
  • Scenario/case test: Students read a test case and respond to questions (open-ended or multiple choice). Scenarios can be constructed to closely represent a real situation while also setting up opportunities to develop or assess specific competencies, but this takes a little work. Also it is obviously only a glimpse of the much more complex reality.
  • Conventional test: Students respond to scaled and open-ended questions in the way we’ve always done. This is easily scalable to large student groups, and everybody is comfortable with the tool since it is well known. But it is difficult to test competencies that way.
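
To illustrate the structural part of concept-map grading mentioned above (nodes, connections, hierarchy levels), here is a toy sketch. The map content is invented, and the content-knowledge part of the grading would of course still need human judgement.

```python
# Sketch: structural scoring of a concept map (toy example).
# The map is a dict from each concept to the concepts it links to.

from collections import deque

concept_map = {
    "sustainability": ["equity", "ecosystems"],
    "equity": ["responsibility"],
    "ecosystems": ["biodiversity", "climate"],
    "responsibility": [],
    "biodiversity": [],
    "climate": [],
}

def structural_score(cmap, root):
    """Count nodes, connections, and hierarchy levels
    (breadth-first depth starting from the root concept)."""
    nodes = len(cmap)
    connections = sum(len(links) for links in cmap.values())
    depth = {root: 1}
    queue = deque([root])
    while queue:
        node = queue.popleft()
        for child in cmap[node]:
            if child not in depth:
                depth[child] = depth[node] + 1
                queue.append(child)
    return {"nodes": nodes, "connections": connections,
            "levels": max(depth.values())}

print(structural_score(concept_map, "sustainability"))
# → {'nodes': 6, 'connections': 5, 'levels': 3}
```

As noted above, counts like these capture structure, not quality, which is part of why concept maps are so difficult to analyse in a useful way.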

I find it quite helpful to see this compilation of published experiences, including the pros and cons, and I especially like the reminder to use pictures from the learning process to spark memories of the situations and help with reflection! That is the reason why you quite often see photos of my desk as featured images when I summarise articles here: it helps me remember the situation I was in when I read the article and wrote the summary, and there are usually a lot of thoughts that did not make it into the post but that come up again when I see the situation. But today, even though it is nice and sunny outside and it would make a pretty picture of my desk, I instead chose a picture I took yesterday, when it was also very nice and sunny, and the life buoy somehow seems significant.


Redman, A., Wiek, A., & Barth, M. (2021). Current practice of assessing students’ sustainability competencies – a review of tools. Sustainability Science, 16, pp. 117-135.

Wiek, A., Bernstein, M., Foley, R., Cohen, M., Forrest, N., Kuzdas, C., Kay, B., & Withycombe Keeler, L. (2016). Operationalising competencies in higher education for sustainable development. In: Barth, M., Michelsen, G., Rieckmann, M., Thomas, I. (Eds.) (2016). Handbook of Higher Education for Sustainable Development. Routledge, London. pp. 241-260.
