Summaries of two more inspiring articles recommended by my colleagues: On educational assessment (Hager & Butler, 1996) and on variables associated with achievement in higher ed (Schneider & Preckel, 2017)

Following the call to share inspiring articles, here are two more that I’m summarising below. See the three previous ones (on assessment (Wiliam, 2011), workload (D’Eon & Yasinian, 2021), and quality (Harvey & Stensaker, 2008)) here. And please keep sending me articles that inspire you, I really enjoy reading and summarising them! :)

First up, recommended by Jenny:

“Two models of educational assessment” by Hager & Butler (1996)

Hager & Butler (1996) look at educational assessment and how it has changed over time: from a scientific measurement model towards a judgmental model, and they recommend using the latter.

The scientific measurement model aimed at summing up performance in one number (or letter) grade, based on criteria and using statistics, in order to be “maximally objective”, valid and reliable. Examples of this are IQ scores or multiple-choice exams. The tendency with such measures is to make the results about the people rather than about their performance on that one specific task of taking the test. But it has been shown over and over that those scores are not a good predictor of future performance in, for example, a job. Not surprisingly, because what is tested is not what is actually required on the job: the tests cover a very limited context, usually knowledge rather than skills, often use unsuitable methods, and pay no regard to who the test-takers actually are or what their attitudes are like.

The newer judgmental model, on the other hand, is about assessing the competencies that are required in the job, relying on competency models that describe what those competencies are and what it would look like if someone had them. This is how, for example, problem-based learning or portfolios are typically evaluated. In this model, rather than using only one fixed dataset to come to a conclusion about performance, it is possible to gather more data when a case is unclear, and to enter into a dialogue with the person being assessed. This dialogue makes it possible to integrate learning and assessment more closely.

Hager & Butler (1996) suggest a model of assessing professional development with three levels:

1. Knowledge, attitudes, and skills

This level can be assessed following the scientific measurement model, for example with multiple-choice tests of knowledge and cognitive skill, tests of subject-specific problem-solving skills, and observation of skills in practice settings. So far, so good, but when professionals, for example medical doctors, are asked to judge colleagues, this is not what they focus on. So knowledge, attitudes and skills are necessary, but not enough.

2. Performance in simulations

Performance is context-dependent, so on this level, artificial simulations of real-world contexts are created so that performance can be evaluated on a macro level that requires bringing together knowledge and skills from several domains. Usually, this is done using checklists. But again, passing only this level is not enough.

3. Personal competence in the practice domain

On this level, people are observed “on the job”. In contrast to the previous two levels, evaluation now happens without formalized checklists and criteria. This makes it very much dependent on who the judge is, but judges can and should learn from each other to get rid of personal biases: “Objectivity is the intelligent learned use of subjectivity, not a denial of it. In the judgmental model of assessment it is the assessor who delivers objectivity, not the data.”

Comparing the underlying assumptions of the intelligence approach/scientific measurement model with those of the cognitive approach/judgmental model, Hager & Butler (1996) write: “Whereas the intelligence approach encourages selection of people to fit prespecified jobs, the cognitive approach enables us to view the workplace as a set of opportunities for people to learn and grow.” And isn’t the learning and growing why we are in the job as educators in the first place?

When trying to find this article online, I came across a response by Martin (1997), who supports the original article and warns against falling for the McNamara fallacy: “making the measurable important rather than the important measurable”. I had never heard it put in those terms, but I will keep it in mind as a very nice way of making a very important point!

And now on to article no 2, recommended by Sandra:

“Variables associated with achievement in higher education: A systematic review of meta-analyses” by Schneider & Preckel (2017)

Schneider & Preckel (2017) is a review of meta-analyses and a great starting point when you want to know “what works” in higher education. I wrote a longer summary here, and here is my summary of that summary, mostly based on their “10 cornerstone findings”:

There is A LOT of evidence of what works and what doesn’t in higher education. What comes out of it is that it doesn’t matter so much what you choose to do, but it does matter that, whatever you do, you do it well. Ideally as a combination of teacher-centred and student-centred approaches, and with as much attention to assessment as to the rest of teaching.

Additionally, there are many small elements that, combined, have a large effect on student learning. In a nutshell: create a climate in which questions and discussions are encouraged and valued, give frequent and focused feedback, make the learning goals clear, and relate course content to students’ lives, goals, and dreams.

Also be aware that there are a lot of biases and obstacles depending on who the students are and what their prior trajectories have been, and that good study strategies can help any student succeed (and study strategies are best taught within the disciplinary context, not in separate courses).

It is totally worth reading the original article!

What other articles are inspiring you right now? Let me know and I’ll include them in the list!


Hager, P., & Butler, J. (1996). Two models of educational assessment. Assessment & Evaluation in Higher Education, 21(4), 367–378. https://doi.org/10.1080/0260293960210407

Martin, S. (1997). Two models of educational assessment: A response from initial teacher education: If the cap fits …. Assessment & Evaluation in Higher Education, 22(3), 337–343. https://doi.org/10.1080/0260293970220307

Schneider, M., & Preckel, F. (2017). Variables associated with achievement in higher education: A systematic review of meta-analyses. Psychological Bulletin, 143(6), 565–600.

2 thoughts on “Summaries of two more inspiring articles recommended by my colleagues: On educational assessment (Hager & Butler, 1996) and on variables associated with achievement in higher ed (Schneider & Preckel, 2017)”

  1. Peggy McNeal

    Hi Mirjam,
    I am sharing an article that is currently inspiring me. It is about how well we communicate through our syllabi who we are as educators. I heard about it on the Teaching in Higher Ed podcast if you want to hear an interview with one of the authors. Here is a link to information about the article:
    https://www.baylor.edu/atl/index.php?id=980364
    Enjoy!
    Peggy McNeal

    1. Mirjam

      Hi Peggy,
      Thanks for sharing, that is such a cool article! And Teaching in Higher Ed is one of my favourite podcasts, clearly I have some catching up to do!
      Best wishes,
      Mirjam

