Tag Archives: Bloom’s taxonomy

Asking questions that aim at specific levels of the modified Bloom’s taxonomy

I’m currently preparing a couple of workshops on higher education topics, and of course it is always important to talk about learning outcomes. I had a faint memory of having developed some materials (back when I was still working at ZLL, together with one of my all-time favourite colleagues, Timo Lüth) to help instructors work with the modified Bloom’s taxonomy (Anderson & Krathwohl, 2001), and when I looked them up, I realized I had never blogged about them. But since I was surprised by how helpful I still find the materials, here we go! :-)

The idea is that instructors are often told to ask specific types of questions (usually “concept” questions), but that it is really difficult to know what that means and how to do it.

So we developed a decision tree that gives an overview of the different kinds of questions. The decision tree can support you in

  • constructing questions that provoke specific cognitive processes in your students,
  • checking what exactly you are asking your students to do when posing existing questions, and
  • modifying existing questions to better match your purpose.

The nitty-gritty details and the theoretical foundation are written up in Glessmer & Lüth (2016), unfortunately only in German. But check out the decision trees below; I think they work pretty well on their own! We have four different versions of the decision tree, which guide you through both the cognitive and the knowledge dimension until you reach the sweet spot you were aiming for. Have fun!
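
If it helps to see the underlying logic spelled out, below is a minimal sketch in Python of the general idea (my own toy illustration, not the decision tree from Glessmer & Lüth, 2016): you first decide which cognitive process a question targets, then which type of knowledge it draws on, and that combination points you to one cell of the taxonomy table. The category names are the standard ones from Anderson & Krathwohl (2001); the example classification at the end is mine.

```python
# Toy sketch: locating a question in the two dimensions of the revised
# Bloom's taxonomy (Anderson & Krathwohl, 2001). This only illustrates the
# general idea, not the published decision tree from Glessmer & Lüth (2016).

COGNITIVE_PROCESSES = ["remember", "understand", "apply",
                       "analyze", "evaluate", "create"]
KNOWLEDGE_TYPES = ["factual", "conceptual", "procedural", "metacognitive"]


def locate(cognitive_process: str, knowledge_type: str) -> tuple[int, int]:
    """Return the (process, knowledge) cell of the taxonomy table."""
    return (COGNITIVE_PROCESSES.index(cognitive_process),
            KNOWLEDGE_TYPES.index(knowledge_type))


# "Which category does this new example belong to, and why?" asks students
# to analyze conceptual knowledge:
print(locate("analyze", "conceptual"))  # -> (3, 1)
```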

Here is one example; links to the others are below.

Downloads:

  • Abstract decision tree (most helpful for getting familiar with the general concept) [pdf English | pdf German]
  • Decision tree with example questions (most helpful for constructing, or classifying, or changing questions) [pdf English | pdf German]
  • Decision tree with example multiple-choice questions (most helpful as inspiration when working with multiple-choice questions) [pdf English | pdf German]
  • Comparison of our decision tree with “conventional” types of questions (if you want to find out what a “concept question” really is when classified in the Bloom taxonomy) [pdf English | pdf German]

Any comments, feedback, suggestions? Please do get in touch!

Glessmer, M. S., & Lüth, T. (2016). Lernzieltaxonomische Klassifizierung und gezielte Gestaltung von Fragen. Zeitschrift für Hochschulentwicklung, 11(5). doi: 10.3217/zfhe-11-05/12

Finding the right instructional method for different kinds of knowledge

When reading Anderson & Krathwohl’s 2001 revised taxonomy of educational objectives, I really liked how they made clear that different kinds of knowledge require different instructional approaches as well as different kinds of assessment.

For example, if you were to teach remembering factual knowledge, you would probably spend quite some time reminding students of the details you want them to remember. You would probably also point out strategies that could help, like rehearsing facts, or techniques, like mnemonics.

To assess whether students remember factual knowledge, you might want to have them match lists of technical terms with definitions of those terms, drawings of technical parts with the names of those parts, or physical constants with their units or values.

If, on the other hand, you were to teach analyzing conceptual knowledge, good strategies would be to focus on categories and classifications so students get an idea of where a concept is located in the bigger landscape of the field they are currently studying. To better understand categories and classes, discussing examples as well as non-examples is helpful. Also, emphasizing the differences between categories helps.

To assess analysis of conceptual knowledge, you might want to give a new example of a member of a category you discussed in class. Then you might ask students which category the example belongs to, how they know which category it belongs to, or how you could modify the example so it matches a different category.

While you probably do a lot of this intuitively already, I find it helpful to think about the different categories in order to systematically find good instructional strategies. And it is especially helpful to remember that even though you might be able to place your learning objective in one category, teaching towards that objective might require activities that belong to different categories. In particular, you might want to use complex processes to facilitate learning of simpler objectives.

For example, when teaching students to apply conceptual knowledge, you might want to give them the chance to first classify the type of problem they are working on. Then they should select the appropriate laws that describe the problem, and then implement the proper procedures to solve it. In order to be able to do that, they might need to recall meta-cognitive strategies, and then implement those. They should also check the implementation of the procedure before finally critiquing the correctness of their solution. And as you might have noticed, those steps are all over the place, both along the knowledge dimension and along the cognitive dimension (see below).

[Figure: Revised taxonomy of educational objectives: instruction for “apply conceptual knowledge”]
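
For what it’s worth, here is roughly how I would place those steps in the two dimensions; this is a sketch of my own reading of the example above, and the exact placement in Anderson & Krathwohl (2001) may well differ.

```python
# Rough placement of the steps involved in applying conceptual knowledge
# into the two taxonomy dimensions. The assignments below are my own reading
# of the verbs (classify, implement, recall, check, critique), not a table
# taken from Anderson & Krathwohl (2001).
steps = [
    ("classify the type of problem",           "understand", "conceptual"),
    ("select the appropriate laws",            "understand", "conceptual"),
    ("implement the solution procedure",       "apply",      "procedural"),
    ("recall meta-cognitive strategies",       "remember",   "metacognitive"),
    ("implement those strategies",             "apply",      "metacognitive"),
    ("check the implementation",               "evaluate",   "procedural"),
    ("critique the correctness of the result", "evaluate",   "conceptual"),
]

for step, process, knowledge in steps:
    print(f"{step:42s} -> {process:10s} x {knowledge} knowledge")
```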

Now, thinking about assessment: do you really only want to test whether students are able to arrive at the correct solution when you ask them to apply conceptual knowledge, or would you rather see how well they do along the way and test all the different categories? This is, of course, up to you, and either choice has its advantages. But it is definitely worth thinking about.

The book gives a lot of examples of assessment for all the six categories along the cognitive process dimension, broken down to match all the sub-categories along that dimension. It’s really worth looking into that, if only for inspiration!

Bloom, B., Engelhart, M., Furst, E., Hill, W., & Krathwohl, D. (1956). Taxonomy of educational objectives: The classification of educational goals. Handbook I: Cognitive domain. New York, Toronto: Longmans, Green.

Anderson, L., & Krathwohl, D. (Eds.) (2001). A taxonomy for learning, teaching, and assessing: A revision of Bloom’s taxonomy of educational objectives. Pearson Education.

Currently reading: A revision of Bloom’s taxonomy of educational objectives

I am currently reading the Anderson & Krathwohl (2001) revised taxonomy of learning outcomes. They modified some of the higher levels of the original Bloom (1956) taxonomy and now use a continuum of cognitive processes from “remember” to “create”. They also introduced a second dimension of types of knowledge, ranging from concrete to abstract. While they break those two dimensions down into discrete categories for discussion, they point out that the categories lie along a continuum, similar to how colors lie on a continuum of wavelengths of light.

[Figure: Revised taxonomy of educational objectives]

I have recently worked a bit on how using taxonomies of learning outcomes can help me give advice to university teachers, and reading the book was really helpful, because they break down the categories and give examples of how learning objectives in each of the categories can be assessed.

For example, the most basic category, “remember”, can consist of either recognizing or recalling, which would be assessed in different ways. Whether students recognize something can be tested by asking verification questions: “Is it true or false that …?”. Students could also be asked to match corresponding items on two lists or to find the best choice in a multiple-choice scenario. Recalling, however, might be assessed by giving a prompt and asking students to complete it: “A freak wave is a wave with a wave height that ___”.

If you are interested in learning more about how learning outcomes can help you in planning your teaching, check out this awesome resource and stay tuned – I will be back with more!

Bloom, B., Engelhart, M., Furst, E., Hill, W., & Krathwohl, D. (1956). Taxonomy of educational objectives: The classification of educational goals. Handbook I: Cognitive domain. New York, Toronto: Longmans, Green.

Anderson, L., & Krathwohl, D. (Eds.) (2001). A taxonomy for learning, teaching, and assessing: A revision of Bloom’s taxonomy of educational objectives. Pearson Education.

Teaching: intentional and reasoned act

I’ve talked about Bloom’s taxonomy of learning objectives before but I have to admit that I’ve only gone back and read the original Bloom (1956) book and the revised taxonomy by Anderson & Krathwohl (2001) very recently. And it has been so worth it!

I’ve spent the better part of last year coaxing people at work into writing learning objectives, and there has been a lot of opposition to doing so, mainly because people didn’t see the point of writing down learning objectives when they could just write down the content of their courses instead. But what we need to remember is that everybody does have learning objectives, even though they might not have formulated them explicitly.

In the Anderson & Krathwohl (2001) book there is a quote that I like:

Objectives are especially important in teaching because teaching is an intentional and reasoned act.

Nothing too surprising here, but when you think about it, it is really beautiful, and a lesson that many of my colleagues should take to heart.

Of course teaching is reasoned: we teach what we believe is important for our students to learn. What we judge as important to learn might depend on many different factors: it is closely related to our own speciality, or our own speciality builds on it, or a group of experts we trust decided it is important, or it has been on the curriculum forever and we feel it has stood the test of time. But in the end, everything we teach has been judged important enough by us.

But then, how we teach is also intentional. We provide materials and activities, help students gain experiences, and create a learning environment. No matter how much or how little thought is put into creating that learning environment: in the end, we all do our best to create an environment that is conducive to learning. Now, what we deem a good learning environment is highly subjective. Some people think that a lecture theatre with blackboards and a frontal lecture is the best environment; others prefer studio learning on projects in small groups. But I think it is super important for educational developers to recognize that, no matter whether they agree that the learning environments they encounter at work are the best possible ones, those environments are still (for the most part) intentional. Of course there is usually room for improvement, but I find it really dangerous to assume that the people we work with are not intentional in how they approach teaching, or that they might not have very good reasons for doing exactly what they are doing.

So I guess what I am trying to say is this: please, dear colleagues (and you know who you are!), instead of going on and on about how the teaching staff you are working with use instructional strategies that you don’t like, give them the benefit of the doubt and try to support them in a way they would actually like to be supported. And believe it or not: they might even be happy for you to work with them! :-)

Bloom, B., Engelhart, M., Furst, E., Hill, W., & Krathwohl, D. (1956). Taxonomy of educational objectives: The classification of educational goals. Handbook I: Cognitive domain. New York, Toronto: Longmans, Green.

Anderson, L., & Krathwohl, D. (Eds.) (2001). A taxonomy for learning, teaching, and assessing: A revision of Bloom’s taxonomy of educational objectives. Pearson Education.

The structural complexity of learning outcomes

The Structure of the Observed Learning Outcome taxonomy.

I talked about the classification of learning outcomes according to Bloom’s taxonomy recently, and got a lot of feedback from readers that the examples of multiple-choice questions at different Bloom levels were helpful. So today I want to present a different taxonomy, the “Structure of the Observed Learning Outcome” (SOLO), which classifies the structural complexity of learning outcomes.

SOLO was first published in 1982 by Biggs and Collis, and the original book is sadly out of print (but available in Chinese). There is a lot of material out there that either describes SOLO or applies it to teaching questions, so you can get quite a good idea of the taxonomy without having read the original source (or so I hope ;-)).

SOLO has four levels of competence: unistructural, multistructural, relational, and extended abstract. At the unistructural level, students can identify or name one important aspect. Multistructural then means that the student can describe or list several unconnected important aspects. At the relational level, students can combine several important aspects and analyze, compare and contrast, or explain causes. Lastly, at the extended abstract level, students can generalize to a new domain, hence create, reflect, or theorize.

[Figure: Visualization of the different levels of competence in the SOLO taxonomy]

While competence is assumed to increase over those four levels (in fact, there is a fifth level, the prestructural one, which comes before those four and at which the student has completely missed the point), difficulty does not necessarily increase in a similar way.

Depending on how questions are asked, the level of competence that is being tested can be restricted. I am going to walk you through all the levels with an example on waves (following the mosquito example here). For example, asking “What is the name for waves that are higher than twice the significant wave height?” requires only a unistructural response. There is basically no way to arrive at that answer by reasoning at a higher competence level.

Asking “List five different types of waves and outline their characteristics.” requires a multistructural response. A student could, however, answer at the relational level (by comparing and contrasting properties of those five wave types) or even at the extended abstract level (if the classification criteria were not only described, but also critically discussed).

A higher SOLO level would be required to answer this question: “List five different types of waves and discuss the relative risks they pose to shipping.”

At worst, this would require a multistructural response (listing the five types of waves and the danger each poses to shipping). But a relational response is more likely (for example, by picking a criterion, e.g. wave height, and discussing the relative importance of the different types of waves with regard to that criterion). The question could even be answered at the extended abstract level (by discussing how relevance could be assessed and how the usefulness of the chosen criteria could be evaluated). Since the word “relative” is part of the question, we are clearly inviting a relational response.

In order to invite an extended abstract response, one could ask a question like this one:

“Discuss how environmental risks to shipping could be assessed. In your discussion, use different types of waves as examples.”

Is it helpful for your own teaching to think about the levels of competence that you are testing by asking yourself at which SOLO level your questions are aiming, or do you prefer Bloom’s taxonomy? Are you even combining the two? I am currently writing a post comparing SOLO and Bloom, so stay tuned!

How to ask multiple-choice questions when specifically wanting to test knowledge, comprehension or application

Multiple choice questions at different levels of Bloom’s taxonomy.

Let’s assume you are convinced that using ABCD-cards or clickers in your teaching is a good idea. But now you want to tailor your questions to specifically test, for example, knowledge, comprehension, application, analysis, synthesis, or evaluation: the six educational goals described in Bloom’s taxonomy. How do you do that?

I was recently reading a paper on “the memorial consequences of multiple-choice testing” by Marsh et al. (2007), and while the focus of that paper is clearly elsewhere, they give a very nice example of one question tailored once to test knowledge (Bloom level 1) and once to test application (Bloom level 3).

For testing knowledge, they describe asking “What biological term describes an organism’s slow adjustment to new conditions?”. They give four possible answers: acclimation, gravitation, maturation, and migration. Then for testing application, they would ask “What biological term describes fish slowly adjusting to water temperature in a new tank?” and the possible answers for this question are the same as for the first question.

Even if you are not as struck by the beauty of this example as I was, you will surely appreciate that it sent me on a literature search for examples of how Bloom’s taxonomy can help design multiple-choice questions. And indeed I found a great resource. Unfortunately I haven’t been able to track down the whole paper, but “Appendix C: MCQs and Bloom’s Taxonomy” of “Designing and Managing MCQs” by Carneson, Delpierre and Masters contains a wealth of examples. Rather than just repeating their examples, I am giving you my own examples inspired by theirs*. But theirs are certainly worth reading, too!

Bloom level 1: Knowledge

At this level, all that is asked is that students recall knowledge.

Example 1.1

Which of the following persons first explained the phenomenon of “westward intensification”?

  1. Sverdrup
  2. Munk
  3. Nansen
  4. Stommel
  5. Coriolis

Example 1.2

In oceanography, which one of the following definitions describes the term “thermocline”?

  1. An oceanographic region where a strong temperature change occurs
  2. The depth range where temperature changes rapidly
  3. The depth range where density changes rapidly
  4. A strong temperature gradient
  5. An isoline of constant temperature

Example 1.3

Molecular diffusivities depend on the property or substance being diffused. From high to low molecular diffusivities, which of the sequences below is correct?

  1. Temperature > salt > sugar
  2. Sugar > salt > temperature
  3. Temperature > salt = sugar
  4. Temperature > sugar > salt

Bloom level 2: Comprehension

At this level, understanding of knowledge is tested.

Example 2.1

Which of the following describes what an ADCP measures?

  1. How quickly a sound signal is reflected by plankton in sea water
  2. How the frequency of a reflected sound signal changes
  3. How fast water is moving relative to the instrument
  4. How the sound speed changes with depth in sea water

Bloom level 3: Application

Knowledge and comprehension of that knowledge are assumed; now it is about testing whether it can also be applied.

Example 3.1

What velocity will a shallow water wave have in 2.5 m deep water?

  1. 1 m/s
  2. 2 m/s
  3. 5 m/s
  4. 10 m/s
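
In case you want to check the arithmetic behind this one: for a shallow water wave, the phase speed depends only on the water depth via the standard formula below (assuming g = 9.81 m/s²), which lands you very close to the third option.

```latex
% Shallow water wave phase speed
c = \sqrt{g\,H}
  = \sqrt{9.81\,\mathrm{m\,s^{-2}} \times 2.5\,\mathrm{m}}
  \approx 4.95\,\mathrm{m\,s^{-1}}
  \approx 5\,\mathrm{m\,s^{-1}}
```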

Example 3.2

Which instrument would you use to make measurements if you wanted to calculate the volume transport of a current across a ridge?

  1. CTD
  2. ADCP
  3. ARGO float
  4. Winkler titrator

These were only the first three Bloom levels, but this post is long enough already, so I’ll stop here for now and get back to you with the others later.

Can you see yourself using the Bloom taxonomy as a tool when preparing multiple-choice questions?

If you are reading this post and think that it is helpful for your own teaching, I’d appreciate it if you dropped me a quick line; this post specifically was actually more work than play to write. But if you find it helpful I’d be more than happy to continue with this kind of content. Just lemme know! :-)

* If these questions were used in class rather than as a way of testing, they should additionally contain the option “I don’t know”. Giving that option avoids wild guessing and gives you clearer feedback on whether or not students know (or think they know) the answer. It makes the data a whole lot easier to interpret for you!

Classifying educational goals using Bloom’s taxonomy

How can you classify different levels of skills you want your students to gain from your classes?

Learning objectives are traditionally categorized according to Bloom’s (1956) taxonomy. Bloom separates learning objectives into three classes: cognitive, affective, and psychomotor. Cognitive learning objectives are about what people know and understand, and about the thinking processes they use to deal with and synthesize that knowledge. Affective learning objectives are about feelings and emotions. Lastly, psychomotor learning objectives are about what people do with their hands. Even though Bloom was trying to combine all three classes, in the context of today’s university education the focus is clearly on cognitive learning objectives.

Cognitive learning objectives can be divided into sub-categories. From low-level to high-level processes, those categories are as follows:

Knowledge: Learning gains at this level can, for example, be tested by asking students to repeat, define, or list facts, definitions, or vocabulary.

Comprehension: In order to test comprehension, students can, for example, be asked to describe, determine, demonstrate, explain, translate, or discuss.

Application: The ability to apply concepts is shown, for example, by carrying out a procedure, calculating, solving, illustrating, or transferring.

Analysis: Competency at this level can be tested by asking students to contrast and compare, to analyze, to test, to categorize, or to distinguish.

Synthesis: The ability to synthesize can be shown by assembling, developing, constructing, designing, organizing, or conceiving a product or method.

Evaluation: The highest level, evaluation, can be tested by asking students to justify, assess, value, evaluate, appraise, or select.

In the next post I’ll talk about how you can use this classification to help with developing good multiple-choice questions, so stay tuned!