Tag Archives: literature

Importance of designing experiments

At work, we are currently editing a brochure on designing and carrying out lab courses, and we are working on a lot of projects that aim at redesigning labs. One question comes up all the time: Does it really make a difference whether students design an experiment themselves or carry out an experiment that was set up for them beforehand?

There is a nice paper, “Spending Time On Design: Does It Hurt Physics Learning?” by Etkina et al. (2007), that sets out to answer exactly this question. They wonder: Are students in design labs able to transfer the scientific abilities they acquired to new content the next semester?

In their context, “design labs” are labs in which students design their own experiments. “Scaffolding” is provided, meaning that the questions the students work on focus on the individual steps of the scientific process. The students’ work is guided by TAs who respond to questions with questions of their own, but who don’t give answers. Non-design labs, on the other hand, use the same equipment, the same number of experiments or more, and a guided write-up. Students have to, for example, draw free-body diagrams etc. to solve problems, but theoretical assumptions are provided in the text. In this type of lab, TAs give an overview in the beginning and do answer questions.

On the midterm and final exam, the authors find that the design group outperformed the other group, especially when students had to identify and analyze assumptions. And the difference persisted even a semester later, during which both groups did design labs.

So I guess yes, it does make a difference. And when we are redesigning lab courses, we should try to include as much design by the students as possible if our goal is to help them become scientists.

Etkina, E., Van Heuvelen, A., Karelina, A., Ruibal-Villasenor, M., & Rosengrant, D. (2007). Spending time on design: Does it hurt physics learning? AIP Conference Proceedings, 951. DOI: 10.1063/1.2820955

Facilitating student group work

Grouping students together for collaborative work is easy, but how do we make them work as a team?
Collaborative learning is often promoted as the ultimate tool to increase learning outcomes, to help students learn at a deeper level and remember what they learned for longer, and to turn them into better team players as professionals. But many people I work with perceive “group work” as a hassle that costs a lot of time, lets weak or lazy students hide behind others, breeds conflict, and is deemed more of a “kindergarten” method than one worthy of being used at a university.
I recently found a paper that addresses all those issues and – even better – provides instructions on how to organize student team work: “Turning Student Groups into Effective Teams” by Oakley et al. (2004). I’ll give a brief summary of their main points below.

Should you even form teams?
Do you form them or let them form themselves? The authors are clear on this point:
“Instructors should form teams rather than allowing students to self-select.” As we’ve seen over and over: if students are allowed to pick the groups they’d like to work in, weak students will likely end up working together, and so will strong students. This is, for obvious reasons, not optimal for the weak groups, but the strong groups don’t benefit as much from the assignments as they could in mixed groups either: strong students tend to divvy up the work among themselves and put the pieces together in the end without much discussion of how the individual pieces fit, ignoring the bigger picture. Forming student groups rather than having them self-select will raise objections from the students, but it is probably worth facing that discussion anyway.

Then how do you form groups?
The authors present two guidelines, based on previous research:
1. Make sure groups are diverse in ability and that they have common free time slots outside of class so they have a chance to meet up.
2. Make sure at-risk minority students are well included in their groups.
Team sizes, they say, are optimally between 3 and 5 members.
The second guideline, on at-risk minority students, is interesting: if women are the minority you are currently concerned about, the authors suggest forming groups of all men, all women, two of each, or two or three women and one man – but not one woman and two or three men, because the isolation that woman might feel within her team could reinforce the feeling of isolation at university.
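
Just to make these guidelines concrete, here is a minimal sketch, in Python, of how they might translate into a team-formation heuristic. Everything in it – the round-robin dealing, the ability scores, the time-slot sets – is my own illustration, not something from the paper:

```python
def common_slots(team):
    # Guideline 1: the team needs at least one shared free time slot
    # outside of class.
    return bool(set.intersection(*(s["free_slots"] for s in team)))

def minority_rule_ok(team):
    # Guideline 2, reading "women" as the minority of concern: avoid a
    # lone woman on an otherwise male team (0, 2, 3, ... women are fine).
    return sum(1 for s in team if s["gender"] == "f") != 1

def form_teams(students, team_size=4):
    # Sort by ability and deal students out round-robin, so every team
    # mixes stronger and weaker students; then flag teams that violate
    # either guideline so they can be re-shuffled by hand.
    ranked = sorted(students, key=lambda s: s["ability"], reverse=True)
    n_teams = max(1, len(ranked) // team_size)
    teams = [[] for _ in range(n_teams)]
    for i, student in enumerate(ranked):
        teams[i % n_teams].append(student)
    flagged = [t for t in teams
               if not common_slots(t) or not minority_rule_ok(t)]
    return teams, flagged

students = [
    {"name": "Ada",  "ability": 1.0, "gender": "f", "free_slots": {"Mon pm", "Wed am"}},
    {"name": "Ben",  "ability": 2.3, "gender": "m", "free_slots": {"Mon pm"}},
    {"name": "Cora", "ability": 2.7, "gender": "f", "free_slots": {"Mon pm", "Fri am"}},
    {"name": "Dave", "ability": 3.7, "gender": "m", "free_slots": {"Mon pm"}},
]
teams, flagged = form_teams(students)
print(teams)    # one mixed-ability team of four with two women: passes both rules
print(flagged)  # [] - nothing to re-shuffle in this toy example
```

In real life you would of course iterate – swapping students between teams until nothing is flagged – but even the greedy version shows how the three criteria (mixed ability, shared time slots, no isolated minority student) interact.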

And what data do you need to form groups?
This is where I am not sure the authors’ advice can be applied to our situation. Of course it is desirable to know grades in previous courses etc., but collecting that data is problematic in our legal system.

And what if I want to re-form groups?
The authors say that they re-shuffle teams after 4-6 weeks unless they get individually signed requests to stay together from all team members – which, they report, they do get from most teams, except the really dysfunctional ones. They also report that difficult (domineering or uncooperative) team members usually behave a lot better in their new teams.

So now we have groups. But how do we build effective teams?
The authors say: “With a group, the whole is often equal to or less than the sum of its parts; with a team, the whole is always greater.” So investing in team building is definitely worthwhile. The first thing they recommend is to

-> establish expectations
This consists of two steps: set out clear guidelines, and have team members formulate a common set of expectations of one another. The authors provide forms to help guide the process: a statement of policies and an agreement on expectations. The former lays out how good teamwork should be done; the latter is a form that students sign and hand in.
A nice tip is to have students name their teams, maybe based on common interests, to help build identity in the team.

-> give instructions on effective team practices
In order for students to learn to work in teams effectively, the authors give them several pieces of advice:
– Stick to your assigned roles! It will make teamwork run more smoothly; plus, each role comes with a skill set that you are expected to practice while filling that role, so don’t cheat yourself out of that learning experience.
– Don’t “divide and conquer”. If you split up the work and only stick it back together in the end, you won’t learn enough about all parts of the project to fully understand what we want you to understand.
– Come up with solutions individually and then discuss them as a team. If you always just listen to the fastest person on your team coming up with ideas, you won’t get the practice you will need later.

Dealing with problematic team members
Have you ever been on a team where everybody pulled their weight, nobody tried to dominate the group, nobody refused to work in the team, and everybody had the same goal? Right, me neither. So what can you do?
The authors suggest handing out a short text on “coping with hitchhikers and couch potatoes on teams” and asking students to write a short essay on it. Having them write something about the text makes sure they have actually read it – and maybe even thought about it. The authors state – and I find this super interesting, even though not surprising – that “probably the best predictor of a problematic team member is a sloppy and superficial response to this assignment.”

-> firing students from teams, or students quitting
The authors present a model for “firing” problematic students from teams, or for individual students resigning, in which the whole group has to go through a counseling session with the instructor. Both parties learn to actively listen, repeating the complainer’s case back to the complainer. This, the authors say, almost always resolves the problem, because verbalizing someone else’s position sets off a reflection process. If things are not resolved, however, a week later a letter is sent notifying everybody on the team and the instructor of the intention to fire or to quit. Another week later, if things still haven’t improved, a second letter is sent, again to everybody on the team plus the instructor, finalizing the decision. Apparently it hardly ever comes to that, because things have resolved themselves before.

For those students who do get fired, there are several possible models: they can either get zeros on the team assignments for the rest of the year, or find another team that is willing to take them on. The authors point out the importance of having those rules written out in this day and age of lawsuits.

-> the crisis clinic
Another measure the authors suggest is to occasionally run “crisis clinics”, i.e. short sessions on problematic behaviors, like for example hitchhiking, in which students brainstorm together how to deal with those issues. Collecting ideas serves two purposes: to show hitchhikers how frustrated the rest of the group might get with their behavior, and to equip everybody with strategies for dealing with that kind of behavior.

But it is also important to point out to students that if they keep putting a hitchhiker’s name on the group assignments, they can’t complain later.

Phew. The authors continue on, talking about peer grading and going through a long list of FAQs, but I think I’ve written enough for today. Do check out the paper, though – there is so much more in there than I could cover here!

Oakley, B., Brent, R., Felder, R. M., & Elhajj, I. (2004). Turning student groups into effective teams. New Forums Press, Stillwater, OK.

Finding the right instructional method for different kinds of knowledge

When reading Anderson & Krathwohl’s 2001 revised taxonomy of educational objectives, I really liked how they made clear that different kinds of knowledge require different instructional approaches as well as different kinds of assessment.

For example, if you were to teach remembering factual knowledge, you would probably spend quite some time reminding students of the details you want them to remember. You would probably also point out strategies that could help, like rehearsing facts, or techniques, like mnemonics.

To assess whether students remember factual knowledge, you might want to have them match lists of technical terms with definitions of those terms, drawings of technical parts with the names of those parts, or physical constants with their units or values.

If, on the other hand, you were to teach analyzing conceptual knowledge, good strategies would be to focus on categories and classifications so students get an idea of where a concept is located in the bigger landscape of the field they are currently studying. To better understand categories and classes, discussing examples as well as non-examples is helpful. Also, emphasizing the differences between categories helps.

To assess analysis of conceptual knowledge, you might want to give a new example of a member of a category you discussed in class. Then you might ask students which category the example belongs to, how they know which category it belongs to, or how you could modify the example so it matches a different category.

While you probably do a lot of this intuitively already, I find it helpful to think about the different categories in order to systematically find good instructional strategies. And it is especially helpful to remember that even though you might be able to place your learning objective in one category, teaching toward that objective might require activities that belong to different categories. In particular, you might want to use complex processes to facilitate learning of simpler objectives.

For example, when applying conceptual knowledge, you might want to give your students the chance to first classify the type of problem they are working on. Then they should select the appropriate laws that describe the problem. Then they need to implement the proper procedures to solve the problem. In order to be able to do that, they might need to recall meta-cognitive strategies, and then implement those. They should also check the implementation of the procedure before finally critiquing the correctness of their solution. And as you might have noticed, those steps are all over the place, both along the knowledge dimension and along the cognitive process dimension (see the figure below).

[Figure: Revised taxonomy of educational objectives – instruction for “apply conceptual knowledge”]
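
To make the “all over the place” point concrete, here is a small sketch that tags each step of the example above with the taxonomy cell – (cognitive process, knowledge type) – it mainly exercises. The tagging is my own reading of the example, not a table from the book:

```python
# Steps of "apply conceptual knowledge" from the example above, each
# tagged with the cell of the revised taxonomy it mainly exercises.
steps = [
    ("classify the type of problem",             ("understand", "conceptual")),
    ("select the appropriate laws",              ("understand", "conceptual")),
    ("implement the proper procedures",          ("apply",      "procedural")),
    ("recall meta-cognitive strategies",         ("remember",   "meta-cognitive")),
    ("implement those strategies",               ("apply",      "meta-cognitive")),
    ("check the implementation",                 ("evaluate",   "procedural")),
    ("critique the correctness of the solution", ("evaluate",   "conceptual")),
]

# One learning objective, many different cells:
for step, (process, knowledge) in steps:
    print(f"{step:45} -> {process:10} x {knowledge}")
```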

Now, thinking of assessment: do you really only want to test whether students are able to arrive at the correct solution when you ask them to apply conceptual knowledge, or would you rather see how well they do along the way and test all the different categories? This is, of course, up to you, and either choice has its advantages. But it is definitely worth thinking about.

The book gives a lot of examples of assessment for all the six categories along the cognitive process dimension, broken down to match all the sub-categories along that dimension. It’s really worth looking into that, if only for inspiration!

Bloom, B., Engelhart, M., Furst, E., Hill, W., & Krathwohl, D. (1956). Taxonomy of educational objectives: The classification of educational goals. Handbook I: Cognitive domain. New York, Toronto: Longmans, Green.

Anderson, L., & Krathwohl, D. (Eds.) (2001). A taxonomy for learning, teaching, and assessing: A revision of Bloom’s taxonomy of educational objectives. Pearson Education.

Currently reading: A revision of Bloom’s taxonomy of educational objectives

I am currently reading Anderson & Krathwohl’s (2001) revised taxonomy of educational objectives. They modified some of the higher levels of the original Bloom (1956) taxonomy and now use a continuum of cognitive processes from “remember” to “create”. They also introduced a second dimension of types of knowledge, ranging from concrete to abstract. While they break those two dimensions down into discrete categories for discussion, they point out that the categories lie along a continuum, similar to how colors lie along a continuum of wavelengths of light.

[Figure: Revised taxonomy of educational objectives]

I have recently worked a bit on how taxonomies of learning outcomes can help me give advice to university teachers, and reading the book was really helpful, because the authors break down the categories and give examples of how learning objectives in each category can be assessed.

For example, the most basic category, “remember”, can consist of either recognizing or recalling, which would be assessed in different ways. Whether students recognize something can be tested by asking verification questions (“Is it true or false that …?”). Students could also be asked to match corresponding items on two lists, or to find the best choice in a multiple-choice scenario. Recalling, however, might be assessed by giving students a prompt to complete: “A freak wave is a wave with a wave height that ___”.

If you are interested in learning more about how learning outcomes can help you in planning your teaching, check out this awesome resource and stay tuned – I will be back with more!

Bloom, B., Engelhart, M., Furst, E., Hill, W., & Krathwohl, D. (1956). Taxonomy of educational objectives: The classification of educational goals. Handbook I: Cognitive domain. New York, Toronto: Longmans, Green.

Anderson, L., & Krathwohl, D. (Eds.) (2001). A taxonomy for learning, teaching, and assessing: A revision of Bloom’s taxonomy of educational objectives. Pearson Education.

Teaching: intentional and reasoned act

I’ve talked about Bloom’s taxonomy of learning objectives before but I have to admit that I’ve only gone back and read the original Bloom (1956) book and the revised taxonomy by Anderson & Krathwohl (2001) very recently. And it has been so worth it!

I’ve spent the better part of last year coaxing people at work into writing learning objectives, and there has been a lot of opposition to doing so, mainly because people didn’t see the point of writing down learning objectives when they could just write down the content of their courses instead. But what we need to remember is that everybody does have learning objectives, even though they might not have formulated them explicitly.

In the Anderson & Krathwohl (2001) book there is a quote that I like:

Objectives are especially important in teaching because teaching is an intentional and reasoned act.

Nothing too surprising here, but when you think about it, it is really beautiful, and a lesson that many of my colleagues should take to heart.

Of course teaching is reasoned: we teach what we believe is important for our students to learn. What we judge as important might depend on many different factors – it is closely related to our own speciality, or our own speciality builds on it, or a group of experts we trust decided it is important, or it has been on the curriculum forever and we feel it has stood the test of time – but in the end, everything we teach has been judged important enough by us.

But how we teach is also intentional. We provide materials and activities, help students gain experiences, create a learning environment. And no matter how much or how little thought is put into creating that learning environment: in the end, we all do our best to create an environment that is conducive to learning. What makes a good learning environment, though, is highly subjective. Some people think that a lecture theatre with blackboards and a frontal lecture is the best environment; others prefer studio learning, working on projects in small groups. But I think it is super important for educational developers to recognize that the learning environments they encounter at work are (for the most part) intentional, whether or not they agree that those environments are the best possible ones. Of course there is usually room for improvement, but I find it really dangerous to assume that the people we work with are not intentional in how they approach teaching, or that they don’t have very good reasons for doing exactly what they are doing.

So I guess what I am trying to say is this: please, dear colleagues (and you know who you are!), instead of going on and on about how the teaching staff you are working with use instructional strategies that you don’t like, give them the benefit of the doubt, and try to support them in a way they would actually like to be supported. And believe it or not: they might even be happy for you to work with them! :-)

Bloom, B., Engelhart, M., Furst, E., Hill, W., & Krathwohl, D. (1956). Taxonomy of educational objectives: The classification of educational goals. Handbook I: Cognitive domain. New York, Toronto: Longmans, Green.

Anderson, L., & Krathwohl, D. (Eds.) (2001). A taxonomy for learning, teaching, and assessing: A revision of Bloom’s taxonomy of educational objectives. Pearson Education.

Conceptual change and wooly hats

“Conceptual change” is one of those big words that get thrown into every conversation on teaching and learning these days. But most people I talk to don’t really have a clear idea of what conceptual change actually means, let alone how you would go about changing concepts. I found an article by Watson and Kopnicek (1990) that tells a nice story of conceptual change happening for some students in a primary school class.
In the story, children claim that sweaters, down sleeping bags, and hats are “hot”. They therefore expect to measure rising temperatures when they put thermometers inside those hot garments, and when this does not happen, they spend days trying to improve the measuring conditions – surely drafts of cold air must be keeping the readings at room temperature, even though the thermometers are in places that are bound to get hot. At some point the children are thoroughly confused, because they cannot get the thermometers to show what they know to be true: that it must be hot inside hot garments. Eventually, most of the children come to be convinced that there is a difference between bodies that produce heat (like the sun or their own bodies) and other things that just keep heat from disappearing.
What I really like about this article is that it illustrates very well why children have the mental models of heat that they have: “For nine winters, experience had been the children’s teacher. Every hat they had worn, every sweater they had donned contained heat. “Put on your warm clothes,” parents and teachers had told them. So when they began to study heat one spring day, who could blame them for thinking as they did?”. It also describes the struggles the children have with the cognitive dissonance that arises when their observations just don’t match their expectations, and how long it takes until they are willing to consider that their initial model might not be correct.
Those struggles felt very real to me when reading the article. And I think it is very important to always remember that if we are asking someone to change their concept of something, what we are putting them through is a really difficult process. Not just intellectually, but also emotionally.
The article ends with recommendations of how to support conceptual change and with children running around with thermometers in their hats, testing the new theory that – even though they had found that a hat itself could not produce warmth – a body heating the hat would finally lead to rising temperature readings on the thermometers.
 Go check it out for yourselves!

Peer instruction: combine it with individual thinking or with whole-class discussion?

Make sure the room stays silent during the first step of the clicker sequence.

When using clickers in class, there are many different ways of implementing clicker questions and peer instruction, for example the Mazur sequence (our default sequence) and the sequence used by the Physics Education Research Group at UMass (PERG). Let’s recall:

The Mazur sequence:
1. A concept question is asked
2. Students think individually for a couple of minutes
3. Students vote on the question
4. The result of the vote is shown as a histogram
5. Students are asked to convince their neighbor of their answer (“peer instruction”)
6. Students vote again on the same question
7. The result of the second vote is shown as a histogram
8. Lecturer explains correct response and why the distractors were incorrect

The PERG sequence:
1. A concept question is asked
2. Students discuss the question for a couple of minutes in small groups
3. Students vote (individually or as a group)
4. The result of the vote is shown as a histogram
5. Students discuss their answers with the whole class, lecturer facilitates the discussion
6. Lecturer explains correct response and why the distractors were incorrect
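
To see the two protocols side by side, here is a toy encoding – purely my own illustration – that walks through both sequences in parallel and marks where they diverge:

```python
from itertools import zip_longest

mazur = ["concept question posed", "individual thinking", "individual vote",
         "histogram shown", "peer discussion", "second vote",
         "histogram shown", "lecturer explains"]

perg = ["concept question posed", "small-group discussion", "vote",
        "histogram shown", "class-wide discussion", "lecturer explains"]

# "=>" marks steps where the two sequences differ.
for i, (m, p) in enumerate(zip_longest(mazur, perg, fillvalue="-"), start=1):
    marker = "=>" if m != p else "  "
    print(f"{marker} step {i}: Mazur: {m:24} | PERG: {p}")
```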

So the difference is that in the Mazur sequence, students get the chance to think and vote individually before entering the peer-instruction phase, whereas in the PERG sequence, students discuss first and then discuss again in an even bigger group (which is, in my experience, basically what happens anyway when you don’t explicitly ask students to think for themselves first in the Mazur sequence).

Firstly – and all of the following comes from a study by Nicol and Boyle (2003), referenced below, that compared the two sequences – for both models students report that the clickers helped them learn compared to a conventional lecture, because they were more actively involved, felt motivated by receiving immediate feedback, and felt that the instructor adapted instruction to meet their learning needs.

Secondly, in both cases students liked peer instruction, for many of the reasons we use it: they felt they were convinced by the best arguments in the discussion, thus practicing putting forward strong arguments as well as learning the “actual content” of the course. They also mention the scaffolding that peers provide: learning something from someone who only just learned it themselves is easier than learning from an expert, because it is more accessible, both in its language and in the explanation itself.

But do the different sequences make a difference? Rhetorical question, of course they do!

Almost all students preferred starting with individual thinking and voting rather than with peer discussion. They state that the individual vote forced them to think for themselves, whereas in an initial peer discussion they might slide into a passive role and unthinkingly accept answers from others.

As for class-wide discussions: while some students liked hearing both correct and incorrect responses from outside their own peer group, and some also liked the pressure of knowing that you might be called upon to answer a question as a motivator for staying focussed in class, there are drawbacks, too. It takes a lot of time, it is easy to drift away from the question, and the discussion can easily become confusing, in addition to threatening. The benefit of class-wide discussion is seen mostly in cases where the class was clearly divided between two answer choices.

So based on this study, we should definitely make sure to have students vote individually before peer discussion, and this means enforcing silence in the classroom while the students think about how they will vote.

Nicol, D. J., & Boyle, J. T. (2003). Peer instruction versus class-wide discussion in large classes: A comparison of two interaction methods in the wired classroom. Studies in Higher Education, 28(4).

How theories influence the scientific process

Observations are not as objective as we thought they were.

Today I want to talk about the paper “The theory-ladenness of observations and the theory-ladenness of the rest of the scientific process” by Brewer and Lambert (2001). I’ve been thinking about theory-ladenness of observations quite a bit recently, in the context of a brochure on lab instruction we are currently writing, and I found this paper really interesting, especially because a lot of examples are offered*, which makes it an entertaining read. The authors discuss the influence of theory not only on observation, but also on attention, data interpretation, data production, memory, and scientific communication. Here is my summary:

How perception is influenced by theory has, for example, been shown in a study where a picture similar to the one below was shown to participants after one group had been primed with an unambiguous picture of a young woman, while the other group had seen an unambiguous picture of an old woman. Almost all participants of either group perceived the ambiguous picture below as whatever had been shown to them before.

[Figure: Old woman or young woman?]

Another example is the figure below: showing study participants a lot of animals (but no rats) beforehand dramatically shifts whether participants see a man or a rat in it. (Both figures are my renditions of the actual figures, btw!)

[Figure: Man or rat?]

Other studies find that students whose hypothesis is that a plastic ball will fall faster than a metal ball are more likely to report that their observations support that hypothesis than students who said that both balls would fall at equal speeds.

For all these cases, the observations were either ambiguous or difficult to make, resulting in weak bottom-up information which was easily overridden by the top-down theories. However, if the bottom-up information was strong enough, it would still be able to override the top-down information.

Similar examples can be found in the history of science, too: for example, the belief that some planets had moons made it difficult to observe the rings of Saturn as rings rather than moons. It does seem that perception is indeed theory-laden.

Attention is under cognitive control, too. You probably know the video where you are supposed to count how many times a basketball is passed between players (or touches the ground, or whatever). For those of you who don’t know what I am talking about: I was asked to edit my original post so as not to give it away and spoil the surprise for when you do watch the video. But Malte might be writing a guest post for me on this topic :-)

Similar things have been observed throughout the history of science, because attention, too, seems to be theory-directed: there is, for example, evidence of 22 pre-discovery observations of Uranus that were, at the time, dismissed for many different reasons.

But here again: if the bottom-up processes are strong enough, you will see something even if you did not expect to see it.

Data evaluation and interpretation are also influenced by existing theories. It has been shown that scientists, probably not consciously, try to avoid having to change their theories. Data that is consistent with participants’ theories is considered more believable.

Additionally, having a theory can help make sense of and interpret data. The authors give the example of “The haystack was important because the cloth ripped”, which makes a lot more sense if a theoretical framework is given, e.g. “parachute”, and which is a lot easier to remember with that theoretical framework, too.

Even though again top-down processes play an important role in data interpretation, these are typically constrained by bottom-up processes.

In data production, “intellectual phase locking” has been observed, i.e. that measurements of “constants” tend to cluster around the same value for a while, and then jump to a different value, where they cluster again. This is indicative of the tendency to believe earlier, established measurements more than newer measurements, and hence tune instrumentation towards the established values of certain properties.

And I am sure we can all think of moments where a new piece of instrumentation showed something and we rejected it right away because it did not match the value we expected. And probably our assumption that the new equipment needed to be fine-tuned was correct. But then maybe it was not and we just missed the discovery of our life.

Last but not least: memory. Here it has been shown that memory errors are based on pre-existing theories: information confirming a theory is easier to recall than information that deviates from the theory, and memories might even be distorted to match the theory better. The recognition of this has led to attempts to counteract memory errors, for example through careful lab book-keeping.

Again – I am sure we can all think of situations where our memory played tricks on us.

Ok, and one more: communication. According to the authors, formalized ways of communication are structured so as to reduce and organize information. For example, in a standard peer-review process, information that doesn’t seem relevant to the topic at hand gets kicked out – a process that is clearly theory-laden.

This, to me, was actually the scariest point the authors make, because we are used to thinking that structuring a paper and omitting all non-relevant information improves the work. And I never stopped to think about how all the information that did not make it into my papers might not have been “objectively” irrelevant, but might have been discarded based on my subjective perception of relevance to the topic.

To summarize: theories DO influence perception. However, if bottom-up evidence is strong enough, it might still be able to override theoretical top-down misperceptions. The authors conclude by saying that “theories may have their greatest impact, not in observation, but in other cognitive processes such as data gathering, interpretation, and evaluation.”

Wow. Now I will retire to my winter garden to think about what that means not only for my teaching, but maybe even more for my research…

P.S.: Discussing with my colleague P I realized that we might have to define the term “theory” at some point, because my understanding is clearly different from his…

*Since this is an overview paper, the examples come from all kinds of different papers, which I am not referencing here because I haven’t looked at them myself. But they are properly referenced in the Brewer & Lambert (2001) paper, so please go check them out there!

Desirable difficulties

Harder initial learning might make for better long-term retrieval.

A lot of the discussions at my university on how to improve learning focus on how to make learning easier for students. That never sat quite right with me, though I never had a solid basis for that feeling. So today I want to share an article by Adi Jaffe, who argues for “desirable difficulties” in the classroom – difficulties that make the learning process harder in the short term, but more successful in the long term.

A couple of such desirable difficulties are given in the article, some of which I want to discuss here:

Being tested on items repeatedly, even after successful retrieval attempts, helps long-term learning. I discuss this in my post titled “testing drives learning”, based on “The Critical Importance of Retrieval for Learning” by Karpicke and Roediger (2008), so go check it out; for now, let’s just remember that dropping an item from practice after it has been successfully recalled isn’t a good idea.

Having learners generate target material themselves rather than passively consuming it. In the study referred to in the article, students get paragraphs of text which they have to put in order before being able to read the whole text. Another method might be to have students read different parts of a text and then reconstruct the whole text from the pieces each of them knows. Intuitively, this makes sense to me, and it is something we have been applying.

Spacing lessons on a topic out rather than massing them together (see for example Dempster (1988), “The spacing effect: A case study in the failure to apply the results of psychological research”). This one goes against the trend of grouping instruction on specific topics together, both in terms of having “math week” followed by “biology week”, etc., and in terms of having absolutely coherent curricula inside a specific discipline.
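
To illustrate what spacing could mean in scheduling terms, here is a toy sketch – my own illustration, not anything from Dempster’s paper – contrasting a massed schedule with a spaced, interleaved one:

```python
from itertools import cycle, islice

topics = ["math", "biology", "physics"]
sessions_per_topic = 3

# Massed: all sessions on one topic before moving on to the next.
massed = [t for t in topics for _ in range(sessions_per_topic)]

# Spaced and interleaved: rotate through the topics, so repeat visits
# to any one topic are spread out over the term.
spaced = list(islice(cycle(topics), len(topics) * sessions_per_topic))

print(massed)  # ['math', 'math', 'math', 'biology', ...]
print(spaced)  # ['math', 'biology', 'physics', 'math', ...]
```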

I have to say, I am struggling with this one. I do see the research is pretty unambiguous, but… What about all those nicely designed teaching materials that build knowledge, baby step by baby step, that we put so much effort into? Sometimes letting go of an ideal can be really hard. (On the other hand – it does make me feel a whole lot better about not properly proof-reading what I post on this blog. Desirable difficulties, people!)

Speaking of nicely designed materials, let’s get to the last point I want to discuss:

Making fonts harder to read to improve processing. This one I found really interesting: in their study “Fortune favors the bold (and the italicized): Effects of disfluency on educational outcomes”, Diemand-Yauman, Oppenheimer and Vaughan (2011) find that changing fonts from something really clear to something slightly less clear yields improvements in educational outcomes. And this holds for laboratory as well as classroom settings.

Thinking back to my days in school, when we were often given texts that were photocopies of old photocopies of type-written documents, this seems intuitive to me. Since I spent slightly more time reading those texts, I actually did think a little more about them, too. Deciphering did help me process, and remember. The days of the crappy photocopies are long gone, and now we are given perfectly typeset documents on crisp white paper – but such an intervention would be really easy to implement.

I feel like I would like to read a little more on this topic before actually suggesting this at my university, but I am looking forward to digging into the literature on the last two points! How about you? Ready to go for desirable difficulties?

P.S.: There is even some research that suggests that learning in an instructional design that doesn’t cater to your preferred learning style might be one of those “desirable difficulties”. But I’ll save that one for a later date :-)

Dempster, F. (1988). The spacing effect: A case study in the failure to apply the results of psychological research. American Psychologist, 43(8), 627-634. DOI: 10.1037//0003-066X.43.8.627

Diemand-Yauman, C., Oppenheimer, D. M., & Vaughan, E. B. (2011). Fortune favors the bold (and the italicized): Effects of disfluency on educational outcomes. Cognition, 118(1), 111-115. PMID: 21040910

Karpicke, J., & Roediger, H. (2008). The critical importance of retrieval for learning. Science, 319(5865), 966-968. DOI: 10.1126/science.1152408

What do I want from my students – sense-making or answer-making?

On different approaches to peer-instruction and why one might want to use them.

Having sat in many different lectures by many different professors over the last year, and having given feedback on the methods used in most of those lectures, I find myself wondering how we can define a standard or even a best practice for using clickers. Even when professors go through the classical Mazur steps, there are so many different ways they interpret those! Do we, for example, make sure that the first vote is really an individual vote, so that no interaction happens between students before they make this very first decision? I have not seen that implemented at my university. But does that matter? And why would one decide for or against it? I would guess that in most cases I have observed, there was really no conscious decision being made – things just happened to happen a certain way.

A paper that I liked a lot, and which describes a framework for capturing such instructional choices, is “Not all interactive engagement is the same: Variations in physics professors’ implementation of Peer Instruction” by Turpen and Finkelstein (2009). I don’t want to talk about their framework as such, but they ask a couple of questions that I think are a helpful basis for reflecting on our own teaching practices, for example questions clustering around the topic of listening to students and using the information from their answers. “What do I want students to learn, and what do I want to learn from the students?” might seem basic at first, but it is really not. What do I want students to learn? No matter what it is, this question implies another one: “Is the clicker question I am about to ask going to help them in that endeavor?” The clicker question might just test knowledge, or it might make students think about a specific concept which they might get an even better grasp of by reflecting on your question.

And what do I want to learn from my students? The initial reaction of people I have talked with over the last year or so is puzzlement at this question. Why would I want to learn anything from my students? I am there to teach, they are there to learn. But is there really any point in asking questions if you are not trying to learn from the answers? Maybe not “learn” as in “learn new content”, but learn about the students’ grasp of the topic, their learning process, where they are right now. Do I use clicker questions as a way to test their knowledge, to inform my next steps during the class, to help them get a deeper understanding of the topic, or to make them discuss? Those are all worthwhile goals, for sure, but they are different goals. And any one clicker question might or might not be able to help with all of them.

Another question is “do I need to listen to students’ ideas and reasoning, and what are the benefits of listening to students’ ideas?”. Again, this is a question that I am guessing many people I have recently worked with would find strange. Why would I listen to student reasoning that doesn’t lead to the correct answer, or student reasoning that is different from how I want them to reason? Yes, I might learn something about where students go wrong, which might make it easier for me to support them in getting it right. But isn’t it a really bad idea to expose the other students to something that is wrong? I would argue that no, it is not a bad idea. Students need to learn to distinguish between good reasoning and bad reasoning, and they can only do that if they see both good and bad reasoning and learn to figure out why one is good and the other is bad. I know many people are very reluctant to have students explain the reasoning that led them to a wrong answer: it takes time, and it doesn’t seem to lead towards the correct answer. But then what do we want – answer-making or sense-making? Sense-making might involve taking a wrong turn occasionally, and realizing why it was a wrong turn, before taking the right turn in the end. If the wrong answer isn’t elicited, it can’t be confronted or resolved.

I would really recommend you go read that paper. The authors describe the different instructional choices different instructors made, for example in how they interact with students during the clicker questions. Did they leave the stage? Did they answer student questions? Did they discuss with students? (And yes, answering questions and discussing with students are not necessarily the same!) Even though there is no single best practice for using clickers, it is definitely beneficial to reflect on different kinds of practice, or at least to become aware that there ARE different kinds of practice. Plenty to think about!