#TeachingTuesday: Student feedback and how to interpret it in order to improve teaching

Student feedback has become a fixture in higher education. But even though it is important to hear student voices when evaluating teaching and thinking of ways to improve it, students aren’t perfect judges of what type of teaching leads to the most learning, so their feedback should not be taken on board without critical reflection. In fact, there are many studies that investigate specific biases that show up in student evaluations of teaching. So in order to use student feedback to improve teaching (both on the individual level, when we consider changing aspects of our classes based on student feedback, and at the institutional level, when evaluating teachers for personnel decisions), we need to be aware of the biases that student evaluations of teaching come with.

While student satisfaction may contribute to teaching effectiveness, it is not itself teaching effectiveness. Students may be satisfied or dissatisfied with courses for reasons unrelated to learning outcomes – and not in the instructor’s control (e.g., the instructor’s gender).
Boring et al. (2016)

What student evaluations of teaching tell us

In the following, I am not presenting a coherent theory (and if you know of one, please point me to it!); these are snippets of the current literature on student evaluations of teaching, many of which I found referenced in this annotated literature review on student evaluations of teaching by Eva (2018). The aim of my blog post is not to provide a comprehensive literature review, but rather to point out that there is a huge body of literature that teachers and higher ed administrators should know exists somewhere out there, and that they can draw upon when in doubt (and ideally even when not in doubt ;-)).

6-second videos are enough to predict teacher evaluations

This is quite scary, so I thought it made sense to start out with this study. Ambady and Rosenthal (1993) found that silent videos shorter than 30 seconds, in some cases as short as 6 seconds, significantly predicted global end-of-semester student evaluations of teachers. These are videos that do not even include a soundtrack. Let this sink in…

Student responses to questions of “effectiveness” do not measure teaching effectiveness

And let’s get this out of the way right away: When students are asked to judge teaching effectiveness, that answer does not measure actual teaching effectiveness.

Stark and Freishtat (2014) give “an evaluation of course evaluations”. They conclude that student evaluations of teaching, though providing valuable information about students’ experiences, do not measure teaching effectiveness. Instead, ratings are even negatively associated with direct measures of teaching effectiveness and are influenced by the gender, ethnicity, and attractiveness of the instructor.

Uttl et al. (2017) conducted a meta-analysis of faculty’s teaching effectiveness and found that “student evaluation of teaching ratings and student learning are not related”. They state that “institutions focused on student learning and career success may want to abandon [student evaluation of teaching] ratings as a measure of faculty’s teaching effectiveness”.

Students have their own ideas of what constitutes good teaching

Nasser-Abu Alhija (2017) showed that out of five dimensions of teaching (goals to be achieved, long-term student development, teaching methods and characteristics, relationships with students, and assessment), students viewed the assessment dimension as most important and the long-term student development dimension as least important. To students, the grades that instructors assigned and the methods they used to do this were the main aspects in judging good teaching and good instructors. Which is fair enough — after all, good grades help students in the short term — but that’s also not what we usually think of when we think of “good teaching”.

Students learn less from teachers they rate highly

Kornell and Hausman (2016) review recent studies and report that when learning is measured at the end of the respective course, the “best” teachers, i.e. the ones whose students felt they had learned the most, got the highest ratings (which is congruent with Nasser-Abu Alhija (2017)’s findings of what students value in teaching). But when learning was measured during later courses, i.e. when meaningful deep learning was considered, other teachers seem to have been more effective. Introducing desirable difficulties is thus good for learning, but bad for student ratings.

Appearances can be deceiving

Carpenter et al. (2013) compared a fluent video (instructor standing upright, maintaining eye contact, speaking fluidly without notes) and a disfluent video (instructor slumping, looking away, speaking haltingly with notes). They found that even though the amount of learning that took place when students watched either of the videos wasn’t influenced by the lecturer’s fluency or lack thereof, the disfluent lecturer was rated lower than the fluent lecturer.

The authors note that “Although fluency did not significantly affect test performance in the present study, it is possible that fluent presentations usually accompany high-quality content. Furthermore, disfluent presentations might indirectly impair learning by encouraging mind wandering, reduced class attendance, and a decrease in the perceived importance of the topic.”

Students expect more support from their female professors

When students rate teachers’ effectiveness, they do that based on their assumption of how effective a teacher should be, and it turns out that they have different expectations depending on the gender of their teachers. El-Alayli et al. (2018) found that “female professors experience more work demands and special favor requests, particularly from academically entitled students”. This was true both when male and female faculty reported on their experiences and when students were asked about their expectations of fictional male and female teachers.

Student teaching evaluations punish female teachers

Boring (2017) found that even when learning outcomes were the same for students in courses taught by male and female teachers, female teachers received worse ratings than male teachers. The effect got even worse when teachers didn’t act in accordance with the stereotypes associated with their gender.

MacNell et al. (2015) found that believing an instructor was female (in a study of online teaching where male and female names were sometimes assigned according to the actual gender of the teacher and sometimes not) was sufficient for students to rate that person lower than an instructor they believed, correctly or not, to be male.

White male students challenge women of color’s authority, teaching competency, and scholarly expertise, as well as offering subtle and not so subtle threats to their persons and their careers

This title was drawn from the abstract of Pittman (2010)’s article that I unfortunately didn’t have access to, but thought an important enough point to include anyway.

There are many more studies on race, and especially on women of color, in teaching contexts, all of which show that they are facing a really unfair uphill battle.

Students will punish a perceived accent

Rubin and Smith (1990) investigated “effects of accent, ethnicity, and lecture topic on undergraduates’ perceptions of nonnative English-speaking teaching assistants” in North America and found that 40% of undergraduates avoid classes taught by nonnative English-speaking teaching assistants, even though the actual accentedness of the teaching assistants did not influence student learning outcomes. Nevertheless, students judged teaching assistants they perceived as speaking with a strong accent as poorer teachers.

Similarly, Sanchez and Khan (2016) found that “presence of an instructor accent […] does not impact learning, but does cause learners to rate the instructor as less effective”.

Students will rate minorities differently

Ewing et al. (2003) report that lecturers who were identified as gay or lesbian received lower teaching ratings than lecturers with undisclosed sexual orientation when, according to other measures, they were performing very well. Poor teaching performance, however, was rated more positively for openly gay or lesbian lecturers, possibly because students wanted to avoid appearing to discriminate against them.

Students will punish age

Stonebraker and Stone (2015) find that “age does affect teaching effectiveness, at least as perceived by students. Age has a negative impact on student ratings of faculty members that is robust across genders, groups of academic disciplines and types of institutions”. Apparently, when it comes to students, from your mid-40s on you aren’t an effective teacher any more (unless you are still “hot” and “easy”).

Student evaluations are sensitive to students’ gender and grade expectations

Boring et al. (2016) find that “[student evaluations of teaching] are more sensitive to students’ gender bias and grade expectations than they are to teaching effectiveness”.

What can we learn from student evaluations then?

Pay attention to student comments but understand their limitations. Students typically are not well situated to evaluate pedagogy.
Stark and Freishtat (2014)

Does all of the above mean that student evaluations are biased in so many ways that we can’t actually learn anything from them? I do think that there are things that should not be done on the basis of student evaluations (e.g. ranking teacher performance), and I do think that most of the time, student evaluations of teaching should be taken with a pinch of salt. But there are still ways in which the information gathered is useful.

Even though student satisfaction is not the same as teaching effectiveness, it might still be desirable to know how satisfied students are with specific aspects of a course. And open formats, like the “continue, start, stop” method, are especially great for gaining a new perspective on the classes we teach and potentially fresh ideas for how to change things up.

Tracking one’s own evaluations over time is also helpful, since, apart from aging, other changes are hopefully intentional and can thus tell us something about our own development, at least assuming that different student cohorts evaluate teaching performance in a similar way. Getting student feedback at a later date might also be helpful: sometimes students only realize later which teachers they learnt from the most, or which methods were actually helpful rather than just annoying.

A measure that doesn’t come directly from student evaluations of teaching, but that I find very important to track, is student success in later courses. This works especially well when success isn’t measured as a single grade, but when instructors come together and discuss how students are doing on tasks that build on previous courses. Having a well-designed curriculum and a very good idea of what carries over from one class to the next is obviously very important for this.

It is also important to keep in mind that, as Stark and Freishtat (2014) point out, statistical methods are only valid if there are enough responses to actually do statistics on. So don’t take a few horrible comments to heart while ignoring the whole bunch of people who are gushing about how awesome your teaching is!
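
To get a feeling for what “enough responses” means, here is a minimal back-of-the-envelope sketch, assuming ratings behave like independent samples with standard deviation $\sigma$ (real evaluations only approximate this, and Stark and Freishtat would add that averaging ordinal ratings is questionable in the first place):

$$\mathrm{SE} = \frac{\sigma}{\sqrt{n}}$$

With $\sigma \approx 1$ on a five-point scale and $n = 9$ responses, the standard error of the mean rating is about $0.33$, so a 95% interval spans roughly $\pm 0.65$ points. Differences between instructors that are smaller than that are indistinguishable from noise.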

P.S.: If you are an administrator or on an evaluation committee and would like to use student evaluations of teaching, the article by Linse (2017) might be helpful. It gives specific advice on how to use student evaluations both in decision making and when talking to the teachers whose evaluations ended up on your desk.

Literature:

Ambady, N., & Rosenthal, R. (1993). Half a minute: Predicting teacher evaluations from thin slices of nonverbal behavior and physical attractiveness. Journal of Personality and Social Psychology, 64(3), 431–441. https://doi.org/10.1037/0022-3514.64.3.431

Boring, A. (2017). Gender biases in student evaluations of teaching. Journal of Public Economics, 145, 27–41. https://doi.org/10.1016/j.jpubeco.2016.11.006

Boring, A., Ottoboni, K., & Stark, P. B. (2016). Student evaluations of teaching (mostly) do not measure teaching effectiveness. ScienceOpen Research, 1–36. https://doi.org/10.14293/S2199-1006.1.SOR-EDU.AETBZC.v1

Carpenter, S. K., Wilford, M. M., Kornell, N., & Mullaney, K. M. (2013). Appearances can be deceiving: Instructor fluency increases perceptions of learning without increasing actual learning. Psychonomic Bulletin & Review, 20(6), 1350–1356. https://doi.org/10.3758/s13423-013-0442-z

El-Alayli, A., Hansen-Brown, A. A., & Ceynar, M. (2018). Dancing backwards in high heels: Female professors experience more work demands and special favor requests, particularly from academically entitled students. Sex Roles. https://doi.org/10.1007/s11199-017-0872-6

Eva, N. (2018). Annotated literature review: Student evaluations of teaching (SET). https://hdl.handle.net/10133/5089

Ewing, V. L., Stukas, A. A. J., & Sheehan, E. P. (2003). Student prejudice against gay male and lesbian lecturers. Journal of Social Psychology, 143(5), 569–579. http://web.csulb.edu/~djorgens/ewing.pdf

Kornell, N., & Hausman, H. (2016). Do the best teachers get the best ratings? Frontiers in Psychology, 7, 570. https://doi.org/10.3389/fpsyg.2016.00570

Linse, A. R. (2017). Interpreting and using student ratings data: Guidance for faculty serving as administrators and on evaluation committees. Studies in Educational Evaluation, 54, 94–106. https://doi.org/10.1016/j.stueduc.2016.12.004

MacNell, L., Driscoll, A., & Hunt, A. N. (2015). What’s in a name: Exposing gender bias in student ratings of teaching. Innovative Higher Education, 40(4), 291–303. https://doi.org/10.1007/s10755-014-9313-4

Nasser-Abu Alhija, F. (2017). Teaching in higher education: Good teaching through students’ lens. Studies in Educational Evaluation, 54, 4-12. https://doi.org/10.1016/j.stueduc.2016.10.006

Pittman, C. T. (2010). Race and Gender Oppression in the Classroom: The Experiences of Women Faculty of Color with White Male Students. Teaching Sociology, 38(3), 183–196. https://doi.org/10.1177/0092055X10370120

Rubin, D. L., & Smith, K. A. (1990). Effects of accent, ethnicity, and lecture topic on undergraduates’ perceptions of nonnative English-speaking teaching assistants. International Journal of Intercultural Relations, 14, 337–353. https://doi.org/10.1016/0147-1767(90)90019-S

Sanchez, C. A., & Khan, S. (2016). Instructor accents in online education and their effect on learning and attitudes. Journal of Computer Assisted Learning, 32, 494–502. https://doi.org/10.1111/jcal.12149

Stark, P. B., & Freishtat, R. (2014). An Evaluation of Course Evaluations. ScienceOpen, 1–26. https://doi.org/10.14293/S2199-1006.1.SOR-EDU.AOFRQA.v1

Stonebraker, R. J., & Stone, G. S. (2015). Too old to teach? The effect of age on college and university professors. Research in Higher Education, 56(8), 793–812. https://doi.org/10.1007/s11162-015-9374-y

Uttl, B., White, C. A., & Gonzalez, D. W. (2017). Meta-analysis of faculty’s teaching effectiveness: Student evaluation of teaching ratings and student learning are not related. Studies in Educational Evaluation, 54, 22–42. https://doi.org/10.1016/j.stueduc.2016.08.007

Thermal forcing vs rotation tank experiments in more detail than you ever wanted to know

This is the long version of the two full “low latitude, laminar, tropical Hadley circulation” and “baroclinic instability, eddying, extra-tropical circulation” experiments. A much shorter version (that also includes the end cases “no rotation” and “no thermal forcing”) can be found here.

Several of my friends were planning on teaching with DIYnamics rotating tables right now. Unfortunately, that’s currently impossible. Fortunately, though, I have one at home and enjoy playing with it enough that I’m

  1. Playing with it
  2. Making videos of me playing with it
  3. Putting the videos on the internet
  4. Going to do video calls with my friends’ classes, so that the students can at least “remote control” the hands-on experiments they were supposed to be doing themselves.

Here is me introducing the setup:

Today, I want to share a video I filmed on thermal forcing vs rotation. To be clear: This is not a polished, stand-alone teaching video. It’s me rambling while playing. It’s supposed to give students an initial idea of an experiment we’ll be doing together during a video call, and that they’ll be discussing in much more depth in class. It’s also meant to prepare them for more “polished” videos, which are sometimes so polished that it’s hard to actually see what’s going on. If everything looks too perfect it almost looks unreal, know what I mean? Anyway, this is as authentic as it gets, me playing in my kitchen. Welcome! :-)

In the video, I am showing the two full experiments: For small rotations we get a low latitude, laminar, tropical Hadley circulation case. Spinning faster, we get a baroclinic instability, eddying, extra-tropical case. And as you’ll see, I didn’t know which circulation I was going to get beforehand, because I didn’t do the maths before running it. I like surprises, and luckily it worked out well!
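
For anyone who does want to do the maths before running the experiment: which regime you end up in is commonly estimated via the thermal Rossby number (this is the standard scaling argument from differentially heated rotating annulus experiments; the order-one threshold is a rule of thumb, not a property of my specific tank):

$$Ro_T = \frac{g\,\alpha\,\Delta T\,H}{\Omega^2 L^2}$$

Here $g$ is gravity, $\alpha$ the thermal expansion coefficient of water, $\Delta T$ the temperature difference between the cold center and the rim, $H$ the water depth, $\Omega$ the rotation rate, and $L$ the distance from the center to the tank wall. Slow rotation means a large $Ro_T$ and an axisymmetric, Hadley-like overturning; spinning fast enough pushes $Ro_T$ below order one, and the flow breaks up into baroclinic eddies.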

Birthday on the beach

After, at that point, more than two weeks of self-isolation in my flat, with my early morning walks as the only time away from the flat, and runs with my friend J. as the only human company that wasn’t virtual, my parents came to see me the day after my birthday (which was already a while ago; I’ve been so busy posting all the rotating experiments from my kitchen and Teaching Tuesdays!). But it was a lovely day and I want to share the pictures.

Here, for example, I find it so fascinating how the same wave crest is breaking in one spot, then fairly pointy a little further away, then very flat and round a little further still, and then one can’t even make out the wave any more. Just because it’s shallower in some spots than in others!

And I always think that it’s super cool how vegetation takes energy out of a wave field. Look at the mirror-like water surface in the puddles in the foreground!

Same here. This little bay is sheltered by the wave breaker groyne, but what little wave energy propagates around it and into the bay gets damped out by the floating seaweed stuff.

And this picture shows very nicely how the groyne is sheltering the water right behind it both from wind and waves.

And one more of those…

And another study of wave breaking, and the broken turbulent wash running up on the beach.

Maybe it’s just me, but I can’t get enough of these.

:-)

Oh, now we have a bird’s wake in the sheltered water! Also the sky is blue (well, in some spots anyway ;-))

In the picture below, I was really fascinated by how relatively long waves got reflected into groups of three short waves by some weirdly shaped stretch of beach.

The whole beach was full of dried out starfish. They looked so beautiful!

And smelled horribly. But I brought some home for Christmas decorations anyway. And I’m sure they’ll be done stinking eventually. Hopefully before Christmas…

At some point, there were a few drops of rain despite it clearly still being sunny (see reflection below!)

Did I mention I love these roses?

And here are my parents, looking for fossils.

Like this fossilised sea urchin I found :-)

And I was looking at patterns in the sand. Like below, where we see exactly how high the last couple of waves went, and where the few raindrops fell that day.

No raindrops here, but a pretty intricate pattern of “high water” marks from the waves!

And a bird’s footprints.

That was a beautiful day! :-)

Thermal forcing vs rotation

The first experiment we ever ran with our DIYnamics rotating tank was using a cold beer bottle in the center of a rotating tank full of lukewarm water. This experiment is really interesting because, depending on the rotation of the tank, it will display different regimes. For small rotations we get a low latitude, laminar, tropical Hadley circulation case. Spinning faster, we get a baroclinic instability, eddying, extra-tropical case. Both are really interesting, and in the movie below I am showing four experiments, ranging from “thermal forcing, no rotation”, through two experiments that include both thermal forcing and rotation at different rates to show both the “Hadley cell” and “baroclinic instability” cases, to “no thermal forcing, just rotation”. Enjoy!

Foam stripes and sand ripples

So you might have seen my novel on the formation of sand ripples last week, and the tl;dr: I have a vague idea of how sand ripples form, but it’s not as clear to me as I would like.

But imagine my delight when, after two days of foam stripes like this one…

…there was a distinctly different ripple pattern directly underneath the foam stripe!!!

In some places, there was even a tiny little bit of foam left. Where? Right on top of the anomalous stripe in the ripple pattern!!!

So now I still don’t understand what’s going on in the sand, but at least it’s lining up exactly with a phenomenon in the waves that I don’t understand, either! :-D

Ekman layers in my kitchen

Several of my friends were planning on teaching with DIYnamics rotating tables right now. Unfortunately, that’s currently impossible. Fortunately, though, I have one at home and enjoy playing with it enough that I’m

  1. Playing with it
  2. Making videos of me playing with it
  3. Putting the videos on the internet
  4. Going to do video calls with my friends’ classes, so that the students can at least “remote control” the hands-on experiments they were supposed to be doing themselves.

Here is me introducing the setup:

Today, I want to share a video I filmed on Ekman layers. To be clear: This is not a polished, stand-alone teaching video. It’s me rambling while playing. It’s supposed to give students an initial idea of an experiment we’ll be doing together during a video call, and that they’ll be discussing in much more depth in class. It’s also meant to prepare them for more “polished” videos, which are sometimes so polished that it’s hard to actually see what’s going on. If everything looks too perfect it almost looks unreal, know what I mean? Anyway, this is as authentic as it gets, me playing in my kitchen. Welcome! :-)

In the video, I am stopping a tank that was spun up into solid body rotation, to watch a bottom Ekman layer develop. Follow along for the whole journey:
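
If you want to estimate what to look for: a back-of-the-envelope sketch using the classical laminar Ekman scaling (the numbers are assumptions for a kitchen-sized tank, not measurements of my actual setup):

$$\delta_E = \sqrt{\frac{2\nu}{f}}, \qquad f = 2\Omega$$

With the viscosity of water, $\nu \approx 10^{-6}\,\mathrm{m^2/s}$, and $\Omega \approx 1\,\mathrm{rad/s}$ (roughly ten rotations per minute), the bottom Ekman layer is only about a millimeter thick. The associated spin-down time scale, $\tau \approx H/\sqrt{\nu\,\Omega}$, is on the order of 100 seconds for 10 cm of water, which is why the interior flow dies away within minutes of stopping the tank, much faster than plain viscous diffusion could manage.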

Now. What are you curious about? What would you like to try? What would you do differently? Any questions for me? :-)

Rossby-#WaveWatchingWednesday

Several of my friends were planning on teaching with DIYnamics rotating tables right now. Unfortunately, that’s currently impossible. Fortunately, though, I have one at home and enjoy playing with it enough that I’m

  1. Playing with it
  2. Making videos of me playing with it
  3. Putting the videos on the internet
  4. Going to do video calls with my friends’ classes, so that the students can at least “remote control” the hands-on experiments they were supposed to be doing themselves.

Here is me introducing the setup:

Today, I want to share a video I filmed on planetary Rossby waves. To be clear: This is not a polished, stand-alone teaching video. It’s me rambling while playing. It’s supposed to give students an initial idea of an experiment we’ll be doing together during a video call, and that they’ll be discussing in much more depth in class. It’s also meant to prepare them for more “polished” videos, which are sometimes so polished that it’s hard to actually see what’s going on. If everything looks too perfect it almost looks artificial, know what I mean? Anyway, this is as authentic as it gets, me playing in my kitchen. Welcome! :-)

In the video, I am using an ice cube, melting on a sloping bottom in a rotating tank, to create planetary Rossby waves. Follow along with the whole process:
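
To connect the tank to the “planetary” part: the sloping bottom mimics the change of the Coriolis parameter with latitude. Water columns that move up or down the slope get squashed or stretched, and since potential vorticity is conserved, their relative vorticity has to change (standard shallow-water reasoning; the slope enters as an effective beta):

$$\frac{f+\zeta}{H} = \mathrm{const.}, \qquad |\beta_{\mathrm{topo}}| = \frac{f}{H}\left|\frac{\partial H}{\partial y}\right| = \frac{2\Omega\,s}{H}$$

with $f = 2\Omega$, water depth $H$, relative vorticity $\zeta$, and bottom slope $s$. The shallow end of the tank plays the role of the pole, and the melting ice cube provides the disturbance that then propagates as a Rossby wave.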

Also check out the video below that shows both a top- and side view of a planetary Rossby wave, both filmed with co-rotating cameras.

Previous blog posts with more movies can be found, for example, here.

Now. What are you curious about? What would you like to try? What would you do differently? Any questions for me? :-)

#TeachingTuesday: Some things I read about making good lecture videos

Just imagine you had written an article on “Student Satisfaction and Learning Outcomes in Asynchronous Online Lecture Videos”, like Choe et al. (2019) did. What excellent timing to inform teaching decisions all around the world!

Choe et al. compare eight different video styles (all of which can be watched as supplementary material to the article, which is really helpful!): six that replace “normal lectures” and two that complement them. They investigate the influence of video style both on how much students learn and on how students feel while watching.

The “normal lecture” videos were different combinations of the lecturer and information on slides/blackboards/tablets/…: a “classic classroom” where the lecturer is filmed in front of a blackboard and a screen, a “weatherman” style in front of a green screen on which the lecture slides are later imposed, a “learning glass” where the lecturer is seen writing on a transparent board, a “pen tablet” where the lecturer can draw on the slides, a “talking head” where the lecturer is superimposed on the slides in a little window, and “slides on/off” where the video switches between showing slides or the lecturer.

And the good news: Turns out that the style you choose for your recorded video lecture doesn’t really affect student learning outcomes very much. Choe et al. did, however, deduce strengths and weaknesses of each of the lecture formats, and from that came up with a list of best practices for student engagement, which I find very helpful. Therein, they give tips for the different stages of video production, related to the roles involved (lecturer and director of the video) and to the content covered in the videos, and these are really down-to-earth, practical tips like “cooler temperatures improve speaker comfort”. And of course all the things like “not too much text on slides” and “readable font” are mentioned, too; always a good reminder!

One thing they point out that wasn’t so clear to me before is that it’s important that the lecturer is visible and maintains eye contact with the camera. Of course that adds a layer of difficulty to recording lectures (and a lot of awkward feelings and extra work in terms of what to wear and actually having to shower and stuff), but in the big scheme of things, if it creates a better user experience, maybe it’s not such a big sacrifice. Going forward, I’ll definitely keep that in mind!

Especially making the distinction between the roles of “lecturer” and “director” was a really helpful way for me to think about making videos, even though I am playing both roles myself. It reminds me of how many considerations (should) go into a video besides “just” giving the lecture! If you look at the picture above, you’ll see that I’ve started sketching out what I want to be able to show in a future video, and what that means for how many cameras I need, where to place them, and how to orient them (portrait or landscape). When I made the (German) instructions for kitchen oceanography, I filmed myself in portrait mode, thinking of posting the clips to my Instagram stories, but then ended up editing a landscape video, for which I needed to fill all the awkward space around the portrait movie. It would have been helpful to think about it in these terms beforehand!

Choe et al. even include a “best practice” video in their supplementary material, which I find super helpful. Because even though in some cases it might be feasible to professionally produce lectures in a studio, that’s not what I (or most people frantically producing video lectures these days) have access to. So seeing something that is professionally produced, but that doesn’t seem to require incredibly complicated technology or fancy editing, is reassuring. In fact, even though the lecturer appears to have been filmed in front of a green screen, I think in the end it’s not too dissimilar to what I did in the (German) instructions for kitchen oceanography mentioned above: a lecturer on one side, the slides (in a portrait format) on the other.

In addition to the six “lecture” videos, there was a “demo” video where the lecturer showed a simple demonstration, and an “interview” video, where the lecturer was answering questions that were shown on a screen (so no second person there). Those obviously can’t replace a traditional lecture, but can be very useful for specific learning outcomes!

The “demo” type video is the one I am currently most interested in, since that’s where I can best contribute my expertise in a niche where other people appreciate getting some input. Also, according to Choe et al., students found that type of video engaging, entertaining, and of high learning value. All the more reason for me to do a couple more demo videos over the next couple of days; I’m already on it!

References:

Choe, R. C., Scuric, Z., Eshkol, E., Cruser, S., Arndt, A., Cox, R., Toma, S. P., Shapiro, C., Levis-Fitzgerald, M., Barnes, G., & Crosbie, R. H. (2019). Student satisfaction and learning outcomes in asynchronous online lecture videos. CBE—Life Sciences Education, 18(4). https://doi.org/10.1187/cbe.18-08-0171

Such a pretty #friendlywaves!

My long-time Twitter friend Anne shared these beautiful pictures and I absolutely had to do a #friendlywaves post where I explain other people’s wave pictures.

Take a moment to admire the beautiful picture below. Wouldn’t you love to be there? I certainly would!

What can we learn from this picture? First — it’s a windy day! Not stormy, but definitely not calm, either. See how the water outside of the surf zone is dark blue and looks a little choppy? That’s the local wind doing that.

And then there are the waves that we see breaking in the foreground. Without knowing where the picture was taken, I would think that they traveled in from a large water body with a long fetch, so they could build up over quite some distance. And then they meet the coast!

You see breaking waves of two kinds: the ones marked with red ovals below, where there is hardly any buildup of the wave before it meets a rock and breaks into white, foamy turbulence. The other type of breaking waves, the ones where I marked the crests with green lines, build up over a short distance before they break, because there is a more gradual decrease in water depth. The slope is still quite steep, though, so the waves change from deep water waves (which can’t feel the sea floor and have a fairly low amplitude, which is why we can’t distinguish wave crests further offshore than the two I marked in green) to shallow water waves that feel the sea floor and build up until they break.
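
The transition these waves go through can be read off the linear dispersion relation for surface gravity waves (a textbook result, not something extracted from this particular picture):

$$\omega^2 = g\,k\,\tanh(kh)$$

In deep water ($kh \gg 1$) this reduces to $\omega^2 \approx gk$: the wave doesn’t feel the bottom at all. In shallow water ($kh \ll 1$) the phase speed becomes $c = \sqrt{gh}$, set by the depth alone, so as the depth decreases the wave slows down, its crests bunch up and steepen, and eventually it breaks.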

In contrast, let’s look at the lovely picture below.

Here, we have a sandy beach on which the waves can run out. The slope right at the water’s edge is not very steep, but seeing that we can only really spot two wave crests, there has to be a change in gradient somewhere. About where the offshore wave crest is in the picture below, or possibly a little further offshore, the water depth must suddenly increase; otherwise there would be more wave crests visible further offshore. Since there aren’t any, the water must be a lot deeper there.

But what I found really cool about the picture above are the trains of standing waves in the little stream that flows into the sea here. I find it so fascinating to see standing waves break in the upstream direction — so completely unintuitive, isn’t it? So much so that I dug out some pics from January for you and posted them last Friday in preparation for today’s post. Sometimes I actually plan my posts, believe it or not!

Standing waves don’t move in space because the current they are sitting on flows exactly as fast as they are moving, only in the opposite direction. What is happening in the picture is that those standing waves sit on ripples in the sand. The waves become so steep that they are constantly falling back down onto the current, getting carried up over the ripples again, in an endless loop. So fascinating!
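
For completeness, the standing-wave condition in one line (textbook shallow-water theory; the depth is a made-up number for illustration):

$$U = c = \sqrt{gh} \quad\Longleftrightarrow\quad Fr = \frac{U}{\sqrt{gh}} = 1$$

A wave whose phase speed $c$ exactly matches the opposing current speed $U$ stays put in space. For a stream only 5 cm deep, $c \approx 0.7\,\mathrm{m/s}$, so a quite modest current is enough to hold the waves in place.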