Irregular wave ripples, and some left on the beach when the water is gone

I’m getting more and more fascinated with wave ripples. I kinda understand how they form, but not enough to be able to explain as much about them as I would like to.

For example, the picture below: Why is the pattern so different where sand has been washed on top of the shallow stones? Yes, the water depth is different there, which will have an influence on the wave field, which will, in turn, have an influence on sediment transport. But HOW?
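At least the wave part of that chain I can sketch: for linear surface gravity waves, the wave frequency, the wavelength and the local water depth are tied together by the dispersion relation (standard linear wave theory, nothing specific to this beach):

```latex
% Dispersion relation for linear surface gravity waves:
\omega^2 = g \, k \tanh(k h), \qquad k = \frac{2\pi}{\lambda}
% In the shallow-water limit (kh << 1) this reduces to
% c = \omega / k \approx \sqrt{g h}.
```

The wave period is set offshore and stays fixed, so where the water gets shallower, phase speed and wavelength both decrease and the crests refract toward the depth contours; that changes the orbital velocities right at the bed, which is what moves the sand. The step from there to this particular ripple pattern is exactly the part I can't explain.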

Here is another example. The wave ripples look choppy everywhere (and kinda cool!), but on that shallow, flat surface of the stone the wavelength is completely different, as is the orientation. And this is still all submerged; you can kinda estimate the water depth from the tips of the kelp stuff just reaching the surface.

Same day, slightly different area. Isn’t it cool to see how in the upper left there are no wave ripples but those streaks of larger pebbles?

And look at this. Utter chaos in the middle, slightly more orderly ripples to the sides! Why???

Or here. Steep wave ripple crests, long and shallow troughs in which larger stuff has been deposited (or is it the finer grains that have been transported away from the troughs into the crests, and the coarser stuff got washed free and just stayed?). Estimates of water depth with the help of kelp tips just breaking the water surface again…

Different day, more orderly wave ripples. But wavelength changes with distance from the sea wall. And weird things happen on the shallow stones…

On a low water day, parts of the sea floor got exposed. Now. I know for sure there were ripples all the way to the seawall. But at some point, the water retreated. When did they get smoothed out? The problem is that I can only really observe the seafloor when the water is calm, yet ripple marks form when there are waves. What happens during the transitional period? Or here, when the water level sinks?

Another interesting picture with some ripple marks that are still there, and then these little, smooth spots that recently fell dry (within maybe 15 minutes or so — I took pictures that same morning when there was still water on top and you could only see that there was a bump under the surface by “reading the wave field”. And then when I came back, the water level had fallen and this piece of mud had been exposed). Did the ripples there get smoothed out when there was still water on top, or at what point did it happen?

Or here where we have these interesting rip-current like structures right at the bottom of the sea wall:

Here is another thing I find fascinating: Ripples towards the sea wall, and then these streaks of larger stuff probably aligned with the main direction of the waves (I think the larger stuff is less dense than the sand, though, maybe pieces of broken shells?). What has to happen in order to transition between these two regimes?

Also note how there is no sand on the large flat stones today!

And same spot, different day: More ripples gone, and even less sand on the large flat stones!

So how do I figure out what is really going on here? I guess I would need to capture both the wave field and the sea floor over time. Web cams above & below water level, plus measure water depth? Any suggestions?
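If I ever set this up, the camera side could be as simple as a cheap USB webcam and a few lines of Python; the water depth would come from a separate pressure logger. This is just a hypothetical sketch (camera index, interval, and file naming are all made up, not a tested setup):

```python
# Hypothetical time-lapse logger for a webcam watching the sea floor.
# CAMERA_INDEX and INTERVAL_S are made-up assumptions.
import time
from datetime import datetime, timezone

import cv2  # pip install opencv-python

CAMERA_INDEX = 0   # whichever camera the operating system exposes
INTERVAL_S = 60    # one frame per minute

cap = cv2.VideoCapture(CAMERA_INDEX)
try:
    while True:
        ok, frame = cap.read()
        if ok:
            # UTC timestamp in the file name makes frames easy to match
            # with water level records later.
            stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
            cv2.imwrite(f"seafloor_{stamp}.jpg", frame)
        time.sleep(INTERVAL_S)
finally:
    cap.release()
```

A second camera above the water, triggered by the same script, would then give matching snapshots of the wave field and the sea floor over time.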

Behind the scenes of the #KitchenOceanography article published in SCIENCE NOTES Magazin

Recently I had the privilege to work with photographer David Carrenon Hansen on pictures for an article that was published in SCIENCE NOTES Magazin today. This issue of the monothematic, German science communication magazine is on “the sea”. And obviously, if you can’t be at, on, or in the sea, you have to recreate it in your own kitchen — enter #KitchenOceanography!

It was quite an experience to see how a professional photographer interprets the experiments I routinely just snap pictures of with my phone, so here are some impressions! (All pictures are mine. Which is quite obvious when you compare them with the professional pictures that were published in SCIENCE NOTES Magazin, but just so you are aware…)

The first time the photographer David and I met up, we just did — I don’t actually know what. Not what I was expecting needed to happen, anyway. For example, we looked at dummies (plastic ice cubes and some freezer frost to stand in for freshwater and salt water ice) on a plexiglass pane lit from below. Which looks quite fun! But this is not how I usually do kitchen oceanography!

Speaking of cultures clashing, this is the next thing that happened: test runs on the “eddy in a jar” experiment. I would never have stirred the jar with a power drill! But it definitely looks interesting with the large vortex.

Next time we met up, things were a bit more like what I had expected. Even though I had NO IDEA how long it takes to fiddle with the lights and camera settings and what have you if you want to have artistically pleasing images rather than ones that just show the physics. Here, for example, is my overturning experiment.

This picture always makes me want to say “Schwester, Tupfer!” (German for “Nurse, swab!”), since it reminds me so much of what (I imagine) a surgery might look like with the green backdrop and the lights…

It’s funny to see my little overturning tank set into scene like this. Not what it is used to! (It’s the same one I’ve been using for decades, everywhere from primary schools to university teaching, but never this carefully lit!).

And I have to admit, it brings out features of the flow quite nicely!

And I love the reflection at the surface below!

This is what it looks like when David takes pictures.

And here is the setting.

Also interesting: At this point I would have aborted the experiment long ago, because for my taste the colors were way too mixed to clearly distinguish the flow pattern that I want to highlight. But clearly that’s really when it starts being of artistic interest!

Still on the same setup…

New setup, showing pretty things — but not what I want to show. Here the dye wasn’t dripped on the cooling pad as I would have done it, but rather squirted diagonally into the tank.

But here is the one thing that always makes me happy: Salt fingers!

Curious about the actual pictures David took of the experiments? Then check out SCIENCE NOTES Magazin! :-) And curious about the experiments themselves? Here are my instructions (in German).

#TeachingTuesday: Student feedback and how to interpret it in order to improve teaching

Student feedback has become a fixture in higher education. But even though it is important to hear student voices when evaluating teaching and thinking of ways to improve it, students aren’t perfect judges of what type of teaching leads to the most learning, so their feedback should not be taken on board without critical reflection. In fact, there are many studies that investigate specific biases that show up in student evaluations of teaching. So in order to use student feedback to improve teaching (both on the individual level, when we consider changing aspects of our classes based on student feedback, and at the institutional level, when evaluating teachers for personnel decisions), we need to be aware of the biases that student evaluations of teaching come with.

While student satisfaction may contribute to teaching effectiveness, it is not itself teaching effectiveness. Students may be satisfied or dissatisfied with courses for reasons unrelated to learning outcomes – and not in the instructor’s control (e.g., the instructor’s gender).
Boring et al. (2016)

What student evaluations of teaching tell us

In the following, I am not presenting a coherent theory (and if you know of one, please point me to it!); these are snippets of the current literature on student evaluations of teaching, many of which I found referenced in this annotated literature review on student evaluations of teaching by Eva (2018). The aim of my blogpost is not to provide a comprehensive literature review, but rather to point out that there is a huge body of literature out there that teachers and higher-ed administrators should know exists, and that they can draw upon when in doubt (and ideally even when not in doubt ;-)).

Six-second videos are enough to predict teacher evaluations

This is quite scary, so I thought it made sense to start out with this study. Ambady and Rosenthal (1993) found that silent videos shorter than 30 seconds, in some cases as short as 6 seconds, significantly predicted global end-of-semester student evaluations of teachers. These are videos that do not even include a soundtrack. Let this sink in…

Student responses to questions of “effectiveness” do not measure teaching effectiveness

And let’s get this out of the way right away: When students are asked to judge teaching effectiveness, their answers do not measure actual teaching effectiveness.

Stark and Freishtat (2014) give “an evaluation of course evaluations”. They conclude that student evaluations of teaching, though providing valuable information about students’ experiences, do not measure teaching effectiveness. Instead, ratings can even be negatively associated with direct measures of teaching effectiveness, and are influenced by the gender, ethnicity and attractiveness of the instructor.

Uttl et al. (2017) conducted a meta-analysis of faculty’s teaching effectiveness and found that “student evaluation of teaching ratings and student learning are not related”. They state that “institutions focused on student learning and career success may want to abandon [student evaluation of teaching] ratings as a measure of faculty’s teaching effectiveness”.

Students have their own ideas of what constitutes good teaching

Nasser-Abu Alhija (2017) showed that out of five dimensions of teaching (goals to be achieved, long-term student development, teaching methods and characteristics, relationships with students, and assessment), students viewed the assessment dimension as most important and the long-term student development dimension as least important. To students, the grades that instructors assigned and the methods they used to do this were the main aspects in judging good teaching and good instructors. Which is fair enough — after all, good grades help students in the short term — but that’s also not what we usually think of when we think of “good teaching”.

Students learn less from teachers they rate highly

Kornell and Hausman (2016) review recent studies and report that when learning is measured at the end of the respective course, the “best” teachers, i.e. the ones where the students felt that they had learned the most, got the highest ratings (which is congruent with Nasser-Abu Alhija (2017)’s findings of what students value in teaching). But when learning was measured in later courses, i.e. when meaningful deep learning was considered, other teachers seem to have been more effective. Introducing desirable difficulties is thus good for learning, but bad for student ratings.

Appearances can be deceiving

Carpenter et al. (2013) compared a fluent video (instructor standing upright, maintaining eye contact, speaking fluidly without notes) and a disfluent video (instructor slumping, looking away, speaking haltingly with notes). They found that even though the amount of learning that took place when students watched either of the videos wasn’t influenced by the lecturer’s fluency or lack thereof, the disfluent lecturer was rated lower than the fluent lecturer.

The authors note that “Although fluency did not significantly affect test performance in the present study, it is possible that fluent presentations usually accompany high-quality content. Furthermore, disfluent presentations might indirectly impair learning by encouraging mind wandering, reduced class attendance, and a decrease in the perceived importance of the topic.”

Students expect more support from their female professors

When students rate teachers’ effectiveness, they do so based on their assumptions of how effective a teacher should be, and it turns out that they have different expectations depending on the gender of their teachers. El-Alayi et al. (2018) found that “female professors experience more work demands and special favour requests, particularly from academically entitled students”. This was true both when male and female faculty reported on their experiences and when students were asked what their expectations of fictional male and female teachers were.

Student teaching evaluations punish female teachers

Boring (2017) found that even when learning outcomes were the same for students in courses taught by male and female teachers, female teachers received worse ratings than male teachers. This got even worse when teachers didn’t act in accordance with the stereotypes associated with their gender.

MacNell et al. (2015) found that believing that an instructor was female (in a study of online teaching where male and female names were sometimes assigned according to the actual gender of the teacher and sometimes not) was sufficient for students to rate that person lower than an instructor they believed (correctly or not) to be male.

White male students challenge women of color’s authority, teaching competency, and scholarly expertise, as well as offering subtle and not so subtle threats to their persons and their careers

This title was drawn from the abstract of Pittman (2010)’s article that I unfortunately didn’t have access to, but thought an important enough point to include anyway.

There are many more studies on race in teaching contexts, and especially on women of color, which all show that they are facing a really unfair uphill battle.

Students will punish a perceived accent

Rubin and Smith (1990) investigated “effects of accent, ethnicity, and lecture topic on undergraduates’ perceptions of nonnative English-speaking teaching assistants” in North America and found that 40% of undergraduates avoid classes taught by nonnative English-speaking teaching assistants, even though the teaching assistants’ actual accentedness did not influence student learning outcomes. Nevertheless, students judged teaching assistants they perceived as speaking with a strong accent to be poorer teachers.

Similarly, Sanchez and Khan (2016) found that “presence of an instructor accent […] does not impact learning, but does cause learners to rate the instructor as less effective”.

Students will rate minorities differently

Ewing et al. (2003) report that lecturers who were identified as gay or lesbian received lower teaching ratings than other lecturers with undisclosed sexual orientation when they were, according to other measures, performing very well. Poor teaching performance was, however, rated more positively, possibly to avoid discriminating against openly gay or lesbian lecturers.

Students will punish age

Stonebraker and Stone (2015) find that “age does affect teaching effectiveness, at least as perceived by students. Age has a negative impact on student ratings of faculty members that is robust across genders, groups of academic disciplines and types of institutions”. Apparently, when it comes to students, from your mid-40s on you aren’t an effective teacher any more (unless you are still “hot” and “easy”).

Student evaluations are sensitive to students’ gender and grade expectations

Boring et al. (2016) find that “[student evaluations of teaching] are more sensitive to students’ gender bias and grade expectations than they are to teaching effectiveness”.

What can we learn from student evaluations then?

Pay attention to student comments but understand their limitations. Students typically are not well situated to evaluate pedagogy.
Stark and Freishtat (2014)

Does all of the above mean that student evaluations are biased in so many ways that we can’t actually learn anything from them? I do think that there are things that should not be done on the basis of student evaluations (e.g. rank teacher performance), and I do think that most times, student evaluations of teaching should be taken with a pinch of salt. But there are still ways in which the information gathered is useful.

Even though student satisfaction is not the same as teaching effectiveness, it might still be desirable to know how satisfied students are with specific aspects of a course. And especially open formats, like the “continue, start, stop” method, are great for gaining a new perspective on the classes we teach and potentially gaining fresh ideas of how to change things up.

Also, tracking one’s own evaluations over time is helpful, since — apart from aging — changes are hopefully intentional and can thus tell us something about our own development, at least assuming that different student cohorts evaluate teaching performance in a similar way. And getting student feedback at a later date might be helpful, too; sometimes students only realize later which teachers they learnt from the most, or which methods were actually helpful rather than just annoying.

A measure that doesn’t come directly from student evaluations of teaching but that I find very important to track is student success in later courses. Especially when that isn’t measured in a single grade, but when instructors come together and discuss how students are doing in tasks that build on previous courses. Having a well-designed curriculum and a very good idea of what ideas translate from one class to the next is obviously very important.

It is also important to keep in mind that, as Stark and Freishtat (2014) point out, statistical methods are only valid if there are enough responses to actually do statistics on them. So don’t take a few horrible comments to heart while ignoring the whole bunch of people who are gushing about how awesome your teaching is!
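To make the “enough responses” point concrete, here is a toy calculation (with made-up ratings, purely for illustration) of how uncertain the mean of a handful of responses really is:

```python
# Toy example: uncertainty of the mean rating for a small sample.
# The ratings below are invented for illustration only.
import statistics

ratings = [5, 4, 1, 5, 4]                  # five responses on a 1-5 scale
n = len(ratings)
mean = statistics.mean(ratings)
sem = statistics.stdev(ratings) / n**0.5   # standard error of the mean

print(f"mean rating = {mean:.2f} +/- {sem:.2f} (n = {n})")
# Prints: mean rating = 3.80 +/- 0.73 (n = 5)
# The plausible range spans "mediocre" to "excellent", so one grumpy
# outlier in a tiny sample tells you very little.
```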

P.S.: If you are an administrator or on an evaluation committee and would like to use student evaluations of teaching, the article by Linse (2017) might be helpful. It gives specific advice on how to use student evaluations both in decision making and when talking to the teachers whose evaluations ended up on your desk.

Literature:

Ambady, N., & Rosenthal, R. (1993). Half a minute: Predicting teacher evaluations from thin slices of nonverbal behavior and physical attractiveness. Journal of Personality and Social Psychology, 64(3), 431–441. https://doi.org/10.1037/0022-3514.64.3.431

Boring, A. (2017). Gender biases in student evaluations of teachers. Journal of Public Economics, 145(13), 27–41. https://doi.org/10.1016/j.jpubeco.2016.11.006

Boring, A., Ottoboni, K., & Stark, P. B. (2016). Student evaluations of teaching (mostly) do not measure teaching effectiveness. ScienceOpen Research, 1–36. https://doi.org/10.14293/S2199-1006.1.SOR-EDU.AETBZC.v1

Carpenter, S. K., Wilford, M. M., Kornell, N., & Mullaney, K. M. (2013). Appearances can be deceiving: Instructor fluency increases perceptions of learning without increasing actual learning. Psychonomic Bulletin & Review, 20(6), 1350–1356. https://doi.org/10.3758/s13423-013-0442-z

El-Alayi, A., Hansen-Brown, A. A., & Ceynar, M. (2018). Dancing backward in high heels: Female professors experience more work demands and special favour requests, particularly from academically entitled students. Sex Roles. https://doi.org/10.1007/s11199-017-0872-6

Eva, N. (2018). Annotated literature review: Student evaluations of teaching (SET). https://hdl.handle.net/10133/5089

Ewing, V. L., Stukas, A. A. J., & Sheehan, E. P. (2003). Student prejudice against gay male and lesbian lecturers. Journal of Social Psychology, 143(5), 569–579. http://web.csulb.edu/~djorgens/ewing.pdf

Kornell, N. & Hausman, H. (2016). Do the Best Teachers Get the Best Ratings? Front. Psychol. 7:570. https://doi.org/10.3389/fpsyg.2016.00570

Linse, A. R. (2017). Interpreting and using student ratings data: Guidance for faculty serving as administrators and on evaluation committees. Studies in Educational Evaluation, 54, 94- 106. https://doi.org/10.1016/j.stueduc.2016.12.004

MacNell, L., Driscoll, A., & Hunt, A. N. (2015). What’s in a name: Exposing gender bias in student ratings of teaching. Innovative Higher Education, 40(4), 291– 303. https://doi.org/10.1007/s10755-014-9313-4

Nasser-Abu Alhija, F. (2017). Teaching in higher education: Good teaching through students’ lens. Studies in Educational Evaluation, 54, 4-12. https://doi.org/10.1016/j.stueduc.2016.10.006

Pittman, C. T. (2010). Race and Gender Oppression in the Classroom: The Experiences of Women Faculty of Color with White Male Students. Teaching Sociology, 38(3), 183–196. https://doi.org/10.1177/0092055X10370120

Rubin, D. L., & Smith, K. A. (1990). Effects of accent, ethnicity, and lecture topic on undergraduates’ perceptions of nonnative English-speaking teaching assistants. International Journal of Intercultural Relations, 14, 337–353. https://doi.org/10.1016/0147-1767(90)90019-S

Sanchez, C. A., & Khan, S. (2016). Instructor accents in online education and their effect on learning and attitudes. Journal of Computer Assisted Learning, 32, 494–502. https://doi.org/10.1111/jcal.12149

Stark, P. B., & Freishtat, R. (2014). An Evaluation of Course Evaluations. ScienceOpen, 1–26. https://doi.org/10.14293/S2199-1006.1.SOR-EDU.AOFRQA.v1

Stonebraker, R. J., & Stone, G. S. (2015). Too old to teach? The effect of age on college and university professors. Research in Higher Education, 56(8), 793–812. https://doi.org/10.1007/s11162-015-9374-y

Uttl, B., White, C. A., & Gonzalez, D. W. (2017). Meta-analysis of faculty’s teaching effectiveness: Student evaluation of teaching ratings and student learning are not related. Studies in Educational Evaluation, 54, 22-42. http://dx.doi.org/10.1016/j.stueduc.2016.08.007

Thermal forcing vs rotation tank experiments in more detail than you ever wanted to know

This is the long version of the two full “low latitude, laminar, tropical Hadley circulation” and “baroclinic instability, eddying, extra-tropical circulation” experiments. A much shorter version (that also includes the end cases “no rotation” and “no thermal forcing”) can be found here.

Several of my friends were planning on teaching with DIYnamics rotating tables right now. Unfortunately, that’s currently impossible. Fortunately, though, I have one at home and enjoy playing with it enough that I’m

  1. Playing with it
  2. Making videos of me playing with it
  3. Putting the videos on the internet
  4. Going to do video calls with my friends’ classes, so that the students can at least “remote control” the hands-on experiments they were supposed to be doing themselves.

Here is me introducing the setup:

Today, I want to share a video I filmed on thermal forcing vs rotation. To be clear: This is not a polished, stand-alone teaching video. It’s me rambling while playing. It’s supposed to give students an initial idea of an experiment we’ll be doing together during a video call, and that they’ll be discussing in much more depth in class. It’s also meant to prepare them for more “polished” videos, which are sometimes so polished that it’s hard to actually see what’s going on. If everything looks too perfect it almost looks unreal, know what I mean? Anyway, this is as authentic as it gets, me playing in my kitchen. Welcome! :-)

In the video, I am showing the two full experiments: For small rotations we get a low latitude, laminar, tropical Hadley circulation case. Spinning faster, we get a baroclinic instability, eddying, extra-tropical case. And as you’ll see, I didn’t know which circulation I was going to get beforehand, because I didn’t do the maths before running it. I like surprises, and luckily it worked out well!
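For anyone who does want to do the maths before running it: the regime is commonly characterized by the thermal Rossby number, which compares the strength of the thermal forcing to the rotation. Here is a minimal back-of-the-envelope sketch; all the numbers are made-up example values, not measurements from my tank:

```python
# Back-of-the-envelope regime check for the heated/cooled rotating tank.
# All values are example assumptions; adjust them to your own setup.
import math

g = 9.81       # gravity [m/s^2]
alpha = 2e-4   # thermal expansion coefficient of water [1/K]
dT = 10.0      # temperature difference, cold center vs tank water [K]
H = 0.1        # water depth [m]
L = 0.15       # radial distance from bottle to tank wall [m]
rpm = 3.0      # rotation rate of the table [revolutions per minute]

omega = 2 * math.pi * rpm / 60                  # rotation rate [rad/s]
Ro_T = g * alpha * dT * H / (omega**2 * L**2)   # thermal Rossby number

print(f"Omega = {omega:.2f} rad/s, thermal Rossby number = {Ro_T:.2f}")
# Roughly: Ro_T >> 1 -> laminar, axisymmetric, Hadley-like overturning;
#          Ro_T << 1 -> baroclinic instability, eddying flow.
print("Expect:", "Hadley-like overturning" if Ro_T > 1 else "baroclinic eddies")
```

With these example numbers the result lands near the crossover, which matches my experience that small changes in rotation rate flip the tank from one regime to the other.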

Birthday on the beach

After, at that point, more than two weeks of self-isolation in my flat, with my early morning walks as the only time away from the flat, and runs with my friend J. as the only human company that wasn’t virtual, my parents came to see me the day after my birthday (which was already a while ago, I’ve been so busy posting all the rotating experiments from my kitchen and Teaching Tuesdays!). But it was a lovely day and I want to share the pictures.

Here, for example, I find it so fascinating how the same wave crest is breaking in one spot, then fairly pointy a little further away, then very flat and round a little further still, and then one can’t even make out the wave any more. Just because it’s shallower in some spots than in others!

And I always think that it’s super cool how vegetation takes energy out of a wave field. Look at the mirror-like water surface in the puddles in the foreground!

Same here. This little bay is sheltered by the wave-breaker groyne, but what little wave energy propagates around it and into the bay gets damped out by the floating seaweed stuff.

And this picture shows very nicely how the groyne is sheltering the water right behind it both from wind and waves.

And one more of those…

And another study of wave breaking, and the broken turbulent wash running up on the beach.

Maybe it’s just me, but I can’t get enough of these.

:-)

Oh, now we have a bird’s wake in the sheltered water! Also the sky is blue (well, in some spots anyway ;-))

In the picture below, I was really fascinated by how relatively long waves got reflected into groups of three short waves by some weirdly shaped stretch of the beach.

The whole beach was full of dried out starfish. They looked so beautiful!

And smelled horribly. But I brought some home for Christmas decorations anyway. And I’m sure they’ll be done stinking eventually. Hopefully before Christmas…

At some point, there were a few drops of rain despite it clearly still being sunny (see reflection below!)

Did I mention I love these roses?

And here are my parents, looking for fossils.

Like this fossilised sea urchin I found :-)

And I was looking at patterns in the sand. Like below, where we see exactly how high the last couple of waves went, and where the few raindrops fell that day.

No raindrops here, but a pretty intricate pattern of the waves’ “high water” marks!

And a bird’s footprints.

That was a beautiful day! :-)

Thermal forcing vs rotation

The first experiment we ever ran with our DIYnamics rotating tank was using a cold beer bottle in the center of a rotating tank full of lukewarm water. This experiment is really interesting because, depending on the rotation of the tank, it will display different regimes. For small rotations we get a low latitude, laminar, tropical Hadley circulation case. Spinning faster, we get a baroclinic instability, eddying, extra-tropical case. Both are really interesting, and in the movie below I am showing four experiments, ranging from “thermal forcing, no rotation”, through two experiments that include both thermal forcing and rotation at different rates (to show both the “Hadley cell” and the “baroclinic instability” case), to “no thermal forcing, just rotation”. Enjoy!

Foam stripes and sand ripples

So you might have seen my novel on the formation of sand ripples last week, and the tl;dr: I have a vague idea of how sand ripples form, but it’s not as clear to me as I would like.

But imagine my delight when, after two days of foam stripes like this one…

…there was a distinctly different ripple pattern directly underneath the foam stripe!!!

In some places, there was even a tiny little bit of foam left. Where? Right on top of the anomalous stripe in the ripple pattern!!!

So now I still don’t understand what’s going on in the sand, but at least it’s lining up exactly with a phenomenon in the waves that I don’t understand, either! :-D

Ekman layers in my kitchen

Several of my friends were planning on teaching with DIYnamics rotating tables right now. Unfortunately, that’s currently impossible. Fortunately, though, I have one at home and enjoy playing with it enough that I’m

  1. Playing with it
  2. Making videos of me playing with it
  3. Putting the videos on the internet
  4. Going to do video calls with my friends’ classes, so that the students can at least “remote control” the hands-on experiments they were supposed to be doing themselves.

Here is me introducing the setup:

Today, I want to share a video I filmed on Ekman layers. To be clear: This is not a polished, stand-alone teaching video. It’s me rambling while playing. It’s supposed to give students an initial idea of an experiment we’ll be doing together during a video call, and that they’ll be discussing in much more depth in class. It’s also meant to prepare them for more “polished” videos, which are sometimes so polished that it’s hard to actually see what’s going on. If everything looks too perfect it almost looks unreal, know what I mean? Anyway, this is as authentic as it gets, me playing in my kitchen. Welcome! :-)

In the video, I am stopping a tank that was spun up into solid body rotation, to watch a bottom Ekman layer develop. Follow along for the whole journey:

Now. What are you curious about? What would you like to try? What would you do differently? Any questions for me? :-)
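P.S.: If you want to put rough numbers on what you’re seeing before trying it yourself, here is a quick sketch of the expected Ekman layer thickness and the spin-down time. The values are example assumptions, not the actual settings from the video:

```python
# Scale estimates for a bottom Ekman layer in a rotating tank.
# All values are example assumptions, not measured from the video.
import math

nu = 1e-6    # kinematic viscosity of water [m^2/s]
rpm = 10.0   # rotation rate of the table [revolutions per minute]
H = 0.1      # water depth [m]

omega = 2 * math.pi * rpm / 60   # rotation rate [rad/s]
f = 2 * omega                    # the tank's "Coriolis parameter" [1/s]

delta = math.sqrt(2 * nu / f)     # Ekman layer thickness [m]
tau = H / math.sqrt(nu * omega)   # Ekman spin-down timescale [s]

print(f"Ekman layer thickness: {delta * 1000:.1f} mm")
print(f"Spin-down timescale:   {tau / 60:.1f} minutes")
# With these numbers: about 1 mm and about 1.6 minutes. That is far
# faster than pure viscous diffusion (H^2/nu would be several hours),
# which is exactly what makes the Ekman layer so effective at
# spinning down the interior flow.
```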

Rossby-#WaveWatchingWednesday

Several of my friends were planning on teaching with DIYnamics rotating tables right now. Unfortunately, that’s currently impossible. Fortunately, though, I have one at home and enjoy playing with it enough that I’m

  1. Playing with it
  2. Making videos of me playing with it
  3. Putting the videos on the internet
  4. Going to do video calls with my friends’ classes, so that the students can at least “remote control” the hands-on experiments they were supposed to be doing themselves.

Here is me introducing the setup:

Today, I want to share a video I filmed on planetary Rossby waves. To be clear: This is not a polished, stand-alone teaching video. It’s me rambling while playing. It’s supposed to give students an initial idea of an experiment we’ll be doing together during a video call, and that they’ll be discussing in much more depth in class. It’s also meant to prepare them for more “polished” videos, which are sometimes so polished that it’s hard to actually see what’s going on. If everything looks too perfect it almost looks artificial, know what I mean? Anyway, this is as authentic as it gets, me playing in my kitchen. Welcome! :-)

In the video, I am using an ice cube, melting on a sloping bottom in a rotating tank, to create planetary Rossby waves. Follow along with the whole process:

Also check out the video below that shows both a top- and side view of a planetary Rossby wave, both filmed with co-rotating cameras.

Previous blog posts with more movies can be found, for example, here.
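By the way, the sloping bottom is what plays the role of the change of the Coriolis parameter with latitude. To put a rough number on that analogy, here is a small sketch; slope, depth, and rotation rate are example assumptions, not measured from my setup:

```python
# Topographic beta for a rotating tank with a sloping bottom.
# All values are example assumptions, not measurements from the video.
import math

rpm = 10.0   # table rotation rate [revolutions per minute]
H = 0.1      # mean water depth [m]
s = 0.2      # bottom slope (rise over run)

omega = 2 * math.pi * rpm / 60   # rotation rate [rad/s]
f = 2 * omega                    # the tank's "Coriolis parameter" [1/s]
beta_t = f * s / H               # topographic beta [1/(m s)]

print(f"f = {f:.2f} 1/s, topographic beta = {beta_t:.1f} 1/(m s)")
# Earth's planetary beta is of order 2e-11 1/(m s), so the sloping
# bottom provides a hugely amplified analogue, which is why the
# waves are visible at all in a kitchen-sized tank.
```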

Now. What are you curious about? What would you like to try? What would you do differently? Any questions for me? :-)