Very preliminary thoughts on AI tools and teaching

This morning, I attended Rachel Forsyth’s presentation on “Handling the impact of AI tools: LU working group recommendations”, and here are some notes.

Let’s start with some disclaimers:

  • This blog post is not an introduction to how to deal with AI in teaching; these are just my notes to myself, filtered through my prior knowledge, interests, and general context. If you are looking for a good overview to get started, check out for example https://www.education.lu.se/en/teaching-tips/generative-ai-tools-education
  • Everything I write below are my thoughts loosely inspired by what Rachel said, so attribute the good points to her and everything else to my misunderstanding.

That said: Even though we might not want to, we will have to get used to generative AI being around. It will not go away, and it will very likely become more powerful and better integrated into all kinds of tools that are part of our routines (like Word etc.). And I actually see that as more of an opportunity than a threat. I have used ChatGPT to write Swedish Instagram captions for our diving club’s Insta (totally not relevant in this context, but check it out!), and I think it’s fun to play around with. But I still usually ask a native speaker to check the Swedish captions (-> 1), and I usually add something like “this caption was written by ChatGPT” (-> 2). And these are two important “but”s!

1. Professional judgment

A lot of skill is needed to determine how useful AI outputs actually are. In the case of my Swedish Instagram captions, I mainly use AI to translate, and I do that not to save time, but because it tends to give better translations than I could produce myself. But I don’t trust my Swedish skills enough to feel confident editing, or posting without editing. So I ask someone to proofread.

For AI outputs in languages that I master and topics I know a lot about, I can have a quick read and know right away whether I am happy to post, or what exactly I would want to change. But that’s because I have years of training and have built a professional judgment that has become so ingrained that I don’t even think about it any more. That is something our students haven’t had the chance to build yet, and where we need to support them.

Interestingly enough, many of the tasks that help students reflect on the process, their learning, and the quality of results might also work in the context of assessment that is adapted to the potential of AI-supported work. On her cool blog “Assessment in Higher Education”, Rachel gives examples like “Here is the prompt and the output for a protocol which was produced by XX tool. In your professional opinion, is it safe, efficient and suitable for the client? Explain your reasoning and make suggestions for improved prompts.” or “You can use ChatGPT to generate a response, and you must say that you have done this, and add a critique of the response. What was right? What was wrong? What was missing?”, and I think this is a good direction forward, at least for now. The second example is very close to what I do in my courses: use AI if you like, but then add a section on how you used it, e.g. for what purpose (generating ideas, structure, …), what prompts you asked, what edits you made, and how you made sure that the quality was good enough.

This leads me towards the second point:

2. Citing ChatGPT

How do we then, in a good way, acknowledge that we used AI tools? Some examples of how to cite ChatGPT are given here.

Of course, citing ChatGPT is better than not citing it, but it is not always good enough. At Lund University, if examiners say “no AI”, using it anyway constitutes cheating, because you are misleading the examiner. One thing that I really appreciate about LU’s policy document, though, is that when writing it, they had a feedback loop with student representatives to make sure that it is written in a way that is actually understandable and not just legal speak!

As tempting as it might seem to use AI detectors, the fact is that they really don’t work. They produce a lot of false positives for non-first-language writers, and miss many texts that actually were written by AI. A really interesting post explaining the background: “Why AI detectors think the US constitution was written by AI”.

That article also brings up the cost of false accusations, which I think is really important to consider. If you falsely accused me of cheating, that would definitely lower the threshold for me to actually cheat next time round!

But there are also other costs associated with AI, not just false accusations of having used it, and many of them are about accessibility. For example, will subscription costs or similar prevent some students from using the same quality of AI as others? What about the data we share, not just when we sign up, but with every prompt we submit? And one that needs to be talked about much more: training an AI, but also using it, has a huge environmental impact. We need to be very considerate in how we use it. I get so upset when I see lots of “fun” AI pictures on social media, posted by people who probably aren’t even aware that there is a footprint associated with their playing…

So many things to think about!

P.S.: Did you wonder about the point of the image above? When I think of AI-generated pictures, I somehow see that image of “salmon in a river” in front of me, where you see a river with rapids and salmon filets jumping in it. But I wanted a featured image that was a photo I had taken, not an AI-generated one, since I just ranted about how people waste so many resources on AI-generated images for no reason. So I opened my phone and wanted to scroll through the pictures in my camera roll to find something that I liked. But some algorithm on my very smart phone showed me a collection called “summer” with this picture as cover photo before I could even reach my camera roll. Since seeing this picture of our trip to DeepSpot makes me happy, I decided that with all this backstory, it is actually a great cover image for a post about how I really have no idea yet how we should use AI in teaching, but that in general, AI is pretty cool and we should see and use the opportunities.