Prompt engineering and other stuff I never thought I would have to teach about

Last week, I taught a very intensive “Introduction to Teaching and Learning” course where we, like teachers everywhere, had to address that GAI has made many of the traditional formats of assessment hard to justify. We had to come up both with guidelines for our participants on how to deal with GAI in the assessment of our course, and with some kind of guidance for them in their own roles as teachers.

With respect to our own participants, my co-teacher and I have the luxury of being able to negotiate over several sessions how they can use GAI, co-creating the policy that will ultimately be binding in this course (my suggestion was along the lines of “Use whatever tools can help you achieve the best learning, knowing that the full responsibility for your learning and for the artifacts you submit rests with you. As always, you must disclose, and be able to fully explain and defend, your process and decisions”). But one thing we didn’t anticipate was that many of our participants reported not having a lot of experience using GAI. We therefore felt that we needed to give them a crash course first, so they can play around a bit, gain some experience, and then we can have an informed discussion. This is how we ended up speaking about prompt engineering and the CLEAR framework (Lo, 2023). I am super grateful that a colleague I have not even met yet shared that article just in time on the “GAI tools and education at LU” Teams (thanks for setting that up, Rachel! So much helpful stuff coming through there!).

Prompt engineering is about how to “talk” to GAI in a way that results in the most useful responses, because, as in all communication, the quality of the question influences the quality of the answer. Are you sometimes surprised when you see people google stuff and you just know they are not going to get a useful answer out of their prompts? Yes, me too, and this is very similar.

CLEAR stands for

  • Concise: Prompts should be short and to the point. When I see videos of people using ChatGPT, I see a lot of polite phrases like “please tell me…”, “could you provide me with…”, etc., that do not actually contribute anything meaningful to the prompt. No need for niceties or fluff; the more to the point, the better!
  • Logical: Prompts should be structured (“give me this first, and then that”) and thus already provide the storyline and logic that we are hoping GAI will flesh out for us.
  • Explicit: Provide the GAI with the exact output specifications you want: “five examples of x”, “cause and effects of y”, “three sentences on z”, “explained for a five-year old”, “using easy language”, …
  • Adaptive: Play with prompts to optimize the output. Maybe you need to be even more concise, provide more specifications on the structure you want, or ask the same question in smaller chunks. Develop an intuition for what works.
  • Reflective: Be critical of the responses you get from GAI and, based on your own assessment of each response, keep adapting future prompts.
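
To make the “Concise”, “Logical”, and “Explicit” parts concrete for anyone who reaches GAI through code rather than the chat window, here is a minimal sketch. It assumes the OpenAI Python client and an API key in the environment; the model name and the prompt wording are my own illustrations, not taken from Lo (2023). It simply contrasts a vague prompt with a more CLEAR-style one:

```python
# Minimal sketch (assumptions: OpenAI Python client installed, OPENAI_API_KEY set;
# model name and prompt texts are illustrative, not taken from Lo, 2023).
from openai import OpenAI

client = OpenAI()  # picks up the API key from the environment

# Vague prompt: polite filler, no structure, no output specification.
vague_prompt = "Hi! Could you please tell me something about constructive alignment?"

# CLEAR-style prompt: concise, logically ordered, explicit about the output.
clear_prompt = (
    "Explain constructive alignment for university teachers. "
    "First give a two-sentence definition, "
    "then three concrete examples from engineering education, "
    "then one common pitfall. Use plain language."
)

for prompt in (vague_prompt, clear_prompt):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content)
    print("---")
```

The “Adaptive” and “Reflective” parts then happen between you and the output: you read the response critically and adjust the next prompt, whether you are typing into the chat window or into a script like this.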

I don’t really feel equipped to talk about information literacy in general, and we actually had a brilliant librarian colleague coming in to talk about this, among other things. But we felt we could not wait for her visit and had to do something ourselves the day before, so CLEAR is what we did. And it definitely seems to be a useful framework that captures important aspects.

Putting together the slides to present the framework, it quickly became clear that this could not be the only thing I was “throwing on the wall”. If this framework deserved an explicit presentation, then so did the very clear warning that, at this moment, we cannot ask students to use ChatGPT because it does not conform with GDPR, and if we don’t ask and only some have access, this introduces biases. (I don’t know how Uni Oslo solved the GDPR issue, but they pay for students to get ChatGPT 4, so at least everybody has access to the same tools and it doesn’t depend on your willingness or ability to pay 23.80€/month. That seems like a good solution for now, and in the long (short) run those tools will become available everywhere anyway, I am pretty sure…) And of course I also pointed participants to the “GAI tools and education at LU” Teams and generally to the great support available at the Unit for Educational Service, including an introductory, open Canvas course on GAI.

Then I introduced the framework, and participants worked with ChatGPT to find more information on keywords they remembered and found meaningful from the previous course day. They mostly found the CLEAR framework useful and stressed that, in their experience, it helped to be to the point, short, structured, and so on. But I got some pushback on the “skip the niceties” advice: someone pointed out that since the LLM is trained on human ways of interacting, it reacts better if it is also addressed the way humans would address each other. Someone else said that, in the context of a seminar at their department on how to use GAI in research, they found that frequent iterations sometimes mess up initially good answers rather than improving them (which has also been my experience). So maybe next time we teach, we need to either find a better framework or modify this one?

Of course, ChatGPT makes it obvious that we need to reconsider traditional assessment methods (as we should have done already). A study by Nikolic et al. (2023) tested what ChatGPT 3.5 can do when confronted with engineering education assessments, and it makes very clear that GAI can already handle many assessment types perfectly well. For example, most quizzes can be passed, and including figures or tables that require “translating” into language is only a short-term fix: it is only a matter of time before they become accessible to the GAI directly. Relying heavily on oral presentations, as suggested by Nikolic et al. (2023), does not only carry the risk they mention (memorized answers if the topic is stated too clearly in advance); oral examinations are also notoriously prone to biases of all sorts, and making them (even more) unpredictable adds stress that will lead to increased test anxiety. Focussing on in-person exams might also seem like a good idea, but then it is questionable how authentic the assessment can be under those conditions, which could partly be compensated for by focussing more on practical skills. But in general, just because GAI can’t do something today doesn’t guarantee that it won’t be able to do it tomorrow. So we need to find a way to use it as a tool in learning and thus also in authentic assessment.

P.S.: Featured image chosen because of the “if we collectively ignore it, will it go away?” sentiment that I encounter a lot when it comes to GAI, and that I can definitely relate to…


Lo, L. S. (2023). The CLEAR path: A framework for enhancing information literacy through prompt engineering. The Journal of Academic Librarianship, 49(4), 102720.

Nikolic, S., Daniel, S., Haque, R., Belkina, M., Hassan, G. M., Grundy, S., … & Sandison, C. (2023). ChatGPT versus engineering education assessment: a multidisciplinary and multi-institutional benchmarking and analysis of this generative artificial intelligence tool to investigate assessment integrity. European Journal of Engineering Education, 1-56.
