Mirjam Sophia Glessmer

Currently reading Schilke & Reimann (2025) on “The transparency dilemma: How AI disclosure erodes trust”.

When we teach, we have all these policies about how students need to let us know how they used GenAI, and of course the same holds for research publications. Research also finds that students expect a two-way transparency (Luo 2024), meaning they would like us to disclose how we use GenAI. In general, the traditional understanding is that openness, also about flaws or mistakes, helps build trust. But Schilke & Reimann (2025) find that, analogous to the declaration of conflicts of interest, “transparency can backfire when one discloses AI usage. […] Ironically, people who try to be trustworthy—by transparently disclosing AI usage—are trusted less”.

The mechanism that might explain this is about what gives and destroys legitimacy. Legitimacy is the perception that someone is conforming to all the rules. Interestingly, trust around legitimacy seems to be asymmetric — conforming to the rules only gains you a tiny amount of trust, whereas breaking them loses you a lot! Legitimacy might be undermined when it turns out that people are outsourcing part of their tasks to AI, despite the (potentially unwritten) assumptions or rules in a workplace that tasks are to be done by people themselves. When people in such an environment now — in order “to signal honesty and instil confidence” — disclose that they have used GenAI, they “may instead sow seeds of doubts. Openly disclosing practices to provide reassurance often draws heightened attention to them and raises questions about their appropriateness. Such disclosure, particularly when intended to pre-emptively dispel fears or doubts, can induce reactance, making evaluators more sceptical and resistant to the disclosed information”. So “while AI disclosure may aim to preempt misgivings, it can paradoxically invite greater scrutiny and scepticism about the legitimacy of the disclosing party’s practices”.

And whether or not we trust AI to do a job, we trust people using AI to do the same job even less than just the AI: “Whereas algorithm aversion refers to a general scepticism toward algorithms due to possible errors, human disclosure of the involvement of AI introduces a legitimacy discount arising from role ambiguity, leading the human actor to be trusted even less than an autonomous AI agent performing the same task”. And this is consistent across contexts: Schilke & Reimann (2025) is an enormous study with 13 different experiments in a wide range of settings.

One experiment is on students, who turned out to trust their teacher less when the teacher disclosed that they had used AI for grading a test than when the teacher said they had used a TA or didn’t say anything at all. Also in marketing, they find that “trust in the content creator goes down and trust in the brand itself erodes”. In general, Schilke & Reimann (2025) find that “individuals condemn others for using AI—even when doing so themselves”. So “while downplaying AI as a mere tool might seem tempting, our results indicate that trust-related concerns persist when people learn that AI has played a role, regardless of how it is labeled”.

People who have positive attitudes towards AI or perceive it to be accurate react less strongly. But the effect does not seem to decrease for people who use, or are very familiar with, AI themselves. And generally, the worst possible outcome is when AI usage is exposed (rather than disclosed or not disclosed); that loses someone the most trust. So using AI might turn out more costly than previously thought: “we anticipate that the social and psychological costs stemming from negative reactions to AI-sourced ideation will be exceptionally high and need to be accounted for in appraising the overall value of AI usage for such purposes. We also wonder to what extent AI can truly ignite human creativity rather than constrain it through its focus on repackaging existing content, thus potentially commoditizing people’s ideas rather than fostering genuinely novel breakthroughs”.

Schilke & Reimann (2025) recommend either making disclosure voluntary (to protect people from exposure — not a valid model in teaching) OR mandatory (but then making sure that people know it will be enforced; they suggest AI detectors for this, but those are … questionable). If you want people to be able to use AI, they suggest creating an environment where AI usage is collectively valid, thus minimising the trust erosion that comes with disclosure.

But then the question is — should we want such an environment, and is AI usage really valid? In their article, they write that they shift from operational outcomes to social outcomes, but I guess I am shifting the question back…

And in another article that caught my eye, Shekar et al. (2025) write about “People Overtrust AI-Generated Medical Advice despite Low Accuracy”: “Our study reveals that participants rate AI-generated responses, particularly high-accuracy ones, as equal to or better than doctors’ responses across all metrics, while maintaining higher trust in responses attributed to doctors. However, responses labeled as doctor assisted by AI showed no significant improvement in evaluations, complicating the ideal solution of combining AI’s comprehensive responses with physician trust”.


Schilke, O., & Reimann, M. (2025). The transparency dilemma: How AI disclosure erodes trust. Organizational Behavior and Human Decision Processes, 188, 104405.

Shekar, S., Pataranutaporn, P., Sarabu, C., Cecchi, G. A., & Maes, P. (2025). People Overtrust AI-Generated Medical Advice despite Low Accuracy. NEJM AI, AIoa2300015.


Please enjoy some pictures from a recent dip!
