How politely should we talk to GAI in order to get the best results? Currently reading Yin et al. (2024)

Thanks to my awesome colleague Rachel and her Teams team, I see so many interesting articles on GAI these days! For example, the one by Yin et al. (2024) on how politely we should talk to GAI in order to get the best results.

The idea in that article is that, due to the way they are trained, GAIs reproduce human communication patterns, so one aspect to consider in prompt engineering is how a human would likely react to the tone of our prompt. If we ask politely, people typically react in a helpful way; if we are rude or overly polite, they might not.

And indeed, Yin et al. (2024) find that polite prompts lead to the best results, and that the ruder a prompt, the more likely a biased or otherwise incorrect answer, or even a refusal to answer, becomes. At the same time, highly respectful prompts don’t always perform better than moderately polite ones. And in comparing English, Chinese, and Japanese, they find that the best level of politeness depends on the respective culture and is not uniform across languages.
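To make the idea concrete, here is a minimal sketch of what varying only the politeness of a prompt might look like. The question and the four politeness levels are my own hypothetical illustration, not the actual prompts or levels used by Yin et al. (2024):

```python
# Hypothetical illustration: the same request phrased at different
# politeness levels, holding the task itself constant.
QUESTION = "summarize the main argument of the attached article"

# The level names and phrasings below are invented for illustration.
politeness_variants = {
    "rude": f"Just {QUESTION}, and don't waste my time.",
    "neutral": f"{QUESTION.capitalize()}.",
    "polite": f"Could you please {QUESTION}?",
    "highly respectful": (
        f"I would be most grateful if you could kindly {QUESTION}. "
        "Thank you very much!"
    ),
}

# Print each variant so the tone differences are easy to compare.
for level, prompt in politeness_variants.items():
    print(f"[{level}] {prompt}")
```

In a study like the one described, each variant would then be sent to the model and the answers compared for accuracy, bias, and refusals.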

How obvious on the one hand but fascinating on the other, too!


Yin, Z., Wang, H., Horio, K., Kawahara, D., & Sekine, S. (2024). Should We Respect LLMs? A Cross-Lingual Study on the Influence of Prompt Politeness on LLM Performance. arXiv preprint arXiv:2402.14531.
