Preprint, working paper. Year: 2023

Do conversational agents have a theory of mind? A single case study of ChatGPT with the Hinting, False Beliefs and False Photographs, and Strange Stories paradigms

Abstract

In this short report we consider the possible manifestation of theory-of-mind skills by OpenAI's recently released conversational agent, ChatGPT. To tap into these skills, we used an indirect speech understanding task (the hinting task), a new text-based version of the False Belief/False Photographs paradigm, and the Strange Stories paradigm. The hinting task is usually used to assess individuals with autism or schizophrenia by asking them to infer hidden intentions from short conversations between two characters. Our results show that the artificial model performs quite poorly on the hinting task, whether the original scoring or the revised SCOPE rating scales are used. To better understand this limitation, we introduced slightly modified versions of the hinting task in which either cues about the presence of a communicative intention were added or a specific question about the character's intentions was asked. Only the latter yielded improved performance. In addition, using the False Belief/False Photographs paradigm to assess belief-attribution skills, we found that ChatGPT keeps track of successive physical states of the world and may refer to a character's erroneous expectations about the world. No dissociation between the conditions was found. Performance on the Strange Stories was correct, but we could not rule out that the algorithm had prior knowledge of them. These findings suggest that ChatGPT can answer questions about a character's intentions or beliefs when the question focuses on these mental states, but does not spontaneously make such references on a regular basis. This may guide AI designers to improve inference models by privileging mental-state concepts, in order to help chatbots hold more natural conversations. This work illustrates the possible application of psychological constructs and paradigms to a cognitive entity of a radically new nature, and prompts a reflection on experimental methods: future evaluation tools should be designed to allow the comparison of human performances and strategies with those of the machine.
Files

ChatGPT and ToM_12fev23_Zenodo.pdf (394.42 KB)
Supplementary material.xlsx (86.21 KB)
Origin: files produced by the author(s)

Dates and versions

hal-03991530 , version 1 (15-02-2023)
hal-03991530 , version 2 (21-06-2023)


Cite

Eric Brunet-Gouet, Nathan Vidal, Paul Roux. Do conversational agents have a theory of mind? A single case study of ChatGPT with the Hinting, False Beliefs and False Photographs, and Strange Stories paradigms. 2023. ⟨hal-03991530v1⟩