4 INRAE - Institut National de Recherche pour l’Agriculture, l’Alimentation et l’Environnement (France)
Abstract
In this work, we explore the complex social norms underlying human decision-making in social scenarios and propose a machine learning model to replicate and understand these decisions. Focusing on the distribution of rewards, efforts, and risks between individuals, we conducted experiments involving 188 human participants in an online decision-making game. We then developed an XGBoost-based model to predict their decisions accurately. To assess the model's alignment with social norms, we conducted a Turing test, which showed that our model was perceived as making morally acceptable decisions, similar to those of human participants. Furthermore, we embodied the model in a robot negotiator to observe how participants perceived and accepted decisions made by a robotic agent that automatically distributed token reward, effort, and risk among participant dyads by perceiving their physical characteristics. Our findings contribute towards the development of a moral robot capable of decision-making that considers social norms.
Domains
Engineering Sciences [physics]

Origin: Files produced by the author(s)
Contributor: Ganesh Gowrishankar
https://hal.science/hal-04695550
Submitted on: Thursday, 12 September 2024, 13:50:47
Last modified on: Friday, 13 December 2024, 03:32:53
Dates and versions
- HAL Id : hal-04695550 , version 1
Cite
Sandra Victor, Bruno Yun, Chefou MamadouToura, Enzo Indino, Pierre Bisquert, et al.. Towards a moral robot: Reward, effort, and risk distribution to humans following social norms. 2024. ⟨hal-04695550⟩