Ultimatum bargaining: Algorithms vs. Humans

Cited by: 0
Authors
Ozkes, Ali I. [1 ,2 ]
Hanaki, Nobuyuki [3 ,4 ]
Vanderelst, Dieter [5 ]
Willems, Jurgen [6 ]
Affiliations
[1] Univ Cote Azur, SKEMA Business Sch, GREDEG, Valbonne, France
[2] Univ PSL, Univ Paris Dauphine, CNRS, LAMSADE, Paris, France
[3] Osaka Univ, Inst Social & Econ Res, Suita, Japan
[4] Univ Limassol, Nicosia, Cyprus
[5] Univ Cincinnati, Dept Elect Engn & Comp Syst, Cincinnati, OH USA
[6] WU Vienna Univ Econ & Business, Inst Publ Management & Governance, Vienna, Austria
Keywords
Ultimatum bargaining; Human-AI interaction; Social preferences; Fairness; Equity; Explainability
DOI
10.1016/j.econlet.2024.111979
Chinese Library Classification
F [Economics];
Discipline Classification Code
02;
Abstract
We study human behavior in the ultimatum game when subjects interact with either human or algorithmic opponents. We examine how the type of AI algorithm (mimicking human behavior, optimizing gains, or providing no explanation of its behavior) and the presence of a human beneficiary affect sending and accepting behaviors. Our experimental data reveal that subjects generally do not differentiate between human and algorithmic opponents, between different algorithms, or between explained and unexplained algorithms. However, they are more willing to forgo higher payoffs when the algorithm's earnings benefit a human.
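For readers unfamiliar with the setup, the following is a minimal Python sketch of a single ultimatum-game round contrasting the two stylized responder algorithms the abstract mentions (gain-optimizing vs. human-mimicking). The endowment size, the 30% fairness threshold, and all function names are illustrative assumptions, not the paper's experimental implementation.

```python
# One ultimatum-game round with two stylized responder algorithms.
# All parameters here are assumptions chosen for illustration only.

ENDOWMENT = 10.0  # assumed size of the pie to be split


def gain_optimizing_responder(offer: float) -> bool:
    """Accepts any positive offer: rejection yields zero, so accepting maximizes payoff."""
    return offer > 0


def human_mimicking_responder(offer: float) -> bool:
    """Rejects offers below a fairness threshold, loosely mimicking the typical
    human tendency to reject low ultimatum offers (threshold is assumed)."""
    return offer >= 0.3 * ENDOWMENT


def play_round(offer: float, responder) -> tuple[float, float]:
    """Returns (proposer payoff, responder payoff) for a single round."""
    if responder(offer):
        return ENDOWMENT - offer, offer
    return 0.0, 0.0


if __name__ == "__main__":
    for offer in (1.0, 3.0, 5.0):
        print(offer,
              play_round(offer, gain_optimizing_responder),
              play_round(offer, human_mimicking_responder))
```

Running the sketch shows the behavioral contrast: the gain-optimizing responder accepts even an offer of 1.0, while the human-mimicking responder rejects offers below 3.0, leaving both players with nothing.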
Pages: 4