On preferences and reward policies over rankings

Citations: 0
Authors
Faella, Marco [1 ]
Sauro, Luigi [1 ]
Affiliations
[1] Univ Naples Federico II, Dept Elect Engn & Informat Technol, Via Claudio 21, I-80125 Naples, Italy
Keywords
Reasoning over rankings; Self-interest theories; Tie-breaking rules; Tournaments
DOI
10.1007/s10458-024-09656-7
Chinese Library Classification
TP [Automation Technology; Computer Technology]
Discipline Code
0812
Abstract
We study the rational preferences of agents participating in a mechanism whose outcome is a ranking (i.e., a weak order) among the participants. We propose a set of self-interest axioms corresponding to different ways for participants to compare rankings. These axioms range from minimal conditions that most participants can be expected to agree on, to more demanding requirements that apply only to specific scenarios. We then analyze the theories that can be obtained by combining these axioms and characterize their mutual relationships, revealing a rich hierarchical structure. After this broad investigation of preferences over rankings, we consider the case where the mechanism can distribute a fixed monetary reward to the participants in a fair way (that is, depending only on the anonymized output ranking). We show that such mechanisms can induce specific classes of preferences by suitably choosing the assigned rewards, even in the absence of tie-breaking.
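The fairness condition in the abstract (rewards depend only on the anonymized output ranking) can be illustrated with a small sketch. This is not the paper's formal construction; it assumes one common convention for handling ties, namely that agents tied in the same block split the total reward of the positions their block occupies equally, so the payout is invariant under renaming of agents.

```python
def distribute_rewards(ranking, rewards):
    """Distribute a fixed per-position reward vector over a weak order.

    ranking: a weak order given as a list of tiers (sets of agents), best first.
    rewards: per-position rewards; its length must equal the number of agents.
    Tied agents share the total reward of their tier's positions equally, so
    the result depends only on the anonymized shape of the ranking.
    """
    assert sum(len(tier) for tier in ranking) == len(rewards)
    payout = {}
    pos = 0
    for tier in ranking:
        block_total = sum(rewards[pos:pos + len(tier)])
        share = block_total / len(tier)  # tied agents split the block's total
        for agent in tier:
            payout[agent] = share
        pos += len(tier)
    return payout

# Example: b and c tie for second place and split positions 2-3's rewards.
p = distribute_rewards([{"a"}, {"b", "c"}, {"d"}], [10, 6, 4, 0])
assert p == {"a": 10.0, "b": 5.0, "c": 5.0, "d": 0.0}
```

Under this convention a participant's payout never depends on who the other agents are, only on which tier they land in, which is one natural reading of the "fair" reward policies the abstract refers to.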
Pages: 36