Dominantly Truthful Multi-task Peer Prediction with a Constant Number of Tasks

Cited: 0
Authors
Kong, Yuqing [1]
Institutions
[1] Peking Univ, Beijing, Peoples R China
Keywords
DOI
None available
CLC classification number
TP18 [Artificial intelligence theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In the setting where participants are asked multiple similar, possibly subjective, multiple-choice questions (e.g., Do you like Panda Express? Y/N; Do you like Chick-fil-A? Y/N), a series of peer prediction mechanisms have been designed to incentivize honest reports, and some of them achieve dominant truthfulness: truth-telling is a dominant strategy and strictly dominates every other "non-permutation strategy" under some mild conditions. However, a major issue hinders the practical use of those mechanisms: they require the participants to perform an infinite number of tasks. When the participants perform only a finite number of tasks, these mechanisms achieve only approximate dominant truthfulness. Whether there exists a dominantly truthful multi-task peer prediction mechanism that requires only a finite number of tasks remained an open question, possibly with a negative answer, even with full prior knowledge. This paper answers this open question by proposing a new mechanism, the Determinant based Mutual Information Mechanism (DMI-Mechanism), which is dominantly truthful when the number of tasks is >= 2C, where C is the number of choices for each question (C = 2 for binary-choice questions). DMI-Mechanism also pays truth-telling weakly higher than any other strategy profile and strictly higher than any uninformative strategy profile (informed truthfulness). In addition to the truthfulness properties, DMI-Mechanism is easy to implement: it does not require any prior knowledge (detail-free) and requires only >= 2 participants. The core of DMI-Mechanism is a novel information measure, Determinant based Mutual Information (DMI). DMI generalizes Shannon's mutual information, and the square of DMI has a simple unbiased estimator. In addition to incentivizing honest reports, DMI-Mechanism can also be converted into an information evaluation rule that identifies high-quality information without verification when there are >= 3 participants.
To the best of our knowledge, DMI-Mechanism is both the first detail-free informed-truthful mechanism and the first dominantly truthful mechanism that works for a finite number of tasks, in fact for a small constant number of tasks.
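The determinant structure behind DMI makes the payment easy to sketch: for a pair of agents, split the shared tasks into two disjoint parts, form the C x C joint answer-count matrix on each part, and pay the product of the two determinants, which (up to normalization) gives an unbiased estimator of the square of DMI. The sketch below is an illustration based on the abstract's description; the function name, the even task split, and the omitted normalization constant are assumptions, not the paper's exact construction.

```python
import numpy as np

def dmi_payment(reports_i, reports_j, num_choices):
    """Illustrative DMI-style payment for one pair of agents.

    reports_i, reports_j: answers in {0, ..., C-1} on the same tasks,
    with at least 2*C tasks in total. The tasks are split into two
    disjoint halves (an assumed split); the payment is the product of
    the determinants of the two joint answer-count matrices.
    """
    num_tasks = len(reports_i)
    assert num_tasks >= 2 * num_choices and len(reports_j) == num_tasks
    half = num_tasks // 2

    def count_matrix(task_indices):
        # M[c, c'] counts tasks where agent i answered c and agent j answered c'.
        M = np.zeros((num_choices, num_choices))
        for t in task_indices:
            M[reports_i[t], reports_j[t]] += 1
        return M

    M1 = count_matrix(range(half))
    M2 = count_matrix(range(half, num_tasks))
    return float(np.linalg.det(M1) * np.linalg.det(M2))
```

With perfectly correlated reports the count matrices are diagonal and the payment is positive, while a constant (uninformative) report collapses a column of each count matrix, so both determinants, and hence the payment, are zero. This zero-payment-for-constant-reports behavior is exactly what makes uninformative strategies unprofitable.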
Pages: 2398-2411
Number of pages: 14
Related papers
(50 entries in total)
  • [1] Dominantly Truthful Multi-task Peer Prediction with a Constant Number of Tasks
    Kong, Yuqing
    [J]. PROCEEDINGS OF THE THIRTY-FIRST ANNUAL ACM-SIAM SYMPOSIUM ON DISCRETE ALGORITHMS (SODA'20), 2020, : 2398 - 2411
  • [2] Dominantly Truthful Peer Prediction Mechanisms with a Finite Number of Tasks
    Kong, Yuqing
    [J]. JOURNAL OF THE ACM, 2024, 71 (02)
  • [3] Informed Truthfulness in Multi-Task Peer Prediction
    Shnayder, Victor
    Agarwal, Arpit
    Frongillo, Rafael
    Parkes, David C.
    [J]. EC'16: PROCEEDINGS OF THE 2016 ACM CONFERENCE ON ECONOMICS AND COMPUTATION, 2016, : 179 - 196
  • [4] Multi-Task Learning for Dense Prediction Tasks: A Survey
    Vandenhende, Simon
    Georgoulis, Stamatios
    Van Gansbeke, Wouter
    Proesmans, Marc
    Dai, Dengxin
    Van Gool, Luc
    [J]. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2022, 44 (07) : 3614 - 3633
  • [5] Information Diffusion Enhanced by Multi-Task Peer Prediction
    Ito, Kensuke
    Ohsawa, Shohei
    Tanaka, Hideyuki
    [J]. IIWAS2018: THE 20TH INTERNATIONAL CONFERENCE ON INFORMATION INTEGRATION AND WEB-BASED APPLICATIONS & SERVICES, 2018, : 94 - 102
  • [6] Efficient and Scalable Multi-Task Regression on Massive Number of Tasks
    He, Xiao
    Alesiani, Francesco
    Shaker, Ammar
    [J]. THIRTY-THIRD AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FIRST INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE / NINTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2019, : 3763 - 3770
  • [7] Multi-task Learning with Labeled and Unlabeled Tasks
    Pentina, Anastasia
    Lampert, Christoph H.
    [J]. INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 70, 2017, 70
  • [8] Multi-task classification with sequential instances and tasks
    Xu, Wei
    Liu, Wei
    Chi, Haoyuan
    Huang, Xiaolin
    Yang, Jie
    [J]. SIGNAL PROCESSING-IMAGE COMMUNICATION, 2018, 64 : 59 - 67
  • [9] Task Aware Multi-task Learning for Speech to Text Tasks
    Indurthi, Sathish
    Zaidi, Mohd Abbas
    Lakumarapu, Nikhil Kumar
    Lee, Beomseok
    Han, Hyojung
    Ahn, Seokchan
    Kim, Sangha
    Kim, Chanwoo
    Hwang, Inchul
    [J]. 2021 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP 2021), 2021, : 7723 - 7727
  • [10] A Tale of Two Tasks: Automated Issue Priority Prediction with Deep Multi-task Learning
    Li, Yingling
    Che, Xing
    Huang, Yuekai
    Wang, Junjie
    Wang, Song
    Wang, Yawen
    Wang, Qing
    [J]. PROCEEDINGS OF THE 16TH ACM/IEEE INTERNATIONAL SYMPOSIUM ON EMPIRICAL SOFTWARE ENGINEERING AND MEASUREMENT, ESEM 2022, 2022, : 1 - 11