Efficient Task-Specific Data Valuation for Nearest Neighbor Algorithms

Cited by: 81
Authors:
Jia, Ruoxi [1 ]
Dao, David [2 ]
Wang, Boxin [3 ]
Hubis, Frances Ann [2 ]
Gurel, Nezihe Merve [2 ]
Li, Bo [4 ]
Zhang, Ce [2 ]
Spanos, Costas J. [1 ]
Song, Dawn [1 ]
Affiliations:
[1] Univ Calif Berkeley, Berkeley, CA 94720 USA
[2] Swiss Fed Inst Technol, Zurich, Switzerland
[3] Zhejiang Univ, Hangzhou, Zhejiang, Peoples R China
[4] UIUC, Champaign, IL USA
Source:
PROCEEDINGS OF THE VLDB ENDOWMENT, 2019, Vol. 12, No. 11
DOI: 10.14778/3342263.3342637
CLC number: TP [Automation technology; computer technology]
Discipline code: 0812
Abstract:
Given a data set D containing millions of data points and a data consumer who is willing to pay $X to train a machine learning (ML) model over D, how should we distribute this $X to each data point to reflect its "value"? In this paper, we define the "relative value of data" via the Shapley value, as it uniquely possesses properties with appealing real-world interpretations, such as fairness, rationality and decentralizability. For general, bounded utility functions, the Shapley value is known to be challenging to compute: obtaining the Shapley values of all N data points requires O(2^N) model evaluations for exact computation and O(N log N) for (epsilon, delta)-approximation. In this paper, we focus on one popular family of ML models relying on K-nearest neighbors (KNN). The most surprising result is that, for unweighted KNN classifiers and regressors, the Shapley values of all N data points can be computed exactly in O(N log N) time - an exponential improvement in computational complexity! Moreover, for (epsilon, delta)-approximation, we develop an algorithm based on Locality Sensitive Hashing (LSH) with only sublinear complexity O(N^{h(epsilon, K)} log N) when epsilon is not too small and K is not too large. We empirically evaluate our algorithms on up to 10 million data points; even our exact algorithm is up to three orders of magnitude faster than the baseline approximation algorithm, and the LSH-based approximation algorithm accelerates the value calculation even further. We then extend our algorithms to other scenarios, such as (1) weighted KNN classifiers, (2) data points clustered under different data curators, and (3) data analysts who provide computation and also require proper valuation. Some of these extensions, although also exponentially improved, are less practical for exact computation (e.g., O(N^K) complexity for weighted KNN). We thus propose a Monte Carlo approximation algorithm, which is O(N (log N)^2 / (log K)^2) times more efficient than the baseline approximation algorithm.
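For a single test point and an unweighted KNN classifier, the O(N log N) exact computation reduces to sorting the training points by distance to the test point and then running one backward linear pass. The abstract does not spell out the recursion; the sketch below assumes the closed-form result stated in the published paper (its Theorem 1), with illustrative function and variable names, and takes the utility of a subset S to be the fraction of correct labels among the min(K, |S|) nearest neighbors of the test point within S:

```python
import numpy as np

def knn_shapley(dists, y_train, y_test, K):
    """Exact Shapley value of each training point for an unweighted KNN
    classifier and a single test point (sketch of the paper's recursion).

    dists   : (N,) distances from each training point to the test point
    y_train : (N,) training labels
    y_test  : label of the test point
    Returns an (N,) array aligned with the original training order.
    """
    N = len(dists)
    order = np.argsort(dists)                 # alpha_1 .. alpha_N, nearest first
    match = (np.asarray(y_train)[order] == y_test).astype(float)

    s = np.zeros(N)
    s[N - 1] = match[N - 1] / N               # farthest point anchors the recursion
    for i in range(N - 2, -1, -1):            # single O(N) backward pass
        rank = i + 1                          # 1-based rank of alpha_{i+1}
        s[i] = s[i + 1] + (match[i] - match[i + 1]) / K * min(K, rank) / rank

    out = np.zeros(N)
    out[order] = s                            # map back to original indices
    return out
```

Sorting dominates the cost at O(N log N); the recursion itself is linear. With multiple test points, the per-test-point values are averaged, so the whole procedure stays O(N log N) per test point.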
Pages: 1610-1623 (14 pages)