Efficient Task-Specific Data Valuation for Nearest Neighbor Algorithms

Cited by: 81
Authors:
Jia, Ruoxi [1 ]
Dao, David [2 ]
Wang, Boxin [3 ]
Hubis, Frances Ann [2 ]
Gurel, Nezihe Merve [2 ]
Li, Bo [4 ]
Zhang, Ce [2 ]
Spanos, Costas J. [1 ]
Song, Dawn [1 ]
Affiliations:
[1] Univ Calif Berkeley, Berkeley, CA 94720 USA
[2] Swiss Fed Inst Technol, Zurich, Switzerland
[3] Zhejiang Univ, Hangzhou, Zhejiang, Peoples R China
[4] UIUC, Champaign, IL USA
Source:
PROCEEDINGS OF THE VLDB ENDOWMENT, 2019, Vol. 12, No. 11
DOI: 10.14778/3342263.3342637
Chinese Library Classification: TP [automation and computer technology]
Subject classification code: 0812
Abstract:
Given a data set D containing millions of data points and a data consumer who is willing to pay $X to train a machine learning (ML) model over D, how should we distribute this $X to each data point to reflect its "value"? In this paper, we define the "relative value of data" via the Shapley value, as it uniquely possesses properties with appealing real-world interpretations, such as fairness, rationality and decentralizability. For general, bounded utility functions, the Shapley value is known to be challenging to compute: obtaining Shapley values for all N data points requires O(2^N) model evaluations for exact computation and O(N log N) for (epsilon, delta)-approximation. In this paper, we focus on one popular family of ML models relying on K-nearest neighbors (KNN). The most surprising result is that for unweighted KNN classifiers and regressors, the Shapley values of all N data points can be computed, exactly, in O(N log N) time, an exponential improvement in computational complexity! Moreover, for (epsilon, delta)-approximation, we are able to develop an algorithm based on Locality Sensitive Hashing (LSH) with only sublinear complexity O(N^(h(epsilon,K)) log N) when epsilon is not too small and K is not too large. We empirically evaluate our algorithms on up to 10 million data points, and even our exact algorithm is up to three orders of magnitude faster than the baseline approximation algorithm. The LSH-based approximation algorithm can accelerate the value-calculation process even further. We then extend our algorithm to other scenarios, such as (1) weighted KNN classifiers, (2) different data points being clustered by different data curators, and (3) data analysts providing computation who also require proper valuation. Some of these extensions, although also improved exponentially, are less practical for exact computation (e.g., O(N^K) complexity for weighted KNN). We thus propose a Monte Carlo approximation algorithm, which is O(N (log N)^2 / (log K)^2) times more efficient than the baseline approximation algorithm.
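The O(N log N) exact result for unweighted KNN classifiers rests on a closed-form recursion over the training points sorted by distance to the test point, evaluated from the farthest point inward. A minimal single-test-point Python sketch of that recursion; the function name, the 1-D toy distance, and the variable names are ours, not the paper's code:

```python
from typing import List, Tuple

def knn_shapley(train: List[Tuple[float, int]], test_x: float,
                test_y: int, K: int) -> List[float]:
    """Exact Shapley values for an unweighted KNN classifier and a single
    test point. The utility of a subset S is the fraction of the
    min(K, |S|) nearest neighbors in S whose label matches test_y."""
    N = len(train)
    # Sort training indices by distance to the test point.
    order = sorted(range(N), key=lambda i: abs(train[i][0] - test_x))
    s = [0.0] * N
    # Base case (farthest point): s = 1[label matches] / N.
    s[order[-1]] = float(train[order[-1]][1] == test_y) / N
    # Recurse inward: for rank i = N-1, ..., 1 (1-based),
    # s_i = s_{i+1} + (1[y_i = y] - 1[y_{i+1} = y]) / K * min(K, i) / i.
    for j in range(N - 2, -1, -1):
        i = j + 1  # 1-based rank of this point in the sorted order
        cur = float(train[order[j]][1] == test_y)
        nxt = float(train[order[j + 1]][1] == test_y)
        s[order[j]] = s[order[j + 1]] + (cur - nxt) / K * min(K, i) / i
    return s

# Toy example: three points, K = 2, test point (0.0, label 1).
vals = knn_shapley([(0.1, 1), (0.2, 0), (0.3, 1)], 0.0, 1, 2)
# By the efficiency property, the values sum to the full-set utility
# (here 1 correct label among the 2 nearest neighbors, i.e. 0.5).
```

The sorting step dominates, giving the O(N log N) total; the recursion itself is a single O(N) pass. With multiple test points, one runs the same pass per test point and averages the resulting values.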
Pages: 1610-1623 (14 pages)
Related Papers (50 records):
  • [1] Efficient Data Shapley for Weighted Nearest Neighbor Algorithms
    Wang, Jiachen T.
    Mittal, Prateek
    Jia, Ruoxi
    INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS, VOL 238, 2024
  • [2] Efficient nearest neighbor classification with data reduction and fast search algorithms
    Sánchez, JS
    Sotoca, JM
    Pla, F
    2004 IEEE INTERNATIONAL CONFERENCE ON SYSTEMS, MAN & CYBERNETICS, VOLS 1-7, 2004, : 4757 - +
  • [3] Efficient Algorithms for Bayesian Nearest Neighbor Gaussian Processes
    Finley, Andrew O.
    Datta, Abhirup
    Cook, Bruce D.
    Morton, Douglas C.
    Andersen, Hans E.
    Banerjee, Sudipto
    JOURNAL OF COMPUTATIONAL AND GRAPHICAL STATISTICS, 2019, 28 (02) : 401 - 414
  • [4] Scalable Nearest Neighbor Algorithms for High Dimensional Data
    Muja, Marius
    Lowe, David G.
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2014, 36 (11) : 2227 - 2240
  • [5] Space Efficient Data Structures for Nearest Larger Neighbor
    Jayapaul, Varunkumar
    Jo, Seungbum
    Raman, Venkatesh
    Satti, Srinivasa Rao
    COMBINATORIAL ALGORITHMS, IWOCA 2014, 2015, 8986 : 176 - 187
  • [6] Space efficient data structures for nearest larger neighbor
    Jayapaul, Varunkumar
    Jo, Seungbum
    Raman, Rajeev
    Raman, Venkatesh
    Satti, Srinivasa Rao
    JOURNAL OF DISCRETE ALGORITHMS, 2016, 36 : 63 - 75
  • [7] Efficient distributed data condensation for nearest neighbor classification
    Angiulli, Fabrizio
    Folino, Gianluigi
    EURO-PAR 2007 PARALLEL PROCESSING, PROCEEDINGS, 2007, 4641 : 338 - +
  • [8] Efficient Nearest-Neighbor Data Sharing in GPUs
    Nematollahi, Negin
    Sadrosadati, Mohammad
    Falahati, Hajar
    Barkhordar, Marzieh
    Drumond, Mario Paulo
    Sarbazi-Azad, Hamid
    Falsafi, Babak
    ACM TRANSACTIONS ON ARCHITECTURE AND CODE OPTIMIZATION, 2021, 18 (01)
  • [9] Efficient Algorithms to Monitor Continuous Constrained k Nearest Neighbor Queries
    Hasan, Mahady
    Cheema, Muhammad Aamir
    Qu, Wenyu
    Lin, Xuemin
    DATABASE SYSTEMS FOR ADVANCED APPLICATIONS, PT I, PROCEEDINGS, 2010, 5981 : 233 - +
  • [10] EFFICIENT ALGORITHMS FOR COMPUTING 2 NEAREST-NEIGHBOR PROBLEMS ON A RAP
    KAO, TW
    HORNG, SJ
    PATTERN RECOGNITION, 1994, 27 (12) : 1707 - 1716