Differentiable Submodular Maximization

Cited by: 0
Authors
Tschiatschek, Sebastian [1 ]
Sahin, Aytunc [2 ]
Krause, Andreas [2 ]
Affiliations
[1] Microsoft Research, Cambridge, England
[2] Swiss Federal Institute of Technology (ETH Zurich), Zurich, Switzerland
Funding
Swiss National Science Foundation; European Research Council
Keywords
DOI: not available
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
We consider learning submodular functions from data. These functions are important in machine learning and have a wide range of applications, e.g., data summarization, feature selection, and active learning. Despite their combinatorial nature, submodular functions can be maximized approximately, with strong theoretical guarantees, in polynomial time. Typically, learning the submodular function and optimizing that function are treated separately, i.e., the function is first learned using a proxy objective and subsequently maximized. In contrast, we show how to perform learning and optimization jointly. By interpreting the output of greedy maximization algorithms as distributions over sequences of items and smoothing these distributions, we obtain a differentiable objective. In this way, we can differentiate through the maximization algorithm and optimize the model to work well with the optimization algorithm. We theoretically characterize the error made by our approach, yielding insights into the trade-off between smoothness and accuracy. We demonstrate the effectiveness of our approach for jointly learning and optimizing on synthetic maximum-cut data and on real-world applications such as product recommendation and image collection summarization.
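The smoothed-greedy idea described in the abstract can be sketched as follows: replace the argmax over marginal gains in each greedy step with a temperature-scaled softmax, so the algorithm's output becomes a distribution over item sequences that gradient-based learning can differentiate through; lowering the temperature recovers ordinary greedy, trading smoothness for accuracy. The Python sketch below is illustrative only; the function soft_greedy, the temperature parameter, and the toy modular objective are assumptions for exposition, not the authors' exact formulation.

    # Minimal sketch (assumed, not the paper's exact algorithm): at each greedy
    # step, draw the next item from a softmax over marginal gains instead of
    # taking the argmax, and accumulate the log-probability of the sequence.
    import numpy as np

    def soft_greedy(marginal_gain, ground_set, budget, temperature=1.0, rng=None):
        """Sample an item sequence; lower temperature approaches plain greedy."""
        rng = np.random.default_rng() if rng is None else rng
        selected, log_prob = [], 0.0
        remaining = list(ground_set)
        for _ in range(budget):
            gains = np.array([marginal_gain(selected, v) for v in remaining])
            logits = gains / temperature
            probs = np.exp(logits - logits.max())
            probs /= probs.sum()
            idx = rng.choice(len(remaining), p=probs)
            log_prob += np.log(probs[idx])
            selected.append(remaining.pop(idx))
        return selected, log_prob

    # Toy usage with a modular (hence submodular) objective given by item weights.
    weights = {0: 3.0, 1: 1.0, 2: 2.0}
    gain = lambda S, v: weights[v]  # marginal gain ignores the current set here
    print(soft_greedy(gain, weights.keys(), budget=2, temperature=0.5))

Because the sequence has an explicit log-probability, a learned scoring model inside marginal_gain could be trained end to end (e.g., with score-function or reparameterization-style estimators) to work well with the downstream maximization, which is the joint learning-and-optimization setting the paper studies.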
Pages: 2731 - 2738
Number of pages: 8
Related papers
50 records in total
  • [1] Decision-Oriented Learning with Differentiable Submodular Maximization for Vehicle Routing Problem
    Shi, Guangyao
    Tokekar, Pratap
    [J]. 2023 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2023, : 11135 - 11140
  • [2] Differentiable Greedy Algorithm for Monotone Submodular Maximization: Guarantees, Gradient Estimators, and Applications
    Sakaue, Shinsaku
    [J]. 24TH INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS (AISTATS), 2021, 130 : 28 - +
  • [3] Distributed Maximization of Submodular and Approximately Submodular Functions
    Ye, Lintao
    Sundaram, Shreyas
    [J]. 2020 59TH IEEE CONFERENCE ON DECISION AND CONTROL (CDC), 2020, : 2979 - 2984
  • [4] Distributed Submodular Maximization
    Mirzasoleiman, Baharan
    Karbasi, Amin
    Sarkar, Rik
    Krause, Andreas
    [J]. JOURNAL OF MACHINE LEARNING RESEARCH, 2016, 17
  • [5] Differentiable Learning of Submodular Models
    Djolonga, Josip
    Krause, Andreas
    [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 30 (NIPS 2017), 2017, 30
  • [6] Stochastic Submodular Maximization
    Asadpour, Arash
    Nazerzadeh, Hamid
    Saberi, Amin
    [J]. INTERNET AND NETWORK ECONOMICS, PROCEEDINGS, 2008, 5385 : 477 - 489
  • [7] Online Continuous Submodular Maximization
    Chen, Lin
    Hassani, Hamed
    Karbasi, Amin
    [J]. INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS, VOL 84, 2018, 84
  • [8] Robust Sequence Submodular Maximization
    Sallam, Gamal
    Zheng, Zizhan
    Wu, Jie
    Ji, Bo
    [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 33, NEURIPS 2020, 2020, 33
  • [9] Robust Adaptive Submodular Maximization
    Tang, Shaojie
    [J]. INFORMS JOURNAL ON COMPUTING, 2022, 34 (06) : 3277 - 3291
  • [10] Regularized Submodular Maximization at Scale
    Kazemi, Ehsan
    Minaee, Shervin
    Feldman, Moran
    Karbasi, Amin
    [J]. INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 139, 2021, 139