Submodular Maximization via Gradient Ascent: The Case of Deep Submodular Functions

Cited: 0
Authors
Bai, Wenruo [1 ]
Noble, William S. [2 ,3 ]
Bilmes, Jeff A. [1 ,2 ]
Affiliations
[1] Univ Washington, Dept Elect & Comp Engn, Seattle, WA 98195 USA
[2] Univ Washington, Dept Comp Sci & Engn, Seattle, WA 98195 USA
[3] Univ Washington, Dept Genome Sci, Seattle, WA 98195 USA
Funding
US National Science Foundation; US National Institutes of Health;
Keywords
FUNCTION SUBJECT; ALGORITHMS; LOCATION;
DOI
Not available
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
We study the problem of maximizing deep submodular functions (DSFs) [13, 3] subject to a matroid constraint. DSFs are an expressive class of submodular functions that include, as strict subfamilies, facility location functions, weighted coverage functions, and sums of concave functions composed with modular functions. We use a strategy similar to the continuous greedy approach [6], but we show that the multilinear extension of any DSF has a natural and computationally attainable concave relaxation that we can optimize using gradient ascent. Our results show a guarantee of max_{0<δ<1} (1 − ε − δ − e^{−δ²Ω(k)}) with a running time of O(n²/ε²), plus the time for pipage rounding [6] to recover a discrete solution, where k is the rank of the matroid constraint. This bound is often better than the standard 1 − 1/e guarantee of the continuous greedy algorithm, and the algorithm runs much faster. Our bound holds even for fully curved (c = 1) functions, for which the curvature-dependent guarantee of 1 − c/e degenerates to 1 − 1/e, where c is the curvature of f [37]. We perform computational experiments that support our theoretical results.
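To make the abstract's recipe concrete, below is a minimal Python sketch, not the authors' implementation, of the high-level pipeline on a toy instance: projected gradient ascent on the natural concave extension of a concave-over-modular function (one of the DSF subfamilies named above), constrained to a uniform-matroid (cardinality-k) polytope, followed by a naive top-k rounding that merely stands in for the pipage rounding used in the paper. The sqrt concave function, all constants, and the rounding shortcut are illustrative assumptions. (Intuitively, the abstract's guarantee beats 1 − 1/e for large k because the e^{−δ²Ω(k)} term can be driven to zero with a small δ.)

# Sketch: gradient ascent on a concave relaxation of a submodular function.
# f(S) = sum_j w_j * sqrt(m_j(S)) (concave-over-modular, a DSF subfamily)
# has the natural concave extension F(x) = sum_j w_j * sqrt(<m_j, x>) on [0,1]^n.
import numpy as np

rng = np.random.default_rng(0)
n, J, k = 30, 5, 6                       # ground-set size, #concave terms, matroid rank
W = rng.random(J)                        # outer weights w_j >= 0 (illustrative)
M = rng.random((J, n))                   # nonnegative modular weights m_j (illustrative)

def F(x):
    """Concave extension: sum_j w_j * sqrt(m_j . x)."""
    return float(W @ np.sqrt(M @ x))

def grad_F(x):
    """Gradient of F; d/dt sqrt(t) = 1/(2 sqrt(t))."""
    inner = M @ x
    return (W / (2.0 * np.sqrt(inner + 1e-12))) @ M

def project(y, k):
    """Euclidean projection onto {x : 0 <= x <= 1, sum(x) <= k}, via bisection
    on the multiplier tau in x = clip(y - tau, 0, 1)."""
    x = np.clip(y, 0.0, 1.0)
    if x.sum() <= k:
        return x
    lo, hi = 0.0, float(y.max())
    for _ in range(60):                  # bisection until the budget is met
        tau = 0.5 * (lo + hi)
        if np.clip(y - tau, 0.0, 1.0).sum() > k:
            lo = tau
        else:
            hi = tau
    return np.clip(y - 0.5 * (lo + hi), 0.0, 1.0)

# Projected gradient ascent on the concave relaxation.
x = project(np.full(n, k / n), k)
step = 0.1
for _ in range(500):
    x = project(x + step * grad_F(x), k)

# Naive stand-in for pipage rounding: keep the k largest coordinates.
S = np.argsort(-x)[:k]
f_S = float(W @ np.sqrt(M[:, S].sum(axis=1)))
print(f"relaxation value F(x*) = {F(x):.4f}, rounded set value f(S) = {f_S:.4f}")

Note that the top-k rounding is only a convenient heuristic for this uniform-matroid toy case; the guarantee stated in the abstract relies on pipage rounding over the matroid polytope.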
Pages: 11
Related Papers
50 records in total
  • [21] Deep Submodular Functions: Definitions & Learning
    Dolhansky, Brian
    Bilmes, Jeff
    [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 29 (NIPS 2016), 2016, 29
  • [22] Online Submodular Maximization via Online Convex Optimization
    Salem, Tareq Si
    Ozcan, Gozde
    Nikolaou, Iasonas
    Terzi, Evimaria
    Ioannidis, Stratis
    [J]. THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 13, 2024 : 15038 - 15046
  • [23] Maximizing Submodular Functions under Submodular Constraints
    Padmanabhan, Madhavan R.
    Zhu, Yanhui
    Basu, Samik
    Pavan, A.
    [J]. UNCERTAINTY IN ARTIFICIAL INTELLIGENCE, 2023, 216 : 1618 - 1627
  • [24] Conditional Gradient Method for Stochastic Submodular Maximization: Closing the Gap
    Mokhtari, Aryan
    Hassani, Hamed
    Karbasi, Amin
    [J]. INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS, VOL 84, 2018, 84
  • [25] Decentralized Gradient Tracking for Continuous DR-Submodular Maximization
    Xie, Jiahao
    Zhang, Chao
    Shen, Zebang
    Mi, Chao
    Qian, Hui
    [J]. 22ND INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS, VOL 89, 2019, 89
  • [26] Phrase Table Pruning via Submodular Function Maximization
    Nishino, Masaaki
    Suzuki, Jun
    Nagata, Masaaki
    [J]. PROCEEDINGS OF THE 54TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2016), VOL 2, 2016 : 406 - 411
  • [27] Greedy Modality Selection via Approximate Submodular Maximization
    Cheng, Runxiang
    Balasubramaniam, Gargi
    He, Yifei
    Tsai, Yao-Hung Hubert
    Zhao, Han
    [J]. UNCERTAINTY IN ARTIFICIAL INTELLIGENCE, VOL 180, 2022, 180 : 389 - 399
  • [28] Constrained submodular maximization via greedy local search
    Sarpatwar, Kanthi K.
    Schieber, Baruch
    Shachnai, Hadas
    [J]. OPERATIONS RESEARCH LETTERS, 2019, 47 (01) : 1 - 6
  • [29] Online Continuous Submodular Maximization
    Chen, Lin
    Hassani, Hamed
    Karbasi, Amin
    [J]. INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS, VOL 84, 2018, 84
  • [30] Robust Adaptive Submodular Maximization
    Tang, Shaojie
    [J]. INFORMS JOURNAL ON COMPUTING, 2022, 34 (06) : 3277 - 3291