Stochastic Variance Reduction for DR-Submodular Maximization

Cited by: 1
Authors
Lian, Yuefang [1 ,3 ]
Du, Donglei [2 ]
Wang, Xiao [3 ]
Xu, Dachuan [1 ]
Zhou, Yang [4 ]
Affiliations
[1] Beijing Univ Technol, Beijing Inst Sci & Engn Comp, Beijing 100124, Peoples R China
[2] Univ New Brunswick, Fac Management, Fredericton, NB E3B 9Y2, Canada
[3] Peng Cheng Lab, Shenzhen 518066, Peoples R China
[4] Shandong Normal Univ, Sch Math & Stat, Jinan 250014, Shandong, Peoples R China
Funding
Natural Sciences and Engineering Research Council of Canada; Academy of Finland; National Natural Science Foundation of China
Keywords
Stochastic algorithm; DR-Submodular; Gradient estimator; Variance reduction; Frank-Wolfe algorithm; FUNCTION SUBJECT; MINIMIZATION;
DOI
10.1007/s00453-023-01195-z
CLC Classification Number
TP31 [Computer Software]
Subject Classification Codes
081202; 0835
Abstract
Stochastic optimization has grown substantially in recent decades, with variance reduction techniques becoming increasingly prevalent in stochastic optimization algorithms as a way to improve computational efficiency. In this paper, we introduce two projection-free stochastic approximation algorithms for maximizing diminishing-returns (DR) submodular functions over convex constraints, building upon the Stochastic Path Integrated Differential EstimatoR (SPIDER) and its variants. First, for the monotone case, we present a SPIDER Continuous Greedy (SPIDER-CG) algorithm that guarantees a $(1 - e^{-1})\,\mathrm{OPT} - \epsilon$ approximation after $O(\epsilon^{-1})$ iterations and $O(\epsilon^{-2})$ stochastic gradient computations under the mean-squared smoothness assumption. For the non-monotone case, we develop a SPIDER Frank-Wolfe (SPIDER-FW) algorithm that guarantees a $\frac{1}{4}\left(1 - \min_{x \in \mathcal{C}} \|x\|_{\infty}\right)\mathrm{OPT} - \epsilon$ approximation with $O(\epsilon^{-1})$ iterations and $O(\epsilon^{-2})$ stochastic gradient estimates. To address the practical challenge of requiring a large number of samples per iteration, we introduce a modified gradient estimator based on SPIDER, leading to a Hybrid SPIDER-FW (respectively, Hybrid SPIDER-CG) algorithm that achieves the same approximation guarantee as SPIDER-FW (respectively, SPIDER-CG) with only $O(1)$ samples per iteration. Numerical experiments on both simulated and real data demonstrate the efficiency of the proposed methods.
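As a rough illustration of the SPIDER-style variance reduction the abstract refers to, the Python sketch below shows an epoch-based SPIDER estimator driving a continuous-greedy (Frank-Wolfe) loop. This is a minimal sketch under stated assumptions, not the authors' exact SPIDER-CG: `sample`, `grad`, and `lmo` are hypothetical oracle placeholders for drawing stochastic samples, averaging per-sample gradients, and solving the linear maximization over the constraint set C.

```python
# Sketch of a SPIDER-style continuous greedy loop (illustrative only;
# `sample`, `grad`, and `lmo` are hypothetical oracles, not the paper's API).
import numpy as np

def spider_cg(sample, grad, lmo, dim, T=100, epoch_len=10,
              big_batch=1000, small_batch=10):
    """Variance-reduced continuous greedy sketch.

    sample(n)   -> a batch of n stochastic samples
    grad(x, s)  -> gradient estimate at x averaged over samples s
    lmo(g)      -> argmax over v in C of <g, v> (linear maximization oracle)
    """
    x = np.zeros(dim)
    x_prev = x.copy()
    g = np.zeros(dim)
    for t in range(T):
        if t % epoch_len == 0:
            # Anchor step: a large batch resets the gradient estimator.
            g = grad(x, sample(big_batch))
        else:
            # SPIDER recursion: evaluate the *same* small batch at the
            # current and previous iterates so their noise cancels,
            # keeping the estimator's variance small between anchors.
            s = sample(small_batch)
            g = g + grad(x, s) - grad(x_prev, s)
        x_prev = x.copy()
        v = lmo(g)       # Frank-Wolfe direction inside C
        x = x + v / T    # continuous-greedy step; x lands in C after T steps
    return x
```

The Hybrid variants described in the abstract dispense with the large anchor batch so that every iteration needs only $O(1)$ samples; the sketch keeps the classical epoch structure for simplicity.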
Pages: 1335-1364
Page count: 30
Related Papers
50 records in total
  • [1] Stochastic Variance Reduction for DR-Submodular Maximization
    Yuefang Lian
    Donglei Du
    Xiao Wang
    Dachuan Xu
    Yang Zhou
    Algorithmica, 2024, 86 : 1335 - 1364
  • [2] Continuous DR-submodular Maximization: Structure and Algorithms
    Bian, An
    Levy, Kfir Y.
    Krause, Andreas
    Buhmann, Joachim M.
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 30 (NIPS 2017), 2017, 30
  • [3] Continuous Profit Maximization: A Study of Unconstrained Dr-Submodular Maximization
    Guo, Jianxiong
    Wu, Weili
    IEEE TRANSACTIONS ON COMPUTATIONAL SOCIAL SYSTEMS, 2021, 8 (03) : 768 - 779
  • [4] Online Non-Monotone DR-Submodular Maximization
    Nguyen Kim Thang
    Srivastav, Abhinav
    THIRTY-FIFTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THIRTY-THIRD CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE AND THE ELEVENTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2021, 35 : 9868 - 9876
  • [5] Constrained Submodular Maximization via New Bounds for DR-Submodular Functions
    Buchbinder, Niv
    Feldman, Moran
    PROCEEDINGS OF THE 56TH ANNUAL ACM SYMPOSIUM ON THEORY OF COMPUTING, STOC 2024, 2024, : 1820 - 1831
  • [6] Decentralized Gradient Tracking for Continuous DR-Submodular Maximization
    Xie, Jiahao
    Zhang, Chao
    Shen, Zebang
    Mi, Chao
    Qian, Hui
    22ND INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS, VOL 89, 2019, 89
  • [7] Non-Monotone DR-Submodular Function Maximization
    Soma, Tasuku
    Yoshida, Yuichi
    THIRTY-FIRST AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2017, : 898 - 904
  • [8] Subspace Selection via DR-Submodular Maximization on Lattices
    Nakashima, So
    Maehara, Takanori
    THIRTY-THIRD AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FIRST INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE / NINTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2019, : 4618 - 4625
  • [9] A Stochastic Non-monotone DR-Submodular Maximization Problem over a Convex Set
    Lian, Yuefang
    Xu, Dachuan
    Du, Donglei
    Zhou, Yang
    COMPUTING AND COMBINATORICS, COCOON 2022, 2022, 13595 : 1 - 11
  • [10] Optimal Algorithms for Continuous Non-monotone Submodular and DR-Submodular Maximization
    Niazadeh, Rad
    Roughgarden, Tim
    Wang, Joshua R.
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 31 (NIPS 2018), 2018, 31