Learning Sparse Combinatorial Representations via Two-stage Submodular Maximization

Citations: 0
Authors:
Balkanski, Eric [1 ]
Krause, Andreas [2 ]
Mirzasoleiman, Baharan [2 ]
Singer, Yaron [1 ]
Affiliations:
[1] Harvard Univ, Cambridge, MA 02138 USA
[2] Swiss Fed Inst Technol, Zurich, Switzerland
DOI: not available
CLC number: TP18 [Artificial Intelligence Theory]
Discipline codes: 081104; 0812; 0835; 1405
Abstract
We consider the problem of learning sparse representations of data sets, where the goal is to reduce a data set in a manner that optimizes multiple objectives. Motivated by applications of data summarization, we develop a new model which we refer to as the two-stage submodular maximization problem. This task can be viewed as a combinatorial analogue of representation learning problems such as dictionary learning and sparse regression. The two-stage problem strictly generalizes the problem of cardinality-constrained submodular maximization, though the objective function is not submodular and the techniques for submodular maximization cannot be applied. We describe a continuous optimization method which achieves an approximation ratio that asymptotically approaches 1 - 1/e. For instances where the asymptotics do not kick in, we design a local-search algorithm whose approximation ratio is arbitrarily close to 1/2. We empirically demonstrate the effectiveness of our methods on two multi-objective data summarization tasks, where the goal is to construct summaries via sparse representative subsets with respect to predefined objectives.
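To make the two-stage objective concrete, here is a minimal Python sketch. It is not the paper's algorithm: the objectives are toy monotone coverage functions, the inner maximization (choose at most k elements of the summary S per objective) is approximated greedily, and the outer selection uses a naive swap-based local search loosely in the spirit of the local-search approach mentioned in the abstract. All names (`COVER`, `coverage`, `local_search`) are illustrative assumptions.

```python
# Toy ground set: item -> set of points it covers.
COVER = {0: {1, 2}, 1: {2, 3}, 2: {4}, 3: {1, 4, 5}}

def coverage(universe):
    """Monotone submodular coverage objective over a target universe."""
    def f(T):
        covered = set().union(*(COVER[e] for e in T)) if T else set()
        return len(covered & universe)
    return f

def greedy_inner(f, S, k):
    """Greedily approximate max_{T subset of S, |T| <= k} f(T)."""
    T = set()
    for _ in range(k):
        best, gain = None, 0.0
        for e in S - T:
            g = f(T | {e}) - f(T)
            if g > gain:
                best, gain = e, g
        if best is None:      # no element yields positive marginal gain
            break
        T.add(best)
    return f(T)

def two_stage_value(funcs, S, k):
    """F(S) = sum_i max_{T in S, |T| <= k} f_i(T), inner max approximated."""
    return sum(greedy_inner(f, set(S), k) for f in funcs)

def local_search(ground, funcs, ell, k):
    """Swap one summary element at a time while the two-stage value improves."""
    ground = list(ground)
    S = set(ground[:ell])                 # arbitrary initial summary of size ell
    val = two_stage_value(funcs, S, k)
    improved = True
    while improved:
        improved = False
        for out in sorted(S):
            for into in ground:
                if into in S:
                    continue
                cand = (S - {out}) | {into}
                cand_val = two_stage_value(funcs, cand, k)
                if cand_val > val:        # accept any improving swap
                    S, val, improved = cand, cand_val, True
                    break
            if improved:
                break
    return S, val

funcs = [coverage({1, 2, 3}), coverage({4, 5})]  # two summarization objectives
S, val = local_search(COVER, funcs, ell=2, k=1)
print(S, val)
```

Each improving swap strictly increases the bounded objective, so the loop terminates; on this toy instance the search settles on the summary {1, 3} with total value 4, since item 1 covers the first universe well and item 3 the second.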
Pages: 10
Related papers (50 total):
  • [31] SAR Target Configuration Recognition via Two-Stage Sparse Structure Representation
    Liu, Ming
    Chen, Shichao
    Wu, Jie
    Lu, Fugang
    Wang, Xili
    Xing, Mengdao
    IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2018, 56(4): 2220-2232
  • [32] Structural damage detection with two-stage modal information and sparse Bayesian learning
    Zou, Yunfeng
    Yang, Guochen
    Lu, Xuandong
    He, Xuhui
    Cai, Chenzhi
    STRUCTURES, 2023, 58
  • [33] A Two-Stage Approach for Learning a Sparse Model with Sharp Excess Risk Analysis
    Li, Zhe
    Yang, Tianbao
    Zhang, Lijun
    Jin, Rong
    THIRTY-FIRST AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2017: 2224-2230
  • [34] Improving Word Translation via Two-Stage Contrastive Learning
    Li, Yaoyiran
    Liu, Fangyu
    Collier, Nigel
    Korhonen, Anna
    Vulic, Ivan
    PROCEEDINGS OF THE 60TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2022), VOL 1 (LONG PAPERS), 2022: 4353-4374
  • [35] An Efficient Two-Stage Sparse Representation Method
    Peng, Chengyu
    Cheng, Hong
    Ko, Manchor
    INTERNATIONAL JOURNAL OF PATTERN RECOGNITION AND ARTIFICIAL INTELLIGENCE, 2016, 30(1)
  • [36] Two-Stage Multiscale Search for Sparse Targets
    Bashan, Eran
    Newstadt, Gregory
    Hero, Alfred O., III
    IEEE TRANSACTIONS ON SIGNAL PROCESSING, 2011, 59(5): 2331-2341
  • [37] Budgeted stream-based active learning via adaptive submodular maximization
    Fujii, Kaito
    Kashima, Hisashi
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 29 (NIPS 2016), 2016, 29
  • [38] Improving imbalance classification via ensemble learning based on two-stage learning
    Liu, Na
    Wang, Jiaqi
    Zhu, Yongtong
    Wan, Lihong
    Li, Qingdu
    FRONTIERS IN COMPUTATIONAL NEUROSCIENCE, 2024, 17
  • [39] Two-Stage Metric Learning
    Wang, Jun
    Sun, Ke
    Sha, Fei
    Marchand-Maillet, Stephane
    Kalousis, Alexandros
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 32 (CYCLE 2), 2014, 32: 370-378
  • [40] Two-Stage Robust Combinatorial Optimization with Priced Scenarios
    Rischke, Roman
    OPERATIONS RESEARCH PROCEEDINGS 2013, 2014: 377-382