Learning Sparse Combinatorial Representations via Two-stage Submodular Maximization

Citations: 0
Authors
Balkanski, Eric [1 ]
Krause, Andreas [2 ]
Mirzasoleiman, Baharan [2 ]
Singer, Yaron [1 ]
Affiliations
[1] Harvard Univ, Cambridge, MA 02138 USA
[2] Swiss Fed Inst Technol, Zurich, Switzerland
Keywords
DOI
None available
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
We consider the problem of learning sparse representations of data sets, where the goal is to reduce a data set in a manner that optimizes multiple objectives. Motivated by applications of data summarization, we develop a new model which we refer to as the two-stage submodular maximization problem. This task can be viewed as a combinatorial analogue of representation learning problems such as dictionary learning and sparse regression. The two-stage problem strictly generalizes the problem of cardinality-constrained submodular maximization, though the objective function is not submodular and the techniques for submodular maximization cannot be applied. We describe a continuous optimization method which achieves an approximation ratio that asymptotically approaches 1 - 1/e. For instances where the asymptotics do not kick in, we design a local-search algorithm whose approximation ratio is arbitrarily close to 1/2. We empirically demonstrate the effectiveness of our methods on two multi-objective data summarization tasks, where the goal is to construct summaries via sparse representative subsets with respect to predefined objectives.
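To ground the special case the abstract mentions, the sketch below shows the classic greedy algorithm for cardinality-constrained monotone submodular maximization, which attains the (1 - 1/e) guarantee that the two-stage problem's continuous method approaches asymptotically. The `greedy_max` function and the coverage objective are illustrative assumptions, not the paper's algorithm or summarization objective.

```python
def greedy_max(f, ground_set, k):
    """Greedily add the element with the largest marginal gain, k times.

    For a monotone submodular f, this achieves a (1 - 1/e)-approximation
    to the optimal size-k subset.
    """
    selected = set()
    for _ in range(k):
        best, best_gain = None, float("-inf")
        for e in ground_set - selected:
            gain = f(selected | {e}) - f(selected)  # marginal gain of e
            if gain > best_gain:
                best, best_gain = e, gain
        selected.add(best)
    return selected


# Illustrative coverage function: f(S) = number of items covered by S.
# Coverage is a canonical monotone submodular function.
sets = {
    "a": {1, 2, 3},
    "b": {3, 4},
    "c": {5, 6, 7, 8},
    "d": {1, 5},
}

def coverage(S):
    covered = set()
    for s in S:
        covered |= sets[s]
    return len(covered)

summary = greedy_max(coverage, set(sets), 2)  # picks "c" (4 items), then "a" (+3)
```

In the two-stage setting the chosen subset must serve several objectives at once, which breaks submodularity of the combined objective; this single-objective greedy is only the baseline the paper generalizes.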
Pages: 10