An l2/l1 regularization framework for diverse learning tasks

Cited by: 4
Authors
Wang, Shengzheng [1 ]
Peng, Jing [1 ]
Liu, Wei [1 ]
Affiliations
[1] Shanghai Maritime Univ, Merchant Marine Coll, Shanghai 201306, Peoples R China
Keywords
l2/l1 regularization; Diverse tasks; Regularized empirical risk minimization; Machine learning
DOI
10.1016/j.sigpro.2014.11.010
Chinese Library Classification
TM (Electrical engineering); TN (Electronics and communication technology)
Discipline codes
0808; 0809
Abstract
Regularization plays an important role in learning tasks by incorporating prior knowledge about a problem and thus improving learning performance. Well-known regularization methods, including l2 and l1 regularization, have shown great success in a variety of conventional learning tasks, and new types of regularization have been developed for modern problems such as multi-task learning. In this paper, we introduce l2/l1 regularization for diverse learning tasks. The l2/l1 regularization is a mixed norm defined over the parameters of the diverse learning tasks. It adaptively encourages feature diversity among the tasks: when a feature is responsible for some tasks, it is unlikely to be responsible for the rest. We consider two applications of the l2/l1 regularization framework: learning a sparse self-representation of a dataset for clustering, and learning one-vs.-rest binary classifiers for multi-class classification. Both confirm the effectiveness of the new regularization framework on benchmark datasets. (C) 2014 Elsevier B.V. All rights reserved.
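The abstract describes the l2/l1 penalty only informally (a mixed norm over the task parameters that discourages a feature from serving many tasks at once). A minimal sketch of one common mixed-norm formulation with this effect, assuming the l1 norm is taken across tasks for each feature and the l2 norm across features (the paper's exact definition may differ):

```python
import numpy as np

def l2_l1_mixed_norm(W):
    """Mixed l2/l1 norm of a parameter matrix W (features x tasks).

    Illustrative formulation (an assumption, not necessarily the paper's
    exact definition): take the l1 norm of each feature's weights across
    tasks, then the l2 norm of the resulting per-feature vector.
    Penalizing this quantity favors parameter matrices in which each
    feature is used by few tasks, i.e., feature diversity across tasks.
    """
    per_feature_l1 = np.abs(W).sum(axis=1)  # l1 across tasks, per feature
    return np.linalg.norm(per_feature_l1)   # l2 across features

# A matrix whose features are shared by all tasks pays a larger penalty
# than one (with the same total weight mass) whose features are split
# between tasks, so minimizing the penalty pushes toward diversity.
W_shared = np.array([[1.0, 1.0],
                     [1.0, 1.0]])   # both features serve both tasks
W_diverse = np.array([[1.0, 0.0],
                      [0.0, 1.0]])  # each feature tied to one task
print(l2_l1_mixed_norm(W_shared))   # -> 2.828... (sqrt(8))
print(l2_l1_mixed_norm(W_diverse))  # -> 1.414... (sqrt(2))
```

Under this formulation, adding the penalty to a regularized empirical risk objective steers the learner toward assigning each feature to a small subset of the tasks, matching the diversity behavior the abstract describes.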
Pages: 206-211 (6 pages)