Multi-task Regularization of Generative Similarity Models

Cited by: 0
Authors
Cazzanti, Luca [1 ]
Feldman, Sergey [2 ]
Gupta, Maya R. [2 ]
Gabbay, Michael [1 ]
Affiliations
[1] Univ Washington, Appl Phys Lab, Seattle, WA 98105 USA
[2] Univ Washington, Dept Elect Engn, Seattle, WA 98105 USA
Source
Keywords
similarity; generative similarity-based classification; discriminant analysis; multi-task learning; regularization;
DOI
Not available
CLC number
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
We investigate a multi-task approach to similarity discriminant analysis, where we propose treating the estimation of the different class-conditional distributions of the pairwise similarities as multiple tasks. We show that regularizing these estimates together using a least-squares regularization weighted by a task-relatedness matrix can reduce the resulting maximum a posteriori classification errors. Results are given for benchmark data sets spanning a range of applications. In addition, we present a new application of similarity-based learning to analyzing the rhetoric of multiple insurgent groups in Iraq. We show how to produce the necessary task relatedness information from standard given training data, as well as how to derive task-relatedness information if given side information about the class relatedness.
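The abstract's core idea, jointly regularizing per-task estimates with a least-squares penalty weighted by a task-relatedness matrix, can be sketched as follows. This is a minimal illustration assuming a standard graph-Laplacian smoothing formulation with a closed-form solution; the paper's exact estimator for the class-conditional similarity distributions may differ, and the function name and toy data are hypothetical.

```python
import numpy as np

def multitask_smooth(theta_hat, A, gamma=1.0):
    """Jointly regularize per-task estimates using task relatedness.

    theta_hat : (T, d) array of independently estimated per-task parameters.
    A         : (T, T) symmetric, nonnegative task-relatedness matrix.
    gamma     : regularization strength.

    Minimizes  ||Theta - theta_hat||_F^2 + gamma * tr(Theta.T @ L @ Theta),
    where L = diag(A.sum(1)) - A is the graph Laplacian of A.  Setting the
    gradient to zero gives the closed form  Theta = (I + gamma*L)^{-1} theta_hat.
    """
    T = A.shape[0]
    L = np.diag(A.sum(axis=1)) - A              # graph Laplacian of relatedness
    return np.linalg.solve(np.eye(T) + gamma * L, theta_hat)

# Toy example: tasks 1 and 2 are strongly related, task 3 is unrelated.
# The related estimates are pulled toward each other; the third is untouched.
theta_hat = np.array([[0.0],
                      [1.0],
                      [5.0]])
A = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 0.0, 0.0]])
smoothed = multitask_smooth(theta_hat, A, gamma=1.0)
```

With `gamma = 0` the independent estimates are returned unchanged; as `gamma` grows, estimates for related tasks shrink toward a common value, which is the intuition behind the reduced MAP classification error the abstract reports.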
Pages: 90 / +
Number of pages: 3
Related papers
50 records in total
  • [41] Multi-Task Sparse Metric Learning for Monitoring Patient Similarity Progression
    Suo, Qiuling
    Zhong, Weida
    Ma, Fenglong
    Yuan, Ye
    Huai, Mengdi
    Zhang, Aidong
    2018 IEEE INTERNATIONAL CONFERENCE ON DATA MINING (ICDM), 2018, : 477 - 486
  • [42] Similarity preserving multi-task learning for radar target recognition
    He, Hua
    Du, Lan
    Liu, Yue
    Ding, Jun
    INFORMATION SCIENCES, 2018, 436 : 388 - 402
  • [43] Multi-Task Learning Tracking Method Based on the Similarity of Dynamic Samples
    Shi Zaifeng
    Sun Cheng
    Cao Qingjie
    Wang Zhe
    Fan Qiangqiang
    LASER & OPTOELECTRONICS PROGRESS, 2021, 58 (16)
  • [44] Efficient Inference in Multi-task Cox Process Models
    Aglietti, Virginia
    Damoulas, Theodoros
    Bonilla, Edwin V.
    22ND INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS, VOL 89, 2019, 89 : 537 - 546
  • [45] Multi-task Gaussian process models for biomedical applications
    Duerichen, Robert
    Pimentel, Marco A. F.
    Clifton, Lei
    Schweikard, Achim
    Clifton, David A.
    2014 IEEE-EMBS INTERNATIONAL CONFERENCE ON BIOMEDICAL AND HEALTH INFORMATICS (BHI), 2014, : 492 - 495
  • [46] Communication-efficient distributed multi-task learning with matrix sparsity regularization
    Zhou, Qiang
    Chen, Yu
    Pan, Sinno Jialin
    MACHINE LEARNING, 2020, 109 (03) : 569 - 601
  • [47] Flexible latent variable models for multi-task learning
    Zhang, Jian
    Ghahramani, Zoubin
    Yang, Yiming
    MACHINE LEARNING, 2008, 73 (03) : 221 - 242
  • [49] Multi-Task Learning for Interpretation of Brain Decoding Models
    Kia, Seyed Mostafa
    Vega-Pons, Sandro
    Olivetti, Emanuele
    Avesani, Paolo
    MACHINE LEARNING AND INTERPRETATION IN NEUROIMAGING, MLINI 2014, 2016, 9444 : 3 - 11
  • [50] Training Complex Models with Multi-Task Weak Supervision
    Ratner, Alexander
    Hancock, Braden
    Dunnmon, Jared
    Sala, Frederic
    Pandey, Shreyash
    Re, Christopher
    THIRTY-THIRD AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FIRST INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE / NINTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2019, : 4763 - 4771