Can Visual Recognition Benefit from Auxiliary Information in Training?

Cited: 20
Authors
Zhang, Qilin [1 ]
Hua, Gang [1 ]
Liu, Wei [2 ]
Liu, Zicheng [3 ]
Zhang, Zhengyou [3 ]
Affiliations
[1] Stevens Inst Technol, Hoboken, NJ 07030 USA
[2] IBM Thomas J Watson Res Ctr, Yorktown Hts, NY USA
[3] Microsoft Res, Redmond, WA USA
DOI
10.1007/978-3-319-16865-4_5
CLC Number
TP18 [Theory of Artificial Intelligence]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
We examine an under-explored visual recognition problem in which the training data contain both a main view and an auxiliary view of visual information, but only the main view is available at test time. To leverage the auxiliary view effectively when training a stronger classifier, we propose a collaborative auxiliary learning framework based on a new discriminative canonical correlation analysis. This framework reveals a common semantic space shared across both views by enforcing a series of nonlinear projections. Such projections automatically embed the discriminative cues hidden in both views into the common space, so better visual recognition is achieved on test data drawn from the main view alone. The efficacy of our proposed auxiliary learning approach is demonstrated on three challenging visual recognition tasks with different kinds of auxiliary information.
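The paper's discriminative, nonlinear CCA is not reproduced here, but the two-view training / single-view testing protocol the abstract describes can be sketched with plain linear CCA on synthetic data. All variable names and the data-generating process below are illustrative assumptions, not the authors' setup:

```python
import numpy as np

def cca_projections(X, Y, k, reg=1e-3):
    """Linear CCA via whitening + SVD.

    Returns top-k projection matrices (Wx, Wy) that map the centered
    views into a common correlated subspace."""
    Xc, Yc = X - X.mean(0), Y - Y.mean(0)
    n = len(X)
    Sxx = Xc.T @ Xc / n + reg * np.eye(X.shape[1])  # regularized covariances
    Syy = Yc.T @ Yc / n + reg * np.eye(Y.shape[1])
    Sxy = Xc.T @ Yc / n
    Lx = np.linalg.inv(np.linalg.cholesky(Sxx))     # whitener for view X
    Ly = np.linalg.inv(np.linalg.cholesky(Syy))     # whitener for view Y
    U, _, Vt = np.linalg.svd(Lx @ Sxy @ Ly.T)       # correlate whitened views
    return Lx.T @ U[:, :k], Ly.T @ Vt[:k].T

rng = np.random.default_rng(0)
n, d_main, d_aux, k = 400, 20, 15, 4

# Synthetic shared latent signal drives both views and the labels.
z = rng.normal(size=(n, k))
y = (z[:, 0] > 0).astype(int)
Wm = rng.normal(size=(k, d_main))
Wa = rng.normal(size=(k, d_aux))
X_main = z @ Wm + 0.3 * rng.normal(size=(n, d_main))
X_aux = z @ Wa + 0.3 * rng.normal(size=(n, d_aux))

# Training: both views available -> learn the shared projection.
Wx, _ = cca_projections(X_main, X_aux, k)

# A simple nearest-class-mean classifier on the projected main view.
mu = X_main.mean(0)
P = (X_main - mu) @ Wx
mu0, mu1 = P[y == 0].mean(0), P[y == 1].mean(0)

# Testing: only the main view is observed; the auxiliary view is unused.
z_te = rng.normal(size=(200, k))
y_te = (z_te[:, 0] > 0).astype(int)
X_te = z_te @ Wm + 0.3 * rng.normal(size=(200, d_main))
P_te = (X_te - mu) @ Wx
pred = (np.linalg.norm(P_te - mu1, axis=1)
        < np.linalg.norm(P_te - mu0, axis=1)).astype(int)
acc = (pred == y_te).mean()
print(f"main-view-only test accuracy: {acc:.2f}")
```

The key structural point, which the paper's discriminative variant shares, is that the auxiliary view influences only how the projection is learned; once `Wx` is fixed, inference needs the main view alone.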
Pages: 65-80 (16 pages)