SVDML: Semantic and Visual Space Deep Mutual Learning for Zero-Shot Learning
Cited by: 0
Authors:
Lu, Nannan [1]
Luo, Yi [1]
Qiu, Mingkai [1]
Affiliations:
[1] China Univ Min & Technol, Sch Informat & Control Engn, Xuzhou 221100, Jiangsu, Peoples R China
Source:
Funding:
National Natural Science Foundation of China;
Keywords:
Zero-shot Learning;
Semantic Representation;
Visual Representation;
Mutual Learning;
DOI:
10.1007/978-981-99-8546-3_31
CLC number:
TP18 [Theory of Artificial Intelligence];
Discipline codes:
081104; 0812; 0835; 1405;
Abstract:
The key challenge of zero-shot learning (ZSL) is to recognize novel classes for which no samples are available during training. Current approaches either align the global features of images to the corresponding class semantic vectors, or use unidirectional attention to locate local visual features of images via semantic attributes and thereby avoid interference from other noise in the image. However, they still fail to establish a robust correlation between the semantic and visual representations. To address this issue, we propose Semantic and Visual space Deep Mutual Learning (SVDML), which consists of three modules: class representation learning, attribute embedding, and mutual learning, to establish the intrinsic semantic relations between visual features and attribute features. SVDML uses two kinds of prototype generators to separately guide the learning of global and local image features, and couples the two learning pipelines through mutual learning, which promotes the recognition of fine-grained features and strengthens knowledge generalization in zero-shot learning. The proposed SVDML yields significant improvements over strong baselines, achieving new state-of-the-art performance on three popular and challenging benchmarks.
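The abstract's core mechanism, mutual learning between two pipelines, typically means each branch is trained with its own supervised loss plus a KL term pulling it toward the peer branch's predictions. SVDML's exact objective is not given in this record; the following is a minimal generic sketch of that idea, with all function names hypothetical.

```python
import numpy as np

def softmax(logits):
    """Row-wise softmax with the usual max-shift for numerical stability."""
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def kl_divergence(p, q):
    """Per-sample KL(p || q) between two categorical distributions."""
    return np.sum(p * (np.log(p) - np.log(q)), axis=-1)

def mutual_learning_losses(logits_a, logits_b, labels):
    """Each pipeline's loss = its own cross-entropy + KL toward the peer.

    This is the generic deep-mutual-learning form; how SVDML weights or
    instantiates these terms is not specified in the abstract.
    """
    p_a, p_b = softmax(logits_a), softmax(logits_b)
    idx = np.arange(len(labels))
    ce_a = -np.log(p_a[idx, labels]).mean()
    ce_b = -np.log(p_b[idx, labels]).mean()
    loss_a = ce_a + kl_divergence(p_b, p_a).mean()  # pipeline A mimics B
    loss_b = ce_b + kl_divergence(p_a, p_b).mean()  # pipeline B mimics A
    return loss_a, loss_b
```

When the two pipelines agree exactly, the KL terms vanish and each loss reduces to plain cross-entropy; the mutual term only contributes where the global-feature and local-feature branches disagree, which is precisely where each can teach the other.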
Pages: 383-395
Page count: 13