Gaussian Process Latent Random Field

Cited by: 0
Authors
Zhong, Guoqiang [1 ]
Li, Wu-Jun [2 ]
Yeung, Dit-Yan [2 ]
Hou, Xinwen [1 ]
Liu, Cheng-Lin [1 ]
Affiliations
[1] Chinese Acad Sci, Inst Automat, Natl Lab Pattern Recognit NLPR, Beijing 100190, Peoples R China
[2] Hong Kong Univ Sci & Technol, Dept Comp Sci & Engn, Kowloon, Hong Kong, Peoples R China
Keywords
DOI
(none)
CLC classification
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The Gaussian process latent variable model (GPLVM) is an unsupervised probabilistic model for nonlinear dimensionality reduction. A supervised extension, called discriminative GPLVM (DGPLVM), incorporates supervisory information into GPLVM to enhance classification performance. However, it restricts the latent space dimensionality to at most C - 1 (where C is the number of classes), which leads to unsatisfactory performance when the intrinsic dimensionality of the application exceeds C - 1. In this paper, we propose a novel supervised extension of GPLVM, called Gaussian process latent random field (GPLRF), which enforces the latent variables to be a Gaussian Markov random field with respect to a graph constructed from the supervisory information. In GPLRF, the dimensionality of the latent space is no longer restricted to at most C - 1, making GPLRF much more flexible than DGPLVM in applications. Experiments conducted on both synthetic and real-world data sets demonstrate that GPLRF performs comparably with DGPLVM and other state-of-the-art methods on data sets with intrinsic dimensionality at most C - 1, and dramatically outperforms DGPLVM on data sets whose intrinsic dimensionality exceeds C - 1.
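The central construction in the abstract, a Gaussian Markov random field prior over latent coordinates defined on a graph built from class labels, can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the same-class adjacency graph, the precision scale `beta`, and the function names are assumptions chosen for clarity.

```python
import numpy as np

def class_graph_laplacian(labels):
    """Build the graph Laplacian of a same-class adjacency graph.

    Nodes are data points; here an edge connects two points iff they
    share a class label (an illustrative choice; the paper's graph
    construction from supervisory information may differ)."""
    labels = np.asarray(labels)
    W = (labels[:, None] == labels[None, :]).astype(float)
    np.fill_diagonal(W, 0.0)          # no self-loops
    D = np.diag(W.sum(axis=1))        # degree matrix
    return D - W                      # Laplacian L = D - W

def gmrf_log_prior(X, L, beta=1.0):
    """Unnormalized GMRF log-prior over latent coordinates X (n x q):
    log p(X) = -(beta/2) * tr(X^T L X) + const.
    Since tr(X^T L X) = (1/2) * sum_ij W_ij ||x_i - x_j||^2, the prior
    favors latent layouts where connected (same-class) points lie close."""
    return -0.5 * beta * np.trace(X.T @ L @ X)

labels = [0, 0, 1, 1]
L = class_graph_laplacian(labels)
# A latent layout that keeps classes clustered gets a higher log-prior
X_good = np.array([[0.0], [0.1], [5.0], [5.1]])
X_bad  = np.array([[0.0], [5.0], [0.1], [5.1]])
assert gmrf_log_prior(X_good, L) > gmrf_log_prior(X_bad, L)
```

Note that the Laplacian's rank is at most n - C (one zero eigenvalue per connected component), so the prior constrains relative placement of points without fixing the latent dimensionality q, which is the flexibility the abstract contrasts with DGPLVM's C - 1 limit.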
Pages: 679-684
Page count: 6