A New Method for Extracting Illumination Invariant Features and Its Application in Target Recognition

Cited by: 0
Authors
Li B.-Q. [1 ]
He Y.-Y. [1 ]
Chen L.-Z. [1 ]
Affiliation
[1] School of Marine Science and Technology, Northwestern Polytechnical University, Xi'an 710072, Shaanxi, China
Source
Li, Bao-Qi (bqli@mail.nwpu.edu.cn) | 2018 / Chinese Institute of Electronics / Vol. 46
Keywords
Dual Lenet; Ground target recognition; Illumination invariant features; MLNSCT (Multiple LNSCT); NSCT (Nonsubsampled Contourlet Transform)
DOI
10.3969/j.issn.0372-2112.2018.04.018
Abstract
To address the problem that LNSCT loses target contour information by discarding the low-frequency components of an image, a new illumination-invariant feature extraction method, called MLNSCT, is proposed. First, NSCT is applied to the input image in the logarithm domain to separate its low-frequency and high-frequency components. Second, a BayesShrink threshold filter is applied to the high-frequency sub-band coefficients, and the inverse NSCT of the low-frequency component yields a feature image. Third, a second NSCT decomposition of the feature image, threshold filtering of its high-frequency sub-bands, and the inverse NSCT of its low-frequency component are performed in sequence. After multiple NSCT decompositions, the illumination-invariant features of the input image are extracted from the set of all high-frequency sub-band coefficients. Building on a further study of the relationship between the illumination-invariant features and the raw image, Dual Lenet, a parallel synchronous convolutional neural network, is designed to improve the accuracy of ground target recognition by fusing the high-level features of both. Experimental results show that MLNSCT achieves higher classification accuracy than LNSCT under the Lenet model, and that accuracy increases with the number of decompositions. Furthermore, fusing the illumination-invariant features with the raw image is shown to effectively improve the classification accuracy of ground target recognition. © 2018, Chinese Institute of Electronics. All rights reserved.
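The iterated decompose-threshold-reconstruct loop described in the abstract can be summarized in code. The sketch below is a minimal illustration rather than the paper's implementation: since no NSCT library is assumed here, a 2-D wavelet decomposition from PyWavelets stands in for NSCT, and the names `mlnsct_features` and `bayes_shrink` are hypothetical.

```python
import numpy as np
import pywt

def bayes_shrink(coeffs, sigma_noise):
    """Soft-threshold one high-frequency sub-band with the BayesShrink rule."""
    sigma_y2 = np.mean(coeffs ** 2)                        # sub-band variance
    sigma_x = np.sqrt(max(sigma_y2 - sigma_noise ** 2, 1e-12))
    t = sigma_noise ** 2 / sigma_x                         # BayesShrink threshold
    return pywt.threshold(coeffs, t, mode='soft')

def mlnsct_features(img, n_iter=3, wavelet='db2', level=2):
    """Iterate: decompose in the log domain, keep the denoised high-frequency
    sub-bands as illumination-invariant features, recurse on the low band.
    Wavelets stand in for NSCT; parameters are illustrative assumptions."""
    low = np.log1p(img.astype(np.float64))                 # logarithm domain
    invariant_bands = []
    for _ in range(n_iter):
        coeffs = pywt.wavedec2(low, wavelet, level=level)
        # Noise scale from the finest diagonal sub-band (median/0.6745 rule).
        sigma_n = np.median(np.abs(coeffs[-1][-1])) / 0.6745
        high = [tuple(bayes_shrink(band, sigma_n) for band in lvl)
                for lvl in coeffs[1:]]
        invariant_bands.extend(high)                       # feature coefficients
        # The "feature image" is the inverse transform of the low band alone;
        # the next iteration decomposes it again.
        low = pywt.waverec2(
            [coeffs[0]] + [tuple(np.zeros_like(band) for band in lvl)
                           for lvl in coeffs[1:]],
            wavelet)
    return invariant_bands, low
```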
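Likewise, a minimal PyTorch sketch of a Dual Lenet-style parallel network is given below; the layer sizes are LeNet-like assumptions, as this record does not state the paper's exact configuration.

```python
import torch
import torch.nn as nn

class LenetBranch(nn.Module):
    """One LeNet-like convolutional branch (illustrative sizes)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(6, 16, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
        )

    def forward(self, x):                    # x: (N, 1, 32, 32)
        return self.features(x).flatten(1)   # (N, 16 * 5 * 5)

class DualLenet(nn.Module):
    """Two synchronous branches whose high-level features are fused."""
    def __init__(self, n_classes=10):
        super().__init__()
        self.raw_branch = LenetBranch()      # raw image
        self.inv_branch = LenetBranch()      # illumination-invariant features
        self.classifier = nn.Sequential(     # classifier over fused features
            nn.Linear(2 * 16 * 5 * 5, 120), nn.ReLU(),
            nn.Linear(120, 84), nn.ReLU(),
            nn.Linear(84, n_classes),
        )

    def forward(self, raw, inv):
        fused = torch.cat([self.raw_branch(raw), self.inv_branch(inv)], dim=1)
        return self.classifier(fused)

# usage: logits = DualLenet()(raw_batch, invariant_batch)
```

Concatenating the flattened convolutional outputs of the two parallel branches implements the high-level feature fusion described in the abstract.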
Pages: 895-902
Page count: 8