Multifocus Image Fusion Using a Sparse and Low-Rank Matrix Decomposition for Aviator's Night Vision Goggle

Cited by: 4
Authors
Jian, Bo-Lin [1 ]
Chu, Wen-Lin [2 ]
Li, Yu-Chung [3 ]
Yau, Her-Terng [1 ]
Affiliations
[1] Natl Chin Yi Univ Technol, Dept Elect Engn, Taichung 41170, Taiwan
[2] Natl Chin Yi Univ Technol, Dept Mech Engn, Taichung 44170, Taiwan
[3] Natl Cheng Kung Univ, Dept Mech Engn, Tainan 70101, Taiwan
Source
APPLIED SCIENCES-BASEL | 2020, Vol. 10, Issue 06
Keywords
autofocus; night vision goggles; image fusion; sparse and low-rank matrix decomposition; ROBUST PCA; FOCUS; REPRESENTATION; TRANSFORM; PHASE;
DOI
10.3390/app10062178
CLC number (Chinese Library Classification)
O6 [Chemistry];
Subject classification code
0703;
Abstract
This study applied sparse and low-rank matrix decomposition to the automated inspection of aviator's night vision goggles (NVG), where equipment availability must be verified. In the automated setup, a motor-driven mechanism turns the NVG focus knob while a camera captures image frames, so that focusing proceeds without operator intervention. Traditional passive autofocus first computes a sharpness score for each frame and then applies a search algorithm to locate the sharpest focus quickly. In this study, sparse and low-rank matrix decomposition was used both to compute the autofocus measure and to perform image fusion; fusion addresses the multifocus problem caused by mechanism errors. Experimental results showed that fusing the sharpest frame with a neighboring frame can compensate for minor errors introduced by the image-capture mechanism. Seven samples and 12 image-fusion indicators were used to compare the proposed method against fusion based on variance computed in the discrete cosine transform domain, both without and with consistency verification, and against structure-aware image fusion. The proposed method outperformed these alternatives, and the proposed autofocus measure was compared with the normalized gray-level variance sharpness measure reported in the literature to verify its accuracy.
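To make the sparse and low-rank idea concrete, the minimal Python/NumPy sketch below splits a stack of focus frames (stacked as matrix columns) into a low-rank part shared across frames and a sparse part of frame-specific detail, then scores each frame by the energy of its sparse component as a focus measure. This assumes a standard robust-PCA formulation solved by an inexact augmented Lagrange multiplier iteration; the function names (sparse_low_rank_split, focus_scores), the sparse-energy focus score, and all parameter choices are illustrative assumptions, not the authors' implementation.

import numpy as np

def shrink(X, tau):
    # Elementwise soft-thresholding (shrinkage) operator.
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svd_threshold(X, tau):
    # Singular value thresholding: shrink the singular values of X by tau.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * shrink(s, tau)) @ Vt

def sparse_low_rank_split(D, lam=None, tol=1e-7, max_iter=500):
    # Decompose D into low-rank L plus sparse S (robust PCA, inexact ALM).
    D = np.asarray(D, dtype=float)
    m, n = D.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    norm_D = np.linalg.norm(D, 'fro')
    mu = 1.25 / np.linalg.norm(D, 2)   # initial penalty weight
    rho = 1.5                          # penalty growth factor
    L = np.zeros_like(D)
    S = np.zeros_like(D)
    Y = np.zeros_like(D)               # Lagrange multipliers
    for _ in range(max_iter):
        L = svd_threshold(D - S + Y / mu, 1.0 / mu)
        S = shrink(D - L + Y / mu, lam / mu)
        R = D - L - S
        Y = Y + mu * R
        mu = mu * rho
        if np.linalg.norm(R, 'fro') / norm_D < tol:
            break
    return L, S

def focus_scores(frames):
    # Score each frame by the energy of its sparse component: the low-rank
    # part captures structure shared across the focus stack, while the sparse
    # part concentrates frame-specific detail, which is strongest near best
    # focus (a hypothetical focus measure for illustration only).
    D = np.stack([f.ravel() for f in frames], axis=1).astype(float)
    _, S = sparse_low_rank_split(D)
    return [float(np.sum(S[:, k] ** 2)) for k in range(D.shape[1])]

# Usage sketch: frames is a list of grayscale images from the motor-driven
# focus sweep; the sharpest frame is the one with the largest score.
# best_index = int(np.argmax(focus_scores(frames)))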
Pages: 19