Saliency Detection Using Global and Local Information Under Multilayer Cellular Automata

Cited by: 3
Authors
Liu, Yihang [1 ]
Yuan, Peiyan [1 ]
Affiliations
[1] Henan Normal Univ, Coll Comp & Informat Engn, Xinxiang 453007, Henan, Peoples R China
Source
IEEE ACCESS | 2019, Vol. 7
Funding
National Natural Science Foundation of China;
Keywords
Saliency detection; global and local maps; multilayer cellular automata; CNN-based encoder-decoder model; sparse coding; OBJECT DETECTION;
DOI
10.1109/ACCESS.2019.2915261
CLC number
TP [automation technology, computer technology];
Subject classification code
0812;
Abstract
To detect salient objects in natural images with low contrast and complex backgrounds, a saliency detection method that fuses global and local information under multilayer cellular automata is proposed. First, a global saliency map is obtained from an iteratively trained convolutional neural network (CNN)-based encoder-decoder model; skip connections and an edge penalty term are added to the network to propagate high-level information to the lower layers and to further reinforce object edges. Second, foreground and background codebooks are generated from the global saliency map, and sparse codes are computed with the locality-constrained linear coding model, yielding a local saliency map. Finally, the final saliency map is obtained by fusing the global and local saliency maps under the multilayer cellular automata framework. The experimental results show that the average F-measure of our method on the MSRA 10K, ECSSD, DUT-OMRON, HKU-IS, THUR 15K, and XPIE datasets is 93.4%, 89.5%, 79.4%, 88.7%, 73.6%, and 85.2%, respectively, and the corresponding MAE is 0.046, 0.067, 0.054, 0.044, 0.072, and 0.049. These findings demonstrate that our method achieves both high saliency detection accuracy and strong generalization ability; in particular, it can effectively detect salient objects in natural images with low contrast and complex backgrounds.
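The final fusion step follows the multilayer cellular automata (MCA) scheme introduced by Qin et al. (CVPR 2015, listed among the related papers below): each saliency map is treated as a cellular-automaton layer whose cells update their log-odds according to the binarized states of the other layers. The sketch below is illustrative only: the function name `mca_fuse`, the fixed agreement probability `prob`, the iteration count, and the use of a per-map mean threshold (in place of the Otsu threshold used in the original paper) are all assumptions, not the authors' exact implementation.

```python
import numpy as np

def mca_fuse(maps, prob=0.8, n_iters=5):
    """Fuse several saliency maps (2-D arrays in [0, 1]) with a
    multilayer-cellular-automata-style synchronous update.

    `prob` is the assumed probability that a cell is salient given
    that the corresponding cell in another layer is foreground.
    """
    eps = 1e-6
    S = [np.clip(np.asarray(m, dtype=float), eps, 1 - eps) for m in maps]
    lam = np.log(prob / (1.0 - prob))  # log-odds increment per agreeing layer
    for _ in range(n_iters):
        # Binarize each layer with its mean (the paper uses Otsu;
        # the mean keeps this sketch dependency-free).
        thr = [m.mean() for m in S]
        logits = [np.log(m / (1.0 - m)) for m in S]
        new_S = []
        for i in range(len(S)):
            l = logits[i].copy()
            for j in range(len(S)):
                if j != i:
                    # Foreground cells in layer j push layer i up,
                    # background cells push it down.
                    l += np.where(S[j] >= thr[j], lam, -lam)
            new_S.append(np.clip(1.0 / (1.0 + np.exp(-l)), eps, 1 - eps))
        S = new_S
    return np.mean(S, axis=0)  # final fused saliency map
```

Because the update is multiplicative in the odds, regions on which the global and local maps agree are driven toward 0 or 1, while disagreements are settled by the accumulated evidence over the iterations.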
Pages: 72736-72748
Number of pages: 13
Related Papers
50 records in total
  • [1] Saliency detection integrating global and local information
    Zhang, Ming
    Wu, Yunhe
    Du, Yue
    Fang, Lei
    Pang, Yu
    [J]. JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION, 2018, 53 : 215 - 223
  • [2] Saliency Detection via Cellular Automata
    Qin, Yao
    Lu, Huchuan
    Xu, Yiqun
    Wang, He
    [J]. 2015 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2015, : 110 - 119
  • [4] Saliency Detection via Global-Object-Seed-Guided Cellular Automata
    Liu, Hong
    Tao, Shuning
    Li, Zheyuan
    [J]. 2016 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2016, : 2772 - 2776
  • [5] Salient Region Detection Using Local and Global Saliency
    Cheung, Yiu-ming
    Peng, Qinmu
    [J]. 2012 21ST INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR 2012), 2012, : 210 - 213
  • [6] Cellular Automata Based on Occlusion Relationship for Saliency Detection
    Sheng, Hao
    Feng, Weichao
    Zhang, Shuo
    [J]. KNOWLEDGE SCIENCE, ENGINEERING AND MANAGEMENT, KSEM 2016, 2016, 9983 : 28 - 39
  • [7] An Image Saliency Detection Method Based on Combining Global and Local Information
    Yang, Hangxu
    Gong, Yongjian
    Wang, Kai
    [J]. MATHEMATICAL PROBLEMS IN ENGINEERING, 2022, 2022
  • [8] Saliency detection using suitable variant of local and global consistency
    Chen, Jiazhong
    Chen, Jie
    Cao, Hua
    Li, Rong
    Xia, Tao
    Ling, Hefei
    Chen, Yang
    [J]. IET COMPUTER VISION, 2017, 11 (06) : 479 - 487
  • [9] Visualized Multiple Image Selection Encryption Based on Log Chaos System and Multilayer Cellular Automata Saliency Detection
    Su, Yining
    Teng, Lin
    Liu, Pengbo
    Unar, Salahuddin
    Wang, Xingyuan
    Fu, Xianping
    [J]. IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2023, 33 (09) : 4689 - 4702
  • [10] Neighborhood detection using mutual information for the identification of cellular automata
    Zhao, Y.
    Billings, S. A.
    [J]. IEEE TRANSACTIONS ON SYSTEMS MAN AND CYBERNETICS PART B-CYBERNETICS, 2006, 36 (02): : 473 - 479