Medical image classification via multiscale representation learning

Cited by: 22
Authors
Tang, Qiling [1 ]
Liu, Yangyang [1 ]
Liu, Haihua [2 ]
Affiliations
[1] South Cent Univ Nationalities, Coll Biomed Engn, Wuhan 430074, Hubei, Peoples R China
[2] Hubei Key Lab Med Informat Anal & Tumor Treatment, Wuhan 430074, Hubei, Peoples R China
Keywords
Multiscale feature learning; Sparse autoencoder; Fisher vector; Image classification; ANNOTATION; DIAGNOSIS;
DOI
10.1016/j.artmed.2017.06.009
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Multiscale structure is an essential attribute of natural images. Similar scaling phenomena exist in medical images, so a wide range of observation scales is useful for medical imaging measurements. The present work proposes a multiscale representation learning method based on sparse autoencoder networks to capture the intrinsic scales in medical images for the classification task. We obtain multiscale feature detectors using sparse autoencoders with different receptive field sizes, and then generate feature maps by convolution. This strategy characterizes structures of various sizes in medical images better than a single-scale version. Subsequently, the Fisher vector technique is used to encode the extracted features into a fixed-length image representation, which provides richer high-order statistical information and enhances the descriptiveness and discriminative ability of the feature representation. We carry out experiments on the IRMA-2009 medical collection and a mammographic patch dataset. Extensive experimental results demonstrate that the proposed method has superior performance. (C) 2017 Elsevier B.V. All rights reserved.
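The encoding step described above can be illustrated with a minimal sketch of Fisher vector encoding. This is not the authors' implementation: it uses scikit-learn's `GaussianMixture` as the visual vocabulary, assumes diagonal covariances, and applies the common power- and L2-normalization; in the paper, the local descriptors would come from the multiscale autoencoder feature maps.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fisher_vector(descriptors, gmm):
    """Encode a set of local descriptors (N x D) into a fixed-length
    Fisher vector via gradients w.r.t. the GMM means and std devs."""
    N, D = descriptors.shape
    K = gmm.n_components
    q = gmm.predict_proba(descriptors)           # N x K soft assignments
    mu = gmm.means_                              # K x D
    sigma = np.sqrt(gmm.covariances_)            # K x D (diagonal GMM)
    w = gmm.weights_                             # K mixture weights
    parts = []
    for k in range(K):
        diff = (descriptors - mu[k]) / sigma[k]  # normalized residuals
        qk = q[:, k][:, None]
        # gradient w.r.t. the k-th mean
        g_mu = (qk * diff).sum(axis=0) / (N * np.sqrt(w[k]))
        # gradient w.r.t. the k-th standard deviation
        g_sig = (qk * (diff ** 2 - 1)).sum(axis=0) / (N * np.sqrt(2 * w[k]))
        parts.extend([g_mu, g_sig])
    fv = np.concatenate(parts)                   # length 2 * K * D
    # power normalization followed by L2 normalization (standard for FV)
    fv = np.sign(fv) * np.sqrt(np.abs(fv))
    return fv / (np.linalg.norm(fv) + 1e-12)
```

The GMM would be fit once on descriptors pooled from training images; every image, regardless of how many local descriptors it yields, is then mapped to the same 2·K·D-dimensional vector, which is what makes the representation usable with a linear classifier.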
Pages: 71-78
Page count: 8
Related papers
50 records in total
  • [1] Multiscale Representation Learning for Image Classification: A Survey
    Jiao L.
    Gao J.
    Liu X.
    Liu F.
    Yang S.
    Hou B.
    IEEE Transactions on Artificial Intelligence, 2023, 4 (01): 23 - 43
  • [2] Sparse representation for image classification via paired dictionary learning
    Wang, Hui-Hung
    Tu, Chia-Wei
    Chiang, Chen-Kuo
    MULTIMEDIA TOOLS AND APPLICATIONS, 2019, 78 (12) : 16945 - 16963
  • [4] Hyperspectral Image Classification via Multiscale Joint Collaborative Representation With Locally Adaptive Dictionary
    Yang, Jinghui
    Qian, Jinxi
    IEEE GEOSCIENCE AND REMOTE SENSING LETTERS, 2018, 15 (01) : 112 - 116
  • [5] Spectral-Spatial Hyperspectral Image Classification via Multiscale Adaptive Sparse Representation
    Fang, Leyuan
    Li, Shutao
    Kang, Xudong
    Benediktsson, Jon Atli
    IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2014, 52 (12) : 7738 - 7749
  • [6] Looking Closer at the Scene: Multiscale Representation Learning for Remote Sensing Image Scene Classification
    Wang, Qi
    Huang, Wei
    Xiong, Zhitong
    Li, Xuelong
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2022, 33 (04) : 1414 - 1428
  • [7] MsVRL: Self-Supervised Multiscale Visual Representation Learning via Cross-Level Consistency for Medical Image Segmentation
    Zheng, Ruifeng
    Zhong, Ying
    Yan, Senxiang
    Sun, Hongcheng
    Shen, Haibin
    Huang, Kejie
    IEEE TRANSACTIONS ON MEDICAL IMAGING, 2023, 42 (01) : 91 - 102
  • [8] Discriminant sub-dictionary learning with adaptive multiscale superpixel representation for hyperspectral image classification
    Tu, Xiao
    Shen, Xiaobo
    Fu, Peng
    Wang, Tao
    Sun, Quansen
    Ji, Zexuan
    NEUROCOMPUTING, 2020, 409 : 131 - 145
  • [9] Learning discriminative representation for image classification
    Peng, Chong
    Liu, Yang
    Zhang, Xin
    Kang, Zhao
    Chen, Yongyong
    Chen, Chenglizhao
    Cheng, Qiang
    KNOWLEDGE-BASED SYSTEMS, 2021, 233
  • [10] Multiscale Emotion Representation Learning for Affective Image Recognition
    Zhang, Haimin
    Xu, Min
    IEEE TRANSACTIONS ON MULTIMEDIA, 2023, 25 : 2203 - 2212