Combining Graph Transformers Based Multi-Label Active Learning and Informative Data Augmentation for Chest Xray Classification

Cited by: 0
Authors
Mahapatra, Dwarikanath [1 ]
Bozorgtabar, Behzad [2 ]
Ge, Zongyuan [3 ]
Reyes, Mauricio [4 ]
Thiran, Jean-Philippe [2 ]
Affiliations
[1] Inception Institute of Artificial Intelligence, Abu Dhabi, United Arab Emirates
[2] École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
[3] Monash University, Clayton, Australia
[4] University of Bern, Bern, Switzerland
Keywords
DOI
Not available
CLC Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Informative sample selection in active learning (AL) helps a machine learning system attain optimal performance with a minimum of labeled samples, thus improving human-in-the-loop computer-aided diagnosis systems that have limited labeled data. Data augmentation is highly effective for enlarging datasets when labeled data are scarce. Combining informative sample selection with data augmentation should leverage their respective advantages and improve the performance of AL systems. We propose a novel approach that combines informative sample selection and data augmentation for multi-label active learning. Conventional informative sample selection approaches have mostly focused on the single-label case and do not perform optimally in the multi-label setting. We improve upon state-of-the-art multi-label active learning techniques by representing disease labels as graph nodes and using graph attention transformers (GAT) to learn more effective inter-label relationships and identify the most informative samples. We then generate transformations of these informative samples that are themselves informative. Experiments on public chest X-ray datasets show improved results over state-of-the-art multi-label AL techniques in terms of classification performance, learning rates, and robustness. We also perform qualitative analysis to assess the realism of the generated images.
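The abstract's central idea is to represent disease labels as graph nodes and apply attention over the label graph to model inter-label relationships. The following is a minimal illustrative sketch of a single-head graph-attention update over label-node embeddings, not the authors' implementation: the number of labels, embedding dimension, fully connected label graph, and random weights are all assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

num_labels, dim = 5, 8                        # e.g. 5 disease labels, 8-dim embeddings
H = rng.normal(size=(num_labels, dim))        # label-node features
A = np.ones((num_labels, num_labels))         # assumed fully connected label graph

W = rng.normal(size=(dim, dim))               # shared linear projection
a = rng.normal(size=(2 * dim,))               # attention parameter vector

Z = H @ W
# pairwise attention logits: e_ij = LeakyReLU(a^T [z_i || z_j])
e = np.array([[np.concatenate([Z[i], Z[j]]) @ a for j in range(num_labels)]
              for i in range(num_labels)])
e = np.where(e > 0, e, 0.2 * e)               # LeakyReLU with slope 0.2
e = np.where(A > 0, e, -np.inf)               # mask out non-edges

# row-wise softmax to get attention coefficients alpha_ij
alpha = np.exp(e - e.max(axis=1, keepdims=True))
alpha /= alpha.sum(axis=1, keepdims=True)

H_out = alpha @ Z                             # updated label embeddings, shape (5, 8)
```

Each label's updated embedding is an attention-weighted mixture of its neighbors' projections, which is how inter-label relationships (e.g. co-occurring chest pathologies) can be learned from data.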
Pages: 21378-21386
Page count: 9
Related Papers
50 records in total
  • [1] GANDALF: Graph-based transformer and Data Augmentation Active Learning Framework with interpretable features for multi-label chest Xray classification
    Mahapatra, Dwarikanath
    Bozorgtabar, Behzad
    Ge, Zongyuan
    Reyes, Mauricio
    MEDICAL IMAGE ANALYSIS, 2024, 93
  • [2] Active Learning in Multi-label Classification of Bioacoustic Data
    Kath, Hannes
    Gouvea, Thiago S.
    Sonntag, Daniel
    KI 2024: ADVANCES IN ARTIFICIAL INTELLIGENCE, KI 2024, 2024, 14992 : 114 - 127
  • [3] On active learning in multi-label classification
    Brinker, K
    FROM DATA AND INFORMATION ANALYSIS TO KNOWLEDGE ENGINEERING, 2006, : 206 - 213
  • [4] Active learning in multi-label image classification with graph convolutional network embedding
    Xie, Xiurui
    Tian, Maojun
    Luo, Guangchun
    Liu, Guisong
    Wu, Yizhe
    Qin, Ke
    FUTURE GENERATION COMPUTER SYSTEMS-THE INTERNATIONAL JOURNAL OF ESCIENCE, 2023, 148 : 56 - 65
  • [5] Active learning for hierarchical multi-label classification
    Nakano, Felipe Kenji
    Cerri, Ricardo
    Vens, Celine
    DATA MINING AND KNOWLEDGE DISCOVERY, 2020, 34 (05) : 1496 - 1530
  • [7] Multi-label Active Learning for Image Classification
    Wu, Jian
    Sheng, Victor S.
    Zhang, Jing
    Zhao, Pengpeng
    Cui, Zhiming
    2014 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2014, : 5227 - 5231
  • [8] Image emotion multi-label classification based on multi-graph learning
    Wang, Meixia
    Zhao, Yuhai
    Wang, Yejiang
    Xu, Tongze
    Sun, Yiming
    EXPERT SYSTEMS WITH APPLICATIONS, 2023, 231
  • [9] Multi-Label Active Learning with Label Correlation for Image Classification
    Ye, Chen
    Wu, Jian
    Sheng, Victor S.
    Zhao, Pengpeng
    Cui, Zhiming
    2015 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2015, : 3437 - 3441
  • [10] Active Learning Algorithms for Multi-label Data
    Cherman, Everton Alvares
    Tsoumakas, Grigorios
    Monard, Maria-Carolina
    ARTIFICIAL INTELLIGENCE APPLICATIONS AND INNOVATIONS, AIAI 2016, 2016, 475 : 267 - 279