Delaunay Triangulation Data Augmentation guided by Visual Analytics for Deep Learning

Cited by: 5
Authors
Peixinho, Alan Z. [1 ]
Benato, Barbara C. [1 ]
Nonato, Luis G. [2 ]
Falcao, Alexandre X. [1 ]
Affiliations
[1] Univ Estadual Campinas, Inst Comp, Campinas, SP, Brazil
[2] Univ Sao Paulo, Inst Math & Comp Sci, Sao Carlos, SP, Brazil
Funding
Sao Paulo Research Foundation (FAPESP);
DOI
10.1109/SIBGRAPI.2018.00056
CLC classification
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
It is well known that image classification problems can be effectively solved by Convolutional Neural Networks (CNNs). However, the number of supervised training examples from all categories must be high enough to avoid model over-fitting. In this case, two key alternatives are usually presented: (a) the generation of artificial examples, known as data augmentation, and (b) reusing a CNN previously trained over a large supervised training set from another image classification problem - a strategy known as transfer learning. Deep learning approaches have rarely exploited the superior ability of humans for cognitive tasks during the machine learning loop. We advocate that expert intervention through visual analytics can improve machine learning. In this work, we demonstrate this claim by proposing a data augmentation framework based on Encoder-Decoder Neural Networks (EDNNs) and visual analytics for the design of more effective CNN-based image classifiers. An EDNN is initially trained such that its encoder extracts a feature vector from each training image. These samples are projected from the encoder feature space onto a 2D coordinate space. The expert adds points to the projection space, and the feature vectors of the new samples are obtained in the original feature space by interpolation. The decoder generates artificial images from the feature vectors of the new samples, and the augmented training set is used to improve the CNN-based classifier. We evaluate methods for the proposed framework and demonstrate its advantages using data from a real problem as a case study - the diagnosis of helminth eggs in humans. We also show that transfer learning and data augmentation by affine transformations can further improve the results.
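The augmentation step described in the abstract - the expert places a point in the 2D projection, and a feature vector in the original encoder space is recovered by interpolation - can be illustrated with a minimal sketch. The inverse-distance weighting over nearest projected neighbors used here is an illustrative assumption, not necessarily the interpolation scheme of the paper; `augment_feature` and all parameter names are hypothetical.

```python
import numpy as np

def augment_feature(features, coords2d, new_point, k=3):
    """Interpolate a high-dimensional feature vector for a point placed
    in the 2D projection space.

    features  : (n, d) array of encoder feature vectors
    coords2d  : (n, 2) array of their 2D projection coordinates
    new_point : (2,)   expert-placed point in the projection space
    k         : number of nearest projected samples to interpolate over
    """
    # Distances from the new point to every projected training sample.
    dist = np.linalg.norm(coords2d - new_point, axis=1)
    nearest = np.argsort(dist)[:k]
    # Inverse-distance weights (epsilon avoids division by zero when
    # the new point coincides with a training sample).
    w = 1.0 / (dist[nearest] + 1e-12)
    w /= w.sum()
    # Weighted average back in the original feature space; the result
    # would then be passed through the decoder to synthesize an image.
    return w @ features[nearest]
```

In the framework described above, the returned vector would feed the EDNN decoder to generate the artificial training image.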
Pages: 384-391
Page count: 8
Related papers
(50 total)
  • [1] Crystallization Learning with the Delaunay Triangulation
    Gu, Jiaqi
    Yin, Guosheng
    [J]. INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 139, 2021, 139
  • [2] Visual servoing with deep learning and data augmentation for robotic manipulation
    Liu, Jingshu
    Li, Yuan
    [J]. Journal of Advanced Computational Intelligence and Intelligent Informatics, 2020, 24 (07): : 953 - 962
  • [3] Domain-guided data augmentation for deep learning on medical imaging
    Athalye, Chinmayee
    Arnaout, Rima
    [J]. PLOS ONE, 2023, 18 (03):
  • [4] Visual Analytics for Explainable Deep Learning
    Choo, Jaegul
    Liu, Shixia
    [J]. IEEE COMPUTER GRAPHICS AND APPLICATIONS, 2018, 38 (04) : 84 - 92
  • [5] Data Augmentation by Guided Deep Interpolation
    Szlobodnyik, Gergely
    Farkas, Lorant
    [J]. APPLIED SOFT COMPUTING, 2021, 111
  • [6] Delaunay triangulation programs on surface data
    Choi, S
    Amenta, N
    [J]. PROCEEDINGS OF THE THIRTEENTH ANNUAL ACM-SIAM SYMPOSIUM ON DISCRETE ALGORITHMS, 2002, : 135 - 136
  • [7] REPRESENTING STEREO DATA WITH THE DELAUNAY TRIANGULATION
    FAUGERAS, OD
    LEBRASMEHLMAN, E
    BOISSONNAT, JD
    [J]. ARTIFICIAL INTELLIGENCE, 1990, 44 (1-2) : 41 - 87
  • [8] d-Simplexed: Adaptive Delaunay Triangulation for Performance Modeling and Prediction on Big Data Analytics
    Chen, Yuxing
    Goetsch, Peter
    Hoque, Mohammad A.
    Lu, Jiaheng
    Tarkoma, Sasu
    [J]. IEEE TRANSACTIONS ON BIG DATA, 2022, 8 (02) : 458 - 469
  • [10] Deep learning ensemble with data augmentation using a transcoder in visual description
    Lee, Jin Young
    [J]. MULTIMEDIA TOOLS AND APPLICATIONS, 2019, 78 (22) : 31231 - 31243