Impact of Colour on Robustness of Deep Neural Networks

Cited by: 11
Authors
De, Kanjar [1 ,2 ]
Pedersen, Marius [2 ]
Affiliations
[1] Lulea Univ Technol, S-97187 Lulea, Sweden
[2] Norwegian Univ Sci & Technol, Teknol Veien 22, N-2802 Gjøvik, Norway
Keywords
DOI
10.1109/ICCVW54120.2021.00009
Chinese Library Classification: TP18 [Artificial Intelligence Theory]
Discipline codes: 081104; 0812; 0835; 1405
Abstract
Convolutional neural networks have become the most widely used tool for computer vision applications such as image classification, segmentation, and object localization. Recent studies have shown that image quality has a significant impact on the performance of these deep neural networks: accuracy on computer vision tasks is strongly influenced by shifts away from the distribution of images on which the networks were trained. Although the effects of perturbations such as image noise, blur, contrast changes, and compression artifacts on the image classification performance of deep neural networks have been studied, the effects of colour and of colour quality in digital images remain a mostly unexplored direction. One of the biggest challenges is that no dataset is dedicated to colour distortions and the colour aspects of images in image classification. The main aim of this paper is to study the impact of colour distortions on the performance of image classification with deep neural networks. Experiments performed with multiple state-of-the-art deep convolutional neural architectures on a proposed colour-distorted dataset are presented, and the impact of colour on the image classification task is demonstrated.
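The paper's distortion pipeline is not reproduced in this record, but the general idea of a parameterised colour distortion can be sketched as follows. This is a minimal illustration, not the authors' method: the `desaturate` function, its BT.601 luma weights, and the `strength` parameter are assumptions chosen to show one plausible family of colour perturbations (blending an image toward grayscale) that could be swept to measure classifier accuracy degradation.

```python
import numpy as np

def desaturate(image: np.ndarray, strength: float) -> np.ndarray:
    """Blend an RGB image toward its grayscale (luma) version.

    image:    float array in [0, 1] with shape (H, W, 3)
    strength: 0.0 returns the image unchanged, 1.0 returns full grayscale
    """
    # ITU-R BT.601 luma weights for the R, G, B channels
    luma = image @ np.array([0.299, 0.587, 0.114])
    gray = np.repeat(luma[..., None], 3, axis=-1)
    # Linear interpolation between the original colours and grayscale
    return (1.0 - strength) * image + strength * gray

# Example: a pure-red pixel progressively loses its saturation
img = np.zeros((1, 1, 3))
img[0, 0] = [1.0, 0.0, 0.0]
print(desaturate(img, 0.0)[0, 0])  # unchanged: [1.0, 0.0, 0.0]
print(desaturate(img, 1.0)[0, 0])  # fully gray: [0.299, 0.299, 0.299]
```

In an evaluation of the kind the abstract describes, one would apply such a distortion at several strengths to a test set and record the classification accuracy of each pretrained network at each strength.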
Pages: 21-30 (10 pages)