Unsupervised Test-Time Adaptation of Deep Neural Networks at the Edge: A Case Study

Cited: 0
Authors
Bhardwaj, Kshitij [1 ]
Diffenderfer, James [1 ]
Kailkhura, Bhavya [1 ]
Gokhale, Maya [1 ]
Affiliations
[1] Lawrence Livermore Natl Lab, Livermore, CA 94550 USA
Keywords
Robust deep learning; on-device neural network adaptation; unsupervised adaptation; edge devices;
DOI
Not available
Chinese Library Classification (CLC)
TP [Automation and Computer Technology]
Discipline Code
0812
Abstract
Deep learning is increasingly used in mobile and edge autonomous systems. The prediction accuracy of deep neural networks (DNNs), however, can degrade after deployment when the model encounters data samples whose distributions differ from those of the training samples. To continue to predict robustly, DNNs must be able to adapt themselves post-deployment. Such adaptation at the edge is challenging because new labeled data may not be available and the adaptation must be performed on a resource-constrained device. This paper performs a case study to evaluate the cost of fully unsupervised test-time adaptation strategies on a real-world edge platform: the Nvidia Jetson Xavier NX. In particular, we adapt pretrained state-of-the-art robust DNNs (trained using data augmentation) to improve accuracy on image classification data containing various image corruptions. During this prediction-time on-device adaptation, the model parameters of a DNN are updated using a single backpropagation pass while optimizing an entropy loss. The effects of the following three simple model updates are compared in terms of accuracy, adaptation time, and energy: updating only convolutional parameters (Conv-Tune), only fully-connected parameters (FC-Tune), and only batch-norm parameters (BN-Tune). Our study shows that BN-Tune and Conv-Tune are more effective than FC-Tune at improving accuracy on corrupted image data (average gains of 6.6%, 4.97%, and 4.02%, respectively, over no adaptation). However, FC-Tune leads to a significantly faster and more energy-efficient solution with only a small loss in accuracy. Even with FC-Tune, the extra overhead of on-device fine-tuning (209 ms) is significant when tight real-time deadlines must be met. This study motivates the need for hardware-aware robust algorithms that enable efficient on-device adaptation at the autonomous edge.
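The BN-Tune strategy described in the abstract — a single backpropagation pass minimizing prediction entropy on unlabeled test data, updating only batch-norm parameters — can be sketched as follows. This is a minimal illustrative sketch in PyTorch, not the paper's implementation; the function names (`entropy_loss`, `bn_tune_step`) and the learning rate are assumptions for the example.

```python
# Sketch of single-pass, unsupervised test-time adaptation (BN-Tune style):
# minimize the entropy of the model's own predictions on an unlabeled test
# batch, updating only the batch-norm affine parameters.
import torch
import torch.nn as nn


def entropy_loss(logits):
    # Mean Shannon entropy of the softmax predictions; requires no labels.
    probs = logits.softmax(dim=1)
    return -(probs * probs.log().clamp(min=-30.0)).sum(dim=1).mean()


def collect_bn_params(model):
    # Gather only batch-norm scale/shift parameters; all other weights stay frozen.
    params = []
    for m in model.modules():
        if isinstance(m, nn.BatchNorm2d):
            m.track_running_stats = False  # use test-batch statistics
            if m.weight is not None:
                params += [m.weight, m.bias]
    return params


def bn_tune_step(model, batch, lr=1e-3):
    # One forward pass, one entropy-loss backward pass, one optimizer step.
    opt = torch.optim.SGD(collect_bn_params(model), lr=lr)
    logits = model(batch)
    loss = entropy_loss(logits)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return logits.detach()


if __name__ == "__main__":
    model = nn.Sequential(
        nn.Conv2d(3, 4, 3), nn.BatchNorm2d(4), nn.ReLU(),
        nn.Flatten(), nn.Linear(4 * 6 * 6, 10),
    )
    x = torch.randn(8, 3, 8, 8)  # stand-in for a corrupted test batch
    w_before = model[1].weight.detach().clone()
    logits = bn_tune_step(model, x, lr=0.1)
    print(logits.shape, torch.equal(w_before, model[1].weight))
```

Conv-Tune and FC-Tune follow the same pattern, passing the convolutional or fully-connected parameters to the optimizer instead; the cost trade-off the paper measures comes from how many parameters each variant must touch during the backward pass.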
Pages: 412-417 (6 pages)