Adversarial Robustness in Graph-Based Neural Architecture Search for Edge AI Transportation Systems

Cited by: 3
Authors
Xu, Peng [1 ]
Wang, Ke [2 ]
Hassan, Mohammad Mehedi [3 ]
Chen, Chien-Ming [4 ]
Lin, Weiguo [5 ]
Hassan, Md Rafiul [6 ]
Fortino, Giancarlo [7 ]
Affiliations
[1] Harbin Inst Technol Shenzhen, Dept Comp Sci, Shenzhen 518055, Peoples R China
[2] Jinan Univ, Coll Informat Sci & Technol, Guangzhou 510632, Peoples R China
[3] King Saud Univ, Coll Comp & Informat Sci, Informat Syst Dept, Riyadh 11543, Saudi Arabia
[4] Shandong Univ Sci & Technol, Coll Comp Sci & Engn, Qingdao 266590, Shandong, Peoples R China
[5] Commun Univ China, State Key Lab Media Convergence & Commun, Beijing 100024, Peoples R China
[6] Univ Maine Presque Isle, Coll Arts & Sci, Presque Isle, ME 04769 USA
[7] Univ Calabria, Dept Informat Modeling Elect & Syst, I-87036 Arcavacata Di Rende, Italy
Keywords
Robustness; Computational modeling; Data models; Mathematical models; Analytical models; Deep learning; Computer architecture; Adversarial robustness; adversarial example; model compression and neural architecture search
DOI
10.1109/TITS.2022.3197713
Chinese Library Classification
TU [Architecture Science]
Subject Classification Code
0813
Abstract
Edge AI technologies have been used in many Intelligent Transportation Systems, such as road traffic monitoring systems. Neural Architecture Search (NAS) is a typical way to search for high-performance models for edge devices with limited computing resources. However, NAS is also vulnerable to adversarial attacks. In this paper, a One-Shot NAS is employed to derive models of different scales. To study the relation between adversarial robustness and model scale, a graph-based method is designed to select the best sub-models generated by the One-Shot NAS. In addition, an evaluation method is proposed to assess the robustness of deep learning models across model scales. Experimental results show an interesting correlation between network size and model robustness: reducing model parameters increases robustness under maximum-strength adversarial attacks, while increasing model parameters increases robustness under minimum-strength adversarial attacks. This phenomenon is analyzed, which helps in understanding the adversarial robustness of models of different scales for edge AI transportation systems.
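As a rough illustration of the evaluation setting described in the abstract (not the authors' exact method), the sketch below samples sub-models of different widths from a toy one-shot-style supernet and measures their accuracy under FGSM attacks of different strengths. The SlimmableNet class, the width multipliers, the synthetic data, and the epsilon budgets are all illustrative assumptions.

```python
# Minimal sketch (PyTorch assumed): compare robust accuracy of sub-models of
# different scales under weak and strong FGSM attacks. All names and settings
# here are hypothetical, not the paper's actual search space or attack setup.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SlimmableNet(nn.Module):
    """Toy supernet: one conv backbone whose channel width can be scaled."""

    def __init__(self, width_mult: float = 1.0, num_classes: int = 10):
        super().__init__()
        c = max(8, int(64 * width_mult))  # scaled channel count
        self.features = nn.Sequential(
            nn.Conv2d(3, c, 3, padding=1), nn.ReLU(),
            nn.Conv2d(c, c, 3, padding=1, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(c, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))


def fgsm_attack(model, x, y, eps):
    """One-step FGSM: perturb x along the sign of the input gradient."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).clamp(0, 1).detach()


def robust_accuracy(model, x, y, eps):
    """Accuracy on adversarially perturbed inputs."""
    model.eval()
    x_adv = fgsm_attack(model, x, y, eps)
    with torch.no_grad():
        pred = model(x_adv).argmax(dim=1)
    return (pred == y).float().mean().item()


if __name__ == "__main__":
    # Synthetic batch stands in for traffic-monitoring images.
    x = torch.rand(32, 3, 32, 32)
    y = torch.randint(0, 10, (32,))
    for width in (0.25, 0.5, 1.0):          # sub-model scales (assumed)
        model = SlimmableNet(width_mult=width)
        for eps in (1 / 255, 8 / 255):      # "minimum" vs "maximum" budgets (assumed)
            acc = robust_accuracy(model, x, y, eps)
            print(f"width={width:.2f}  eps={eps:.4f}  robust_acc={acc:.3f}")
```

In a real study the sub-models would be trained (or inherit supernet weights) before evaluation; this sketch only shows the mechanics of scanning model scale against attack strength.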
Pages: 8465-8474
Number of pages: 10