Enhancing CAN security with ML-based IDS: Strategies and efficacies against adversarial attacks

Cited: 0
Authors
Lin, Ying-Dar [1 ]
Chan, Wei-Hsiang [1 ]
Lai, Yuan-Cheng [2 ]
Yu, Chia-Mu [3 ]
Wu, Yu-Sung [1 ]
Lee, Wei-Bin [4 ]
Affiliations
[1] Natl Yang Ming Chiao Tung Univ, Dept Comp Sci, Hsinchu 300, Taiwan
[2] Natl Taiwan Univ Sci & Technol, Dept Informat Management, Taipei 10607, Taiwan
[3] Natl Yang Ming Chiao Tung Univ, Dept Elect & Elect Engn, Hsinchu 300, Taiwan
[4] Hon Hai Res Inst, Taipei, Taiwan
Keywords
Adversarial attack; Machine learning; Intrusion detection; Distance-based optimization; Electronic vehicle;
DOI
10.1016/j.cose.2025.104322
CLC Number
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812 ;
Abstract
Controller Area Networks (CAN) have recently faced serious security threats due to their inherent vulnerabilities and the increasing sophistication of cyberattacks targeting automotive and industrial systems. This paper focuses on enhancing the security of CAN, which currently lacks adequate defense mechanisms. We propose integrating Machine Learning-based Intrusion Detection Systems (ML-based IDS) into the network to address this vulnerability. However, ML systems are susceptible to adversarial attacks that cause misclassification of data. We introduce three defense combination methods to mitigate this risk: adversarial training, ensemble learning, and distance-based optimization. In the distance-based optimization, we employ a simulated annealing algorithm to optimize the distance moved in feature space, aiming to minimize intra-class distance and maximize inter-class distance. Our results show that the ZOO attack is the most potent adversarial attack, significantly degrading model performance. Among the models, the basic models achieve an F1 score of 0.99, with the CNN being the most robust against adversarial attacks. Under known adversarial attacks, the average F1 score drops to 0.56. Adversarial training with triplet loss performs poorly, achieving only 0.64, while our defense method attains the highest F1 score of 0.97. Under unknown adversarial attacks, the F1 score drops to 0.24; adversarial training with triplet loss scores 0.47, while our defense method again achieves the highest score, 0.61. These results demonstrate our method's strong performance against both known and unknown adversarial attacks.
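The abstract does not give the paper's exact formulation of the distance-based optimization; as a minimal illustrative sketch (toy data, a single per-class shift vector, and a geometric cooling schedule are all assumptions), simulated annealing over a feature-space shift that minimizes intra-class distance minus inter-class distance might look like:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-class feature set standing in for extracted CAN-message features.
X = np.vstack([rng.normal(0.0, 1.0, (50, 4)),
               rng.normal(1.5, 1.0, (50, 4))])
y = np.array([0] * 50 + [1] * 50)

def objective(shift):
    """Intra-class distance minus inter-class distance after shifting class 1.

    Lower is better: tight clusters (small intra) far apart (large inter).
    """
    Xs = X.copy()
    Xs[y == 1] += shift
    c0, c1 = Xs[y == 0].mean(axis=0), Xs[y == 1].mean(axis=0)
    intra = (np.linalg.norm(Xs[y == 0] - c0, axis=1).mean()
             + np.linalg.norm(Xs[y == 1] - c1, axis=1).mean())
    inter = np.linalg.norm(c0 - c1)
    return intra - inter

# Simulated annealing over the shift vector.
shift = np.zeros(4)
initial = objective(shift)
best, best_shift = initial, shift
T = 1.0
for step in range(2000):
    cand = shift + rng.normal(0.0, 0.1, 4)
    delta = objective(cand) - objective(shift)
    # Accept improvements always; accept worse moves with probability exp(-delta/T).
    if delta < 0 or rng.random() < np.exp(-delta / T):
        shift = cand
    if objective(shift) < best:
        best, best_shift = objective(shift), shift.copy()
    T *= 0.999  # geometric cooling

print(f"objective moved from {initial:.3f} to {best:.3f}")
```

The same acceptance rule applies regardless of how the move in feature space is parameterized; a per-sample bounded perturbation, rather than a single class-wide shift, would be closer to a realistic defense setting.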
Pages: 13