Enhancing Smart Contract Security Through Multi-Agent Deep Reinforcement Learning Fuzzing: A Survey of Approaches and Techniques

Cited by: 0
Authors
Andrijasa, Muhammad Farman [1 ,2 ]
Ismail, Saiful Adli [1 ]
Ahmad, Norulhusna [1 ]
Yusop, Othman Mohd [1 ]
Affiliations
[1] Univ Teknol Malaysia, Fac Artificial Intelligence, Kuala Lumpur, Malaysia
[2] State Polytech Samarinda, Samarinda, Indonesia
Keywords
Smart contract security; multi-agent systems; deep reinforcement learning; fuzzing techniques; blockchain technology;
DOI
10.14569/IJACSA.2024.0150576
Chinese Library Classification
TP301 [Theory and Methods];
Subject Classification Code
081202
Abstract
Multi-Agent Systems (MAS) and Deep Reinforcement Learning (DRL) have emerged as powerful tools for strengthening security measures, particularly for smart contract security in blockchain technology. This literature review explores the integration of Multi-Agent DRL fuzzing techniques to bolster the security of smart contracts. The study examines the formalization of emergence in MAS, comprehensive surveys of multi-agent reinforcement learning, and progress on the state-explosion problem in model checking. By addressing challenges such as state-space explosion, real-time detection, and adaptability across blockchain platforms, researchers aim to advance the field of smart contract security. The review emphasizes the role of Multi-Agent DRL fuzzing in improving security-testing processes and calls for further research and collaboration to enhance the resilience and integrity of decentralized applications. Through advances in algorithmic efficiency, the incorporation of Explainable AI, cross-domain applications of MAS, and cooperation with blockchain development teams, smart contract security can progress toward robust and secure blockchain ecosystems.
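To make the core idea concrete, the following is a minimal illustrative sketch in Python of a multi-agent reinforcement-learning fuzzing loop: several agents each choose one transaction, a toy contract model returns coverage and a vulnerability signal, and a shared reward pushes the agents toward input sequences that expose the bug. This is not the system described in the survey; the contract model, action set, and tabular agents (standing in for deep networks) are all hypothetical.

import random

# Hypothetical stand-in for a smart contract: a deposit/withdraw routine
# whose "vulnerable" state is reached when the balance is fully drained.
def toy_contract(calls):
    balance = 100
    covered, bug = set(), False
    for action, amount in calls:
        covered.add(action)
        if action == "deposit":
            balance += amount
        elif action == "withdraw" and amount <= balance:
            balance -= amount
            if balance == 0:
                covered.add("drained")
                bug = True  # treat a drained balance as the vulnerability
    return covered, bug

# Action space shared by all agents: each agent picks one transaction.
ACTIONS = [("deposit", 10), ("withdraw", 10), ("withdraw", 50), ("withdraw", 100)]

class Agent:
    """Tabular epsilon-greedy learner standing in for a deep RL policy."""
    def __init__(self, eps=0.2, alpha=0.5):
        self.q = [0.0] * len(ACTIONS)
        self.eps, self.alpha = eps, alpha

    def act(self):
        if random.random() < self.eps:
            return random.randrange(len(ACTIONS))
        return max(range(len(ACTIONS)), key=lambda a: self.q[a])

    def learn(self, action, reward):
        # Move the action's value estimate toward the shared team reward.
        self.q[action] += self.alpha * (reward - self.q[action])

agents = [Agent() for _ in range(3)]   # three cooperating fuzzing agents
global_coverage = set()

for episode in range(300):
    choices = [agent.act() for agent in agents]
    covered, bug = toy_contract([ACTIONS[c] for c in choices])
    new_coverage = covered - global_coverage
    global_coverage |= covered
    # Shared reward: newly observed coverage plus a bonus for hitting the bug.
    reward = len(new_coverage) + (10.0 if bug else 0.0)
    for agent, choice in zip(agents, choices):
        agent.learn(choice, reward)
    if bug:
        print(f"episode {episode}: vulnerable state reached by actions {choices}")
        break

In a real Multi-Agent DRL fuzzer, the tabular update would be replaced by deep policy or value networks, the toy contract by instrumented EVM execution, and the coverage signal by branch or opcode coverage of the deployed bytecode.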
Pages: 754 - 767
Page count: 14
Related papers
50 records in total
  • [1] Enhancing Smart-Contract Security through Machine Learning: A Survey of Approaches and Techniques
    Jiang, Fan
    Chao, Kailin
    Xiao, Jianmao
    Liu, Qinghua
    Gu, Keyang
    Wu, Junyi
    Cao, Yuanlong
    [J]. ELECTRONICS, 2023, 12 (09)
  • [2] Multi-agent deep reinforcement learning: a survey
    Gronauer, Sven
    Diepold, Klaus
    [J]. ARTIFICIAL INTELLIGENCE REVIEW, 2022, 55 (02) : 895 - 943
  • [3] Deep Multi-Agent Reinforcement Learning: A Survey
    Liang, Xing-Xing
    Feng, Yang-He
    Ma, Yang
    Cheng, Guang-Quan
    Huang, Jin-Cai
    Wang, Qi
    Zhou, Yu-Zhen
    Liu, Zhong
    [J]. Zidonghua Xuebao/Acta Automatica Sinica, 2020, 46 (12): 2537 - 2557
  • [4] Survey of Fully Cooperative Multi-Agent Deep Reinforcement Learning
    Zhao, Liyang
    Chang, Tianqing
    Chu, Kaixuan
    Guo, Libin
    Zhang, Lei
    [J]. Computer Engineering and Applications, 2023, 59 (12) : 14 - 27
  • [5] BanditFuzz: Fuzzing SMT Solvers with Multi-agent Reinforcement Learning
    Scott, Joseph
    Sudula, Trishal
    Rehman, Hammad
    Mora, Federico
    Ganesh, Vijay
    [J]. FORMAL METHODS, FM 2021, 2021, 13047 : 103 - 121
  • [6] A survey on scalability and transferability of multi-agent deep reinforcement learning
    Yan, Chao
    Xiang, Xiao-Jia
    Xu, Xin
    Wang, Chang
    Zhou, Han
    Shen, Lin-Cheng
    [J]. Kongzhi yu Juece/Control and Decision, 2022, 37 (12): 3083 - 3102
  • [7] Multi-agent reinforcement learning: A survey
    Busoniu, Lucian
    Babuska, Robert
    De Schutter, Bart
    [J]. 2006 9TH INTERNATIONAL CONFERENCE ON CONTROL, AUTOMATION, ROBOTICS AND VISION, VOLS 1-5, 2006: 1133+
  • [8] Multi-Agent Deep Reinforcement Learning for Multi-Robot Applications: A Survey
    Orr, James
    Dutta, Ayan
    [J]. SENSORS, 2023, 23 (07)
  • [9] Deep reinforcement learning for multi-agent interaction
    Ahmed, Ibrahim H.
    Brewitt, Cillian
    Carlucho, Ignacio
    Christianos, Filippos
    Dunion, Mhairi
    Fosong, Elliot
    Garcin, Samuel
    Guo, Shangmin
    Gyevnar, Balint
    McInroe, Trevor
    Papoudakis, Georgios
    Rahman, Arrasy
    Schafer, Lukas
    Tamborski, Massimiliano
    Vecchio, Giuseppe
    Wang, Cheng
    Albrecht, Stefano V.
    [J]. AI COMMUNICATIONS, 2022, 35 (04) : 357 - 368