Robustness evaluation of trust and reputation systems using a deep reinforcement learning approach

Cited by: 2
Authors
Bidgoly, Amir Jalaly [1 ]
Arabi, Fereshteh [1 ]
Affiliations
[1] Univ Qom, Dept Informat Technol & Comp Engn, Qom, Iran
Keywords
Trust and reputation; Robustness; Attacks; Deep reinforcement learning; QUANTITATIVE VERIFICATION; MODEL; MANAGEMENT; FRAMEWORK;
DOI
10.1016/j.cor.2023.106250
Chinese Library Classification (CLC)
TP39 [Computer applications]
Discipline classification codes
081203 ; 0835 ;
Abstract
One of the biggest challenges facing trust and reputation systems (TRSs) is evaluating their ability to withstand attacks, since these systems are vulnerable to many types of attack. Simulation methods have been used to evaluate TRSs, but they are limited: they cannot detect new attacks and do not guarantee the system's overall robustness. Verification methods address this limitation by examining the entire state space and detecting all possible attacks, yet they are often impractical for large models and real environments because they suffer from the state-space explosion problem. To tackle this issue, we propose a deep reinforcement learning approach for evaluating the robustness of TRSs, in which an agent learns how to attack a system and finds the best attack plan without prior knowledge. Our method also avoids the state-space explosion problem because it uses a deep Q-network instead of storing and examining the entire state space. We tested the proposed method on five well-known reputation models, assessing attack goals such as selfishness, maliciousness, competition, and slandering. The results show that our method identifies the best attack plan and executes it successfully, demonstrating its effectiveness in evaluating the robustness of TRSs.
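The core idea in the abstract — an agent learning an attack plan against a reputation system by trial and error, rather than by enumerating the state space — can be illustrated with a hedged, hypothetical sketch. The paper uses a deep Q-network on real reputation models; the toy below substitutes a tabular Q-learner and an invented trust-weighted reputation environment (the `step` dynamics, state discretization, and all constants are assumptions for illustration only). The attack goal sketched is slandering: drive a target's reputation down while managing the attacker's own trust, since a trust-weighted system discounts ratings from distrusted raters.

```python
import random

# Toy "slandering" environment (hypothetical, not from the paper):
# the attacker rates a target in a system that weights ratings by
# the rater's own trust score.
HONEST, SLANDER = 0, 1

def step(trust, rep, action):
    """One interaction; returns (new_trust, new_rep, reward).

    Reward is the drop in the target's reputation at this step."""
    if action == HONEST:
        # Agreeing with consensus rebuilds the attacker's trust,
        # but lets the target's reputation recover slightly.
        new_trust = min(1.0, trust + 0.1)
        new_rep = min(1.0, rep + 0.02)
    else:
        # Unfair negative rating: impact is weighted by current trust,
        # and deviating from consensus erodes that trust.
        new_rep = max(0.0, rep - 0.15 * trust)
        new_trust = max(0.0, trust - 0.2)
    return new_trust, new_rep, rep - new_rep

def state_key(trust, rep):
    # Coarse discretization so a small Q-table suffices; the paper's
    # deep Q-network replaces exactly this kind of table.
    return (round(trust, 1), round(rep, 1))

def train(episodes=5000, horizon=20, alpha=0.2, gamma=0.95, eps=0.3):
    """Epsilon-greedy tabular Q-learning; no prior attack knowledge."""
    Q = {}
    for _ in range(episodes):
        trust, rep = 0.8, 0.8
        for _ in range(horizon):
            q = Q.setdefault(state_key(trust, rep), [0.0, 0.0])
            a = random.randrange(2) if random.random() < eps else q.index(max(q))
            trust, rep, r = step(trust, rep, a)
            q_next = Q.setdefault(state_key(trust, rep), [0.0, 0.0])
            q[a] += alpha * (r + gamma * max(q_next) - q[a])
    return Q

def rollout(policy, horizon=20):
    """Run a policy greedily and return the target's final reputation."""
    trust, rep = 0.8, 0.8
    for _ in range(horizon):
        trust, rep, _ = step(trust, rep, policy(trust, rep))
    return rep

random.seed(0)
Q = train()
learned = rollout(lambda t, r: max((0, 1),
                                   key=lambda a: Q.get(state_key(t, r), [0.0, 0.0])[a]))
naive = rollout(lambda t, r: SLANDER)
```

Under these toy dynamics, slandering at every step quickly destroys the attacker's trust, so the damage stalls; the learned plan interleaves honest ratings to rebuild trust before each slander, driving the target's reputation lower than the naive attack does. This mirrors the abstract's claim that the agent discovers a non-obvious attack plan on its own.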
Pages: 11