Trust in Autonomous Systems for Threat Analysis: A Simulation Methodology

Cited by: 1
Authors
Matthews, Gerald [1 ]
Panganiban, April Rose [2 ]
Bailey, Rachel [2 ]
Lin, Jinchao [1 ]
Affiliations
[1] Univ Cent Florida, Inst Simulat & Training, Orlando, FL 32816 USA
[2] Air Force Res Lab, Wright Patterson AFB, OH USA
Keywords
Autonomous systems; Trust; Threat detection; Simulation; Cognitive processes; INDIVIDUAL-DIFFERENCES; AUTOMATION; METAANALYSIS;
DOI
10.1007/978-3-319-91584-5_27
CLC Number
TP3 [Computing technology, computer technology];
Subject Classification Code
0812
Abstract
Human operators will increasingly team with autonomous systems in military and security settings, for example, in the evaluation and analysis of threats. Determining whether humans are threatening is a particular challenge to which future autonomous systems may contribute. Optimal trust calibration is critical for mission success, but most trust research has addressed conventional automated systems of limited intelligence. This article identifies multiple factors that may influence trust in autonomous systems. Trust may be undermined by various sources of demand and uncertainty. These include the cognitive demands resulting from the complexity and unpredictability of the system, "social" demands resulting from the system's capacity to function as a team member, and self-regulative demands associated with perceived threats to personal competence. It is proposed that existing gaps in trust research may be addressed using simulation methodologies. A simulated environment developed by the research team is described. It represents a "town-clearing" task in which the human operator teams with a robot that can be equipped with various sensors and with software for intelligent analysis of sensor data. The functionality of the simulator is illustrated, and future research directions are outlined.
Pages: 341 - 353
Number of pages: 13
Related Papers
50 records in total
  • [1] Trust in Autonomous Systems-iTrust Lab: Future Directions for Analysis of Trust With Autonomous Systems
    Nahavandi, Saeid
    IEEE SYSTEMS MAN AND CYBERNETICS MAGAZINE, 2019, 5 (03): : 52 - 59
  • [2] Building Trust in Autonomous Systems: Opportunities for Modelling and Simulation
    Mansfield, Thomas
    Caamano, Pilar
    Godfrey, Sasha Blue
    Carrera, Arnau
    Tremori, Alberto
    Nandakumar, Girish
    Moberly, Kevin
    Cronin, Jeremiah
    Da Deppo, Serge
    MODELLING AND SIMULATION FOR AUTONOMOUS SYSTEMS (MESAS 2021), 2022, 13207 : 424 - 439
  • [3] Predictive Runtime Simulation for building Trust in Cooperative Autonomous Systems
    Cioroaica, Emilia
    Schneider, Daniel
    AlZughbi, Hanna
    Reich, Jan
    Adler, Rasmus
    Braun, Tobias
    2019 49TH ANNUAL IEEE/IFIP INTERNATIONAL CONFERENCE ON DEPENDABLE SYSTEMS AND NETWORKS WORKSHOPS (DSN-W), 2019, : 86 - 89
  • [4] A trust analysis methodology for pervasive computing systems
    Lo Presti, S
    Butler, M
    Leuschel, M
    Booth, C
    TRUSTING AGENTS FOR TRUSTING ELECTRONIC SOCIETIES: THEORY AND APPLICATIONS IN HCI AND E-COMMERCE, 2005, 3577 : 129 - 143
  • [5] Autonomous trust construction in multi-agent systems - a graph theory methodology
    Jiang, YC
    Xia, ZY
    Zhong, YP
    Zhang, SY
    ADVANCES IN ENGINEERING SOFTWARE, 2005, 36 (02) : 59 - 66
  • [6] A Survey on Trust in Autonomous Systems
    Shahrdar, Shervin
    Menezes, Luiza
    Nojoumian, Mehrdad
    INTELLIGENT COMPUTING, VOL 2, 2019, 857 : 368 - 386
  • [7] Autonomous Systems, Trust, and Guarantees
    TaheriNejad, Nima
    Herkersdorf, Andreas
    Jantsch, Axel
    IEEE DESIGN & TEST, 2022, 39 (01) : 42 - 48
  • [8] A Reliability Evaluation Methodology for X-in-the-Loop Simulation in Autonomous Vehicle Systems
    Oh, Taeyoung
    Cho, Sungwoo
    Yoo, Jinwoo
    IEEE Access, 2024, 12 : 193622 - 193640
  • [9] Trust and resilient autonomous driving systems
    Henschke, Adam
    ETHICS AND INFORMATION TECHNOLOGY, 2020, 22 (01) : 81 - 92