Using the DiCoT framework for integrated multimodal analysis in mixed-reality training environments

Cited by: 4
Authors
Vatral, Caleb [1 ]
Biswas, Gautam [1 ]
Cohn, Clayton [1 ]
Davalos, Eduardo [1 ]
Mohammed, Naveeduddin [1 ]
Affiliations
[1] Vanderbilt Univ, Inst Software Integrated Syst, Dept Comp Sci, Open Ended Learning Environm, Nashville, TN 37235 USA
Funding
U.S. National Science Foundation
Keywords
distributed cognition; learning analytics (LA); multimodal data; simulation based training (SBT); mixed reality (MR); DiCoT; human performance; multimodal learning analytics (MMLA); DISTRIBUTED COGNITION; SIMULATION; EDUCATION; MANNEQUIN; SAFETY; TRACKING; DESIGN; MODEL;
DOI
10.3389/frai.2022.941825
Chinese Library Classification (CLC)
TP18 [Artificial intelligence theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Simulation-based training (SBT) programs are commonly employed by organizations to train individuals and teams in the cognitive and psychomotor skills needed for effective workplace performance across a broad range of applications. Distributed cognition has become a popular cognitive framework for the design and evaluation of these SBT environments, with structured methodologies such as Distributed Cognition for Teamwork (DiCoT) used for analysis. However, the analyses and evaluations generated by such distributed cognition frameworks require extensive domain knowledge, manual coding, and interpretation, and the analysis is primarily qualitative. In this work, we propose and develop the application of multimodal learning analytics (MMLA) techniques to SBT scenarios. Using these analysis methods, the rich multimodal data collected in SBT environments can be used to generate more automated interpretations of trainee performance that supplement and extend traditional DiCoT analysis. To demonstrate the use of these methods, we present a case study of nurses training in a mixed-reality manikin-based (MRMB) training environment. We show how the combined analysis of the video, speech, and eye-tracking data collected as the nurses train in the MRMB environment supports and enhances traditional qualitative DiCoT analysis. By applying such quantitative, data-driven analysis methods, we can better analyze trainee activities online in SBT and MRMB environments. With continued development, these analysis methods could be used to provide targeted feedback to learners, a detailed review of training performance to instructors, and data-driven evidence to simulation designers for improving the environment.
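A core prerequisite of the combined video, speech, and eye-tracking analysis the abstract describes is fusing independently recorded modality streams onto one session timeline. The sketch below is a minimal illustration of that step, not the paper's actual pipeline; it assumes each modality produces timestamped event logs on a synchronized clock, and all event names and labels are hypothetical.

```python
from dataclasses import dataclass
import heapq


@dataclass
class Event:
    t: float        # seconds from session start (shared clock assumed)
    modality: str   # e.g., "gaze", "speech", "video"
    label: str      # annotation produced by that modality's pipeline


def fuse_streams(*streams):
    """Merge per-modality event streams (each sorted by time) into
    a single chronologically ordered timeline."""
    return list(heapq.merge(*streams, key=lambda e: e.t))


# Hypothetical per-modality logs from one training session
gaze = [Event(0.5, "gaze", "monitor"), Event(2.1, "gaze", "manikin")]
speech = [Event(1.0, "speech", "checking vitals")]
video = [Event(0.0, "video", "approach bed"), Event(2.0, "video", "touch arm")]

timeline = fuse_streams(gaze, speech, video)
# Events are now interleaved by timestamp, so cross-modal patterns
# (e.g., gaze shifting to the manikin just after a verbal check)
# can be read off a single ordered sequence.
```

In a real MRMB deployment, clock synchronization across recording devices is itself a nontrivial step that this sketch assumes away.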
Pages: 30