Using the DiCoT framework for integrated multimodal analysis in mixed-reality training environments

Cited by: 4
Authors
Vatral, Caleb [1 ]
Biswas, Gautam [1 ]
Cohn, Clayton [1 ]
Davalos, Eduardo [1 ]
Mohammed, Naveeduddin [1 ]
Affiliations
[1] Vanderbilt Univ, Inst Software Integrated Syst, Dept Comp Sci, Open Ended Learning Environm, Nashville, TN 37235 USA
Source
FRONTIERS IN ARTIFICIAL INTELLIGENCE, 2022
Funding
U.S. National Science Foundation;
Keywords
distributed cognition; learning analytics (LA); multimodal data; simulation based training (SBT); mixed reality (MR); DiCoT; human performance; multimodal learning analytics (MMLA); DISTRIBUTED COGNITION; SIMULATION; EDUCATION; MANNEQUIN; SAFETY; TRACKING; DESIGN; MODEL;
DOI
10.3389/frai.2022.941825
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Simulation-based training (SBT) programs are commonly employed by organizations to train individuals and teams in the cognitive and psychomotor skills needed for effective workplace performance across a broad range of applications. Distributed cognition has become a popular cognitive framework for the design and evaluation of these SBT environments, with structured methodologies such as Distributed Cognition for Teamwork (DiCoT) used for analysis. However, the analyses and evaluations generated by such distributed cognition frameworks require extensive domain knowledge and manual coding and interpretation, and the resulting analysis is primarily qualitative. In this work, we propose and develop the application of multimodal learning analytics (MMLA) techniques to SBT scenarios. These methods leverage the rich multimodal data collected in SBT environments to generate more automated interpretations of trainee performance that supplement and extend traditional DiCoT analysis. To demonstrate these methods, we present a case study of nurses training in a mixed-reality manikin-based (MRMB) environment. We show how the combined analysis of the video, speech, and eye-tracking data collected as the nurses train in the MRMB environment supports and enhances traditional qualitative DiCoT analysis. By applying such quantitative, data-driven analysis methods, we can better analyze trainee activities online in SBT and MRMB environments. With continued development, these analysis methods could provide targeted feedback to learners, detailed reviews of training performance to instructors, and data-driven evidence for improving the environment to simulation designers.
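To make the quantitative, data-driven supplement described in the abstract concrete, the sketch below shows one simple way eye-tracking and speech streams could be fused into a joint metric (the share of speaking time spent fixating each area of interest). This is a minimal illustration only, not the authors' pipeline; the Fixation and Utterance records, their fields, and the gaze_share_during_speech function are hypothetical stand-ins for whatever pre-processed event streams a real MRMB setup would produce.

# Illustrative sketch with hypothetical data structures; not code from the paper.
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Fixation:
    """One eye-tracking fixation, in seconds from session start."""
    start: float
    end: float
    aoi: str  # area of interest, e.g. "manikin", "vitals monitor", "teammate"

@dataclass
class Utterance:
    """One transcribed speech segment."""
    start: float
    end: float
    speaker: str

def gaze_share_during_speech(fixations: List[Fixation],
                             utterances: List[Utterance]) -> Dict[str, float]:
    """Fraction of total speaking time spent fixating each AOI."""
    overlap_by_aoi: Dict[str, float] = {}
    speech_time = 0.0
    for utt in utterances:
        speech_time += utt.end - utt.start
        for fix in fixations:
            # Temporal overlap between this fixation and this utterance.
            overlap = min(fix.end, utt.end) - max(fix.start, utt.start)
            if overlap > 0:
                overlap_by_aoi[fix.aoi] = overlap_by_aoi.get(fix.aoi, 0.0) + overlap
    if speech_time == 0:
        return {}
    return {aoi: t / speech_time for aoi, t in overlap_by_aoi.items()}

if __name__ == "__main__":
    fixations = [Fixation(0.0, 2.0, "manikin"), Fixation(2.0, 3.5, "vitals monitor")]
    utterances = [Utterance(1.0, 3.0, "nurse_1")]
    print(gaze_share_during_speech(fixations, utterances))
    # {'manikin': 0.5, 'vitals monitor': 0.5}

A metric of this kind could be computed continuously during a session, which is what makes the approach a quantitative complement to manually coded DiCoT observations rather than a replacement for them.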
Pages: 30