Using the DiCoT framework for integrated multimodal analysis in mixed-reality training environments

Cited by: 4
Authors
Vatral, Caleb [1 ]
Biswas, Gautam [1 ]
Cohn, Clayton [1 ]
Davalos, Eduardo [1 ]
Mohammed, Naveeduddin [1 ]
Affiliations
[1] Vanderbilt Univ, Inst Software Integrated Syst, Dept Comp Sci, Open Ended Learning Environm, Nashville, TN 37235 USA
Source
FRONTIERS IN ARTIFICIAL INTELLIGENCE, 2022
Funding
US National Science Foundation;
Keywords
distributed cognition; learning analytics (LA); multimodal data; simulation based training (SBT); mixed reality (MR); DiCoT; human performance; multimodal learning analytics (MMLA); DISTRIBUTED COGNITION; SIMULATION; EDUCATION; MANNEQUIN; SAFETY; TRACKING; DESIGN; MODEL;
DOI
10.3389/frai.2022.941825
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Simulation-based training (SBT) programs are commonly employed by organizations to train individuals and teams in the cognitive and psychomotor skills needed for effective workplace performance across a broad range of applications. Distributed cognition has become a popular cognitive framework for the design and evaluation of these SBT environments, with structured methodologies such as Distributed Cognition for Teamwork (DiCoT) used for analysis. However, the analyses and evaluations generated by such distributed cognition frameworks require extensive domain knowledge and manual coding and interpretation, and the resulting analysis is primarily qualitative. In this work, we propose and develop the application of multimodal learning analytics (MMLA) techniques to SBT scenarios. With these methods, the rich multimodal data collected in SBT environments can be used to generate more automated interpretations of trainee performance that supplement and extend traditional DiCoT analysis. To demonstrate these methods, we present a case study of nurses training in a mixed-reality manikin-based (MRMB) training environment. We show how the combined analysis of the video, speech, and eye-tracking data collected as the nurses train in the MRMB environment supports and enhances traditional qualitative DiCoT analysis. By applying such quantitative, data-driven analysis methods, we can better analyze trainee activities online in SBT and MRMB environments. With continued development, these methods could provide targeted feedback to learners, detailed reviews of training performance to instructors, and data-driven evidence for improving the environment to simulation designers.
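To make the kind of quantitative, data-driven multimodal analysis described above concrete, the Python sketch below shows one simple way two of the streams mentioned in the abstract could be combined: eye-tracking fixations and speech segments are placed on a shared session timeline, and a basic measure is computed (the share of speaking time spent gazing at the manikin). This is a minimal illustration only; the data structures, field names, gaze-target labels, and example values are assumptions made here and do not represent the authors' actual analysis pipeline.

# Illustrative sketch (not the authors' pipeline): time-align eye-tracking
# fixations and speech segments on a shared session clock and compute the
# fraction of speaking time during which gaze was on the manikin.
# All labels and example data are hypothetical.
from dataclasses import dataclass

@dataclass
class Interval:
    start: float  # seconds from session start
    end: float
    label: str    # e.g., gaze target or speaker ID

def overlap(a: Interval, b: Interval) -> float:
    """Length in seconds of the temporal overlap between two intervals."""
    return max(0.0, min(a.end, b.end) - max(a.start, b.start))

def gaze_share_during_speech(gaze, speech, target="manikin"):
    """Fraction of total speaking time during which gaze was on `target`."""
    speaking = sum(s.end - s.start for s in speech)
    on_target = sum(
        overlap(g, s) for s in speech for g in gaze if g.label == target
    )
    return on_target / speaking if speaking else 0.0

# Hypothetical example: two gaze fixations and one utterance.
gaze = [Interval(0.0, 2.0, "manikin"), Interval(2.0, 3.0, "monitor")]
speech = [Interval(1.0, 3.0, "nurse_1")]
print(gaze_share_during_speech(gaze, speech))  # prints 0.5

Interval overlap on a common clock is the core alignment step; richer measures (for example, gaze distributions across equipment and teammates, or linking speech content to concurrent actions) would build on the same time-aligned representation.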
Pages: 30
Related papers (50 records; first 10 listed)
  • [1] Using mixed-reality to develop smart environments
    Pena-Rios, Anasol
    Callaghan, Vic
    Gardner, Michael
    Alhaddad, Mohammed J.
    2014 INTERNATIONAL CONFERENCE ON INTELLIGENT ENVIRONMENTS (IE), 2014, : 182 - 189
  • [2] Teaching training in a mixed-reality integrated learning environment
    Ke, Fengfeng
    Lee, Sungwoong
    Xu, Xinhao
    COMPUTERS IN HUMAN BEHAVIOR, 2016, 62 : 212 - 220
  • [3] Mixed-Reality Humans for Team Training
    Lok, Benjamin
    Chuah, Joon Hao
    Robb, Andrew
    Cordar, Andrew
    Lampotang, Samsun
    Wendling, Adam
    White, Casey
    IEEE COMPUTER GRAPHICS AND APPLICATIONS, 2014, 34 (03) : 71 - 74
  • [4] Mixed-Reality Demonstration and Training of Glassblowing
    Carre, Anne Laure
    Dubois, Arnaud
    Partarakis, Nikolaos
    Zabulis, Xenophon
    Patsiouras, Nikolaos
    Mantinaki, Elina
    Zidianakis, Emmanouil
    Cadi, Nedjma
    Baka, Evangelia
    Thalmann, Nadia Magnenat
    Makrygiannis, Dimitrios
    Glushkova, Alina
    Manitsaris, Sotirios
    HERITAGE, 2022, 5 (01) : 103 - 128
  • [5] Developing xReality objects for mixed-reality environments
    Pena-Rios, Anasol
    Callaghan, Vic
    Gardner, Michael
    Alhaddad, Mohammed J.
    WORKSHOP PROCEEDINGS OF THE 9TH INTERNATIONAL CONFERENCE ON INTELLIGENT ENVIRONMENTS, 2013, 17 : 190 - 200
  • [6] Mixed-Reality Learning Environments in Teacher Education: An Analysis of TeachLivE™ Research
    Ersozlu, Zara
    Ledger, Susan
    Ersozlu, Alpay
    Mayne, Fiona
    Wildy, Helen
    SAGE OPEN, 2021, 11 (03):
  • [7] A Mixed-Reality Training System for Teleoperated Biomanipulations
    Mattos, Leonardo
    Caldwell, Darwin G.
    THIRD INTERNATIONAL CONFERENCE ON ADVANCES IN COMPUTER-HUMAN INTERACTIONS: ACHI 2010, 2010, : 169 - 174
  • [8] Construction and Implementation of Embodied Mixed-Reality Learning Environments
    Liu Shuguang
    Ba Lin
    2021 INTERNATIONAL CONFERENCE ON BIG DATA ENGINEERING AND EDUCATION (BDEE 2021), 2021, : 126 - 131
  • [9] Design of an Interactive Table for Mixed-Reality Learning Environments
    Su, Mu-Chun
    Chen, Gwo-Dong
    Tsai, Yi-Shan
    Yao, Ren-Hao
    Chou, Chung-Kuang
    Jinawi, Yohannes Budiono
    Huang, De-Yuan
    Hsieh, Yi-Zeng
    Lin, Shih-Chieh
    LEARNING BY PLAYING: GAME-BASED EDUCATION SYSTEM DESIGN AND DEVELOPMENT, 2009, 5670 : 489 - 494
  • [10] CONFIGURING VIRTUAL REALITY DISPLAYS IN A MIXED-REALITY ENVIRONMENT FOR LVC TRAINING
    Newendorp, Brandon J.
    Noon, Christian
    Holub, Joe
    Winer, Eliot H.
    Gilbert, Stephen
    de la Cruz, Julio
    PROCEEDINGS OF THE ASME WORLD CONFERENCE ON INNOVATIVE VIRTUAL REALITY - 2011, 2011, : 423 - +