Read Between the Lines: An Annotation Tool for Multimodal Data for Learning

Cited by: 28
Authors
Di Mitri, Daniele [1 ]
Schneider, Jan [2 ]
Klemke, Roland [1 ]
Specht, Marcus [1 ]
Drachsler, Hendrik [1 ,2 ]
Affiliations
[1] Open Univ Netherlands, Welten Inst, Res Ctr Learning Teaching & Technol, Valkenburgerweg 177, NL-6401 AT Heerlen, Netherlands
[2] Leibniz Inst Res & Informat Educ, Rostocker Str 6, D-60323 Frankfurt, Germany
Keywords
Multimodal data; Internet of Things; Learning Analytics; Sensors
DOI
10.1145/3303772.3303776
CLC Number
TP301 [Theory, Methods]
Subject Classification Code
081202
Abstract
This paper introduces the Visual Inspection Tool (VIT), which supports researchers in the annotation of multimodal data, as well as in its processing and exploitation for learning purposes. While most existing Multimodal Learning Analytics (MMLA) solutions are tailor-made for specific learning tasks and sensors, the VIT addresses data annotation flexibly, for different types of learning tasks captured with a customisable set of sensors. The VIT supports MMLA researchers in 1) triangulating multimodal data with video recordings; 2) segmenting the multimodal data into time-intervals and adding annotations to those intervals; 3) downloading the annotated dataset and using it for multimodal data analysis. The VIT is a crucial component that was so far missing from the available tools for MMLA research. In filling this gap we also identified an integrated workflow that characterises current MMLA research. We call this workflow the Multimodal Learning Analytics Pipeline, a toolkit for orchestrating the use and application of various MMLA tools.
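The interval-based annotation step described in the abstract (segmenting a recording into time-intervals and attaching labels and sensor values to each) can be illustrated with a minimal sketch. The data structures and function names below are hypothetical illustrations of the concept, not the VIT's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class Interval:
    # A time segment of the recorded session (seconds from start)
    # with per-sensor annotations attached to it.
    start: float
    end: float
    annotations: dict = field(default_factory=dict)

def annotate(intervals, samples):
    """Attach each timestamped sensor sample to the interval covering it.

    `samples` is a list of (timestamp, sensor_name, value) tuples,
    mimicking a multimodal stream synchronised with the video timeline.
    """
    for ts, sensor, value in samples:
        for iv in intervals:
            if iv.start <= ts < iv.end:
                iv.annotations.setdefault(sensor, []).append(value)
    return intervals

# Segment a 10-second recording into two intervals and sort three
# hypothetical sensor samples (accelerometer, heart rate) into them.
intervals = [Interval(0.0, 5.0), Interval(5.0, 10.0)]
samples = [(1.2, "accel", 0.4), (6.7, "accel", 0.9), (7.1, "hr", 82)]
annotate(intervals, samples)
```

In this sketch the annotated intervals could then be serialised (e.g. to CSV or JSON) for the downstream analysis step the abstract mentions.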
Pages: 51-60
Number of pages: 10