Collaborative editing of Multimodal Annotation Data

Cited: 0
Author
Wieschebrink, Stephan [1]
Affiliation
[1] Univ Bielefeld, CRC 673 Alignment Commun, D-33501 Bielefeld, Germany
Keywords
Hyperdocument Systems; XML data binding; software engineering; online collaboration; CSCW
DOI
none
CLC classification
TP [automation technology, computer technology]
Discipline code
0812
Abstract
The annotation of multimodal speech corpora is a particularly tedious task: annotatable events can be composed of smaller events that span several modalities (e.g. speech and gesture), so a wide range of different tools must operate on the same data in order to cover all the modalities and layers of abstraction within multimodal data. MonadicDom4J is a highly generic, general-purpose Java-based rich-client framework that makes it possible to operate simultaneously on any kind of XML data through several different views and from several remote locations. It dynamically loads the plugins needed to render a given type of XML markup, and handles concurrency between different sites viewing the same data by means of differential synchronization. The demonstration involves several applications, ranging from general textual hyperdocument editing to multimodal annotation tools, whose contents can be freely intermixed, interlinked and transcluded into different contexts using drag-and-drop interaction. The audience will have the opportunity to try collaborative editing on the presented examples from their own devices.
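The concurrency mechanism named in the abstract, differential synchronization, works by keeping a "shadow" copy of the last agreed-on state on each side: a client diffs its current document against the shadow and sends only that patch, which the other side applies to its own (possibly diverged) copy. The sketch below is not from the paper; it is a minimal illustration of one half-cycle of that idea over plain text, using Python's standard `difflib` and naive index-based patching (a production system such as MonadicDom4J would need fuzzy patching and would operate on XML rather than flat strings):

```python
import difflib

def make_patch(shadow, current):
    """Diff the client's current text against its shadow copy.

    Returns edits as (op, i1, i2, replacement), with i1/i2 indexing
    into the shadow."""
    sm = difflib.SequenceMatcher(None, shadow, current)
    return [(op, i1, i2, current[j1:j2])
            for op, i1, i2, j1, j2 in sm.get_opcodes()
            if op != "equal"]

def apply_patch(text, patch):
    """Replay shadow-indexed edits on a text, right to left so that
    earlier offsets stay valid. Naive: assumes the target has not
    diverged in the patched regions."""
    for op, i1, i2, repl in sorted(patch, key=lambda e: e[1], reverse=True):
        text = text[:i1] + repl + text[i2:]
    return text

# One half-cycle of differential synchronization:
shadow = "The quick brown fox"        # last state both sides agreed on
client = "The quick red fox"          # client's local edit
server = "The quick brown fox jumps"  # concurrent server-side edit

patch = make_patch(shadow, client)
merged = apply_patch(server, patch)   # client's edit lands on the server copy
print(merged)  # → "The quick red fox jumps"
```

Because the patch is expressed against the shared shadow rather than against the server's live text, both sides' concurrent edits survive the merge; a full implementation then updates the shadow and runs the symmetric half-cycle in the other direction.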
Pages: 69-71
Page count: 3