Investigating Concurrent Speech-based Designs for Information Communication

Cited by: 4
Authors
Abu ul Fazal, Muhammad [1 ]
Ferguson, Sam [1 ]
Johnston, Andrew [1 ]
Affiliations
[1] Univ Technol, Creat & Cognit Studios, Sydney, NSW, Australia
Keywords
Concurrent audio; Speech-based Information Comprehension; Dichotic listening; Diotic listening; Information Comprehension Study; Spatial cues; Intermittent & continuous audio presentation; COMPREHENSION; DISCOURSE;
DOI
10.1145/3243274.3243284
CLC classification
O42 [Acoustics];
Subject classification codes
070206; 082403;
Abstract
Speech-based information is usually communicated to users sequentially, yet users are capable of obtaining information from multiple voices concurrently. This implies that the sequential approach may under-utilize human perceptual capabilities and prevent users from performing optimally in an immersive environment. This paper reports on an experiment that tested different speech-based designs for concurrent information communication. Two audio streams drawn from two types of content were played concurrently to 34 users, in either continuous or intermittent form, under a variety of spatial configurations (diotic, diotic-monotic, and dichotic). In total, 12 concurrent speech-based design configurations were tested with each user. The results showed that concurrent speech-based designs combining the intermittent form with a spatial difference between the information streams produce comprehension equal to the level achieved with sequential information communication.
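The three spatial configurations named in the abstract follow standard dichotic-listening terminology: diotic presentation sends the same signal to both ears, dichotic sends a different stream to each ear, and the paper's diotic-monotic condition presents one stream to both ears while the other reaches only one ear. A minimal NumPy sketch of how two mono speech streams could be routed into a stereo buffer under these three modes is shown below; the function name and the exact diotic-monotic channel assignment are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def spatialize(a, b, mode):
    """Route two mono streams a and b into an (n, 2) stereo buffer
    under one of three spatial configurations. Columns are (left, right).
    NOTE: illustrative sketch only; channel assignments are assumed."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    if mode == "diotic":
        # Both streams summed; the identical mix reaches each ear.
        mix = 0.5 * (a + b)
        return np.stack([mix, mix], axis=1)
    if mode == "dichotic":
        # One stream per ear: a on the left, b on the right.
        return np.stack([a, b], axis=1)
    if mode == "diotic-monotic":
        # a reaches both ears; b is added to the left ear only (assumed layout).
        return np.stack([0.5 * (a + b), a], axis=1)
    raise ValueError(f"unknown mode: {mode}")
```

Crossing these three spatial modes with the continuous/intermittent presentation forms and the two content types yields the kind of 12-condition design the experiment describes.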
Pages: 8