Investigating Concurrent Speech-based Designs for Information Communication

Cited by: 4
Authors
Abu ul Fazal, Muhammad [1 ]
Ferguson, Sam [1 ]
Johnston, Andrew [1 ]
Affiliations
[1] Univ Technol, Creat & Cognit Studios, Sydney, NSW, Australia
Keywords
Concurrent audio; Speech-based Information Comprehension; Dichotic listening; Diotic listening; Information Comprehension Study; Spatial cues; Intermittent & continuous audio presentation; COMPREHENSION; DISCOURSE;
DOI
10.1145/3243274.3243284
Chinese Library Classification (CLC)
O42 [Acoustics];
Discipline codes
070206; 082403;
Abstract
Speech-based information is usually communicated to users sequentially, yet users are capable of obtaining information from multiple voices concurrently. This implies that the sequential approach may under-utilize human perceptual capabilities and restrict users from performing optimally in an immersive environment. This paper reports on an experiment that tested different speech-based designs for concurrent information communication. Two audio streams drawn from two types of content were played concurrently to 34 users, in either continuous or intermittent form, under a variety of spatial configurations (i.e., diotic, diotic-monotic, and dichotic). In total, 12 concurrent speech-based design configurations were tested with each user. The results showed that concurrent speech-based designs combining the intermittent form with a spatial difference between the information streams produced comprehension equal to the level achieved with sequential information communication.
Pages: 8
Related papers
50 records in total
  • [31] Speaker normalisation for speech-based emotion detection
    Sethu, Vidhyasaharan
    Ambikairajah, Eliathamby
    Epps, Julien
    PROCEEDINGS OF THE 2007 15TH INTERNATIONAL CONFERENCE ON DIGITAL SIGNAL PROCESSING, 2007, : 611 - +
  • [32] VOICE: a framework for speech-based mobile systems
    Sharp, Adam
    Kurkovsky, Stan
    21ST INTERNATIONAL CONFERENCE ON ADVANCED NETWORKING AND APPLICATIONS WORKSHOPS/SYMPOSIA, VOL 2, PROCEEDINGS, 2007, : 38 - +
  • [33] Speech-Based Annotation and Retrieval of Digital Photographs
    Hazen, Timothy J.
    Sherry, Brennan
    Adler, Mark
    INTERSPEECH 2007: 8TH ANNUAL CONFERENCE OF THE INTERNATIONAL SPEECH COMMUNICATION ASSOCIATION, VOLS 1-4, 2007, : 2077 - +
  • [34] An Exploration of Speech-Based Productivity Support in the Car
    Martelaro, Nikolas
    Teevan, Jaime
    Iqbal, Shamsi T.
    CHI 2019: PROCEEDINGS OF THE 2019 CHI CONFERENCE ON HUMAN FACTORS IN COMPUTING SYSTEMS, 2019,
  • [35] Speech-based Interaction: Myths, Challenges, and Opportunities
    Munteanu, Cosmin
    Penn, Gerald
    PROCEEDINGS OF THE 16TH ACM INTERNATIONAL CONFERENCE ON HUMAN-COMPUTER INTERACTION WITH MOBILE DEVICES AND SERVICES (MOBILEHCI'14), 2014, : 567 - 568
  • [36] The SRI Speech-Based Collaborative Learning Corpus
    Richey, Colleen
    D'Angelo, Cynthia
    Alozie, Nonye
    Bratt, Harry
    Shriberg, Elizabeth
    17TH ANNUAL CONFERENCE OF THE INTERNATIONAL SPEECH COMMUNICATION ASSOCIATION (INTERSPEECH 2016), VOLS 1-5: UNDERSTANDING SPEECH PROCESSING IN HUMANS AND MACHINES, 2016, : 1550 - 1554
  • [37] Contemporary Reflections on Speech-Based Language Learning
    Gustafson, Marianne
    VOLTA REVIEW, 2009, 109 (2-3) : 143 - 153
  • [38] Speech-Based Automated Cognitive Status Assessment
    Hakkani-Tuer, Dilek
    Vergyri, Dimitra
    Tur, Gokhan
    11TH ANNUAL CONFERENCE OF THE INTERNATIONAL SPEECH COMMUNICATION ASSOCIATION 2010 (INTERSPEECH 2010), VOLS 1-2, 2010, : 258 - +
  • [39] Speech-Based Interface For Visually Impaired Users
    Huang, Yi-Chin
    Tsai, Cheng-Hung
    IEEE 20TH INTERNATIONAL CONFERENCE ON HIGH PERFORMANCE COMPUTING AND COMMUNICATIONS / IEEE 16TH INTERNATIONAL CONFERENCE ON SMART CITY / IEEE 4TH INTERNATIONAL CONFERENCE ON DATA SCIENCE AND SYSTEMS (HPCC/SMARTCITY/DSS), 2018, : 1223 - 1228
  • [40] Automatic Speech-Based Smoking Status Identification
    Ma, Zhizhong
    Singh, Satwinder
    Qiu, Yuanhang
    Hou, Feng
    Wang, Ruili
    Bullen, Christopher
    Chu, Joanna Ting Wai
    INTELLIGENT COMPUTING, VOL 3, 2022, 508 : 193 - 203