Toward Connecting Speech Acts and Search Actions in Conversational Search Tasks

Cited by: 0
Authors
Ghosh, Souvick [1 ]
Ghosh, Satanu [2 ]
Shah, Chirag [3 ]
Affiliations
[1] San Jose State Univ, San Jose, CA 95192 USA
[2] Univ New Hampshire, Durham, NH 03824 USA
[3] Univ Washington, Seattle, WA 98195 USA
Keywords
Conversational Search Systems; Wizard-of-Oz Study; Experimental; Speech Acts; Dialogue Acts; Spoken Search; INFORMATION; DISCOURSE; USER
DOI
10.1109/JCDL57899.2023.00027
CLC number
TP [Automation Technology, Computer Technology]
Discipline code
0812
Abstract
Conversational search systems can improve user experience in digital libraries by facilitating a natural and intuitive way to interact with library content. However, most conversational systems are limited to performing simple tasks and controlling smart devices. Therefore, there is a need for systems that can accurately understand the user's information requirements and perform the appropriate search activity. Prior research on intelligent systems has suggested that the functional aspect of discourse (search intent) can be understood by identifying the speech acts in user dialogues. In this work, we automatically identify the speech acts associated with spoken utterances and use them to predict system-level search actions. First, we conducted a Wizard-of-Oz study to collect data from 75 search sessions. We performed thematic analysis to curate a gold-standard dataset of human-system interactions, containing 1,834 utterances and 509 system actions across three information-seeking scenarios. Next, we developed attention-based deep neural networks to understand natural language and predict speech acts. The predicted speech acts were then used to predict the corresponding system-level search actions. We also annotated a second dataset to validate our results. Across the two datasets, the best-performing models achieved maximum accuracies of 90.2% and 72.7% for speech act classification, and 58.8% and 61.1% for search act classification, respectively.
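The abstract outlines a two-stage pipeline: an attention-based network classifies each utterance into a speech act, and the predicted speech acts feed a second classifier that selects the system-level search action. The sketch below illustrates that idea only; the PyTorch framing, class names, dimensions, and label counts (12 speech acts, 8 search actions) are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of the two-stage pipeline described in the abstract:
# an attention-based encoder classifies each utterance into a speech act,
# and the speech-act distribution is fed to a second classifier that
# predicts the system-level search action. All names, dimensions, and
# label sets here are illustrative assumptions, not the authors' code.
import torch
import torch.nn as nn

class SpeechActClassifier(nn.Module):
    def __init__(self, vocab_size=10_000, embed_dim=128, num_speech_acts=12):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        # A single learned query vector attends over token embeddings,
        # pooling them into one utterance vector (simple attention).
        self.attn_query = nn.Parameter(torch.randn(embed_dim))
        self.out = nn.Linear(embed_dim, num_speech_acts)

    def forward(self, token_ids):                      # (batch, seq_len)
        x = self.embed(token_ids)                      # (batch, seq_len, dim)
        scores = x @ self.attn_query                   # (batch, seq_len)
        weights = torch.softmax(scores, dim=-1)        # attention weights
        utterance_vec = (weights.unsqueeze(-1) * x).sum(dim=1)
        return self.out(utterance_vec)                 # speech-act logits

class SearchActClassifier(nn.Module):
    """Maps a speech-act distribution to a system-level search action."""
    def __init__(self, num_speech_acts=12, num_search_acts=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_speech_acts, 64), nn.ReLU(),
            nn.Linear(64, num_search_acts),
        )

    def forward(self, speech_act_logits):
        return self.net(torch.softmax(speech_act_logits, dim=-1))

# Usage: classify a (dummy) utterance, then predict the search action.
tokens = torch.randint(1, 10_000, (1, 20))
speech_logits = SpeechActClassifier()(tokens)
search_logits = SearchActClassifier()(speech_logits)
print(speech_logits.argmax(-1), search_logits.argmax(-1))
```

Passing the speech-act distribution, rather than raw text, into the second stage mirrors the paper's claim that speech acts carry the functional signal (search intent) needed to choose a search action.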
Pages: 119-131
Page count: 13
Related Papers
50 records in total (showing 31-40)
  • [31] Stochastic guided search model for search asymmetries in visual search tasks
    Koike, T
    Saiki, J
    BIOLOGICALLY MOTIVATED COMPUTER VISION, PROCEEDINGS, 2002, 2525 : 408 - 417
  • [32] Spoken Conversational Search: Information Retrieval over a Speech-only Communication Channel
    Trippas, Johanne R.
    SIGIR 2015: PROCEEDINGS OF THE 38TH INTERNATIONAL ACM SIGIR CONFERENCE ON RESEARCH AND DEVELOPMENT IN INFORMATION RETRIEVAL, 2015, : 1067 - 1067
  • [33] SEARCH FOR BASIC ACTIONS
    BAIER, A
    AMERICAN PHILOSOPHICAL QUARTERLY, 1971, 8 (02) : 161 - 170
  • [34] Evaluating user search trails in exploratory search tasks
    Hendahewa, Chathra
    Shah, Chirag
    INFORMATION PROCESSING & MANAGEMENT, 2017, 53 (04) : 905 - 922
  • [35] Exploring the Impact of Search Interface Features on Search Tasks
    Diriye, Abdigani
    Blandford, Ann
    Tombros, Anastasios
    RESEARCH AND ADVANCED TECHNOLOGY FOR DIGITAL LIBRARIES, 2010, 6273 : 184+
  • [36] Caching Historical Embeddings in Conversational Search
    Frieder, Ophir
    Mele, Ida
    Muntean, Cristina Ioana
    Nardini, Franco Maria
    Perego, Raffaele
    Tonellotto, Nicola
    ACM TRANSACTIONS ON THE WEB, 2024, 18 (04)
  • [37] Adaptive utterance rewriting for conversational search
    Mele, Ida
    Muntean, Cristina Ioana
    Nardini, Franco Maria
    Perego, Raffaele
    Tonellotto, Nicola
    Frieder, Ophir
    INFORMATION PROCESSING & MANAGEMENT, 2021, 58 (06)
  • [38] Toward a typology of constative speech acts: Actions beyond evidentiality, epistemic modality, and factuality
    Tantucci, Vittorio
    INTERCULTURAL PRAGMATICS, 2016, 13 (02) : 181 - 209
  • [39] Exploring the economics of conversational search sessions
    Ghosh, Souvick
    Gogoi, Julie
    Chua, Kristen
    ASLIB JOURNAL OF INFORMATION MANAGEMENT, 2024, 76 (04) : 613 - 628
  • [40] A Survey on Conversational Search and Applications in Biomedicine
    Adatrao, Naga Sai Krishna
    Gadireddy, Gowtham Reddy
    Noh, Jiho
    PROCEEDINGS OF THE 2023 ACM SOUTHEAST CONFERENCE, ACMSE 2023, 2023, : 78 - 88