Multi-modal Dialogue System with Sign Language Capabilities

Cited by: 0
Authors
Hruz, M. [1 ]
Campr, P. [1 ]
Krnoul, Z. [1 ]
Zelezny, M. [1 ]
Aran, Oya
Santemiz, Pinar
Affiliations
[1] Univ W Bohemia, Dept Cybernet, Fac Sci Appl, Plzen 30614, Czech Republic
Keywords
sign language; visual tracking; sign categorization
DOI
none
CLC Number
TP301 [Theory, Methods]
Discipline Code
081202
Abstract
This paper presents the design of a multimodal sign-language-enabled dialogue system. Its functionality was tested on a prototype information kiosk for deaf people that provides information about train connections. The input modalities are automatic computer-vision-based sign language recognition, automatic speech recognition, and a touchscreen. The outputs are shown on a screen displaying a 3D signing avatar and on a touchscreen displaying a graphical user interface. The kiosk can be used by both hearing and deaf users in several languages. We focus on the sign language input and output modalities.
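The architecture sketched in the abstract — several input modalities feeding one dialogue manager, which renders every reply on two output channels (GUI text and avatar signing) — can be illustrated with a minimal sketch. All names here (`InputEvent`, `dialogue_manager`, the gloss encoding) are illustrative assumptions, not taken from the paper:

```python
from dataclasses import dataclass

@dataclass
class InputEvent:
    """One recognized user request, regardless of which modality
    produced it: "sign", "speech", or "touch" (illustrative labels)."""
    modality: str
    utterance: str  # recognized text or selected menu item

def dialogue_manager(event: InputEvent) -> dict:
    """Map a request to outputs for both channels: text for the
    touchscreen GUI and a gloss sequence for the 3D signing avatar."""
    if "train" in event.utterance.lower():
        reply = "Next train to Plzen departs at 14:05."
    else:
        reply = "Please choose a train connection query."
    return {
        "gui_text": reply,  # shown on the touchscreen GUI
        # Crude placeholder for avatar input: uppercase gloss tokens
        "avatar_glosses": reply.upper().rstrip(".").split(),
    }

# One dialogue turn; the manager is agnostic to the input modality
out = dialogue_manager(InputEvent(modality="sign", utterance="train to Plzen"))
print(out["gui_text"])
```

The point of the sketch is the decoupling: recognition components normalize each modality into the same event type, so the dialogue logic and the dual-channel rendering stay modality-independent.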
Pages: 265 - 266
Page count: 2
Related Papers
(50 total)
  • [1] Architecture of multi-modal dialogue system
    Fuchs, M
    Hejda, P
    Slavík, P
    [J]. TEXT, SPEECH AND DIALOGUE, PROCEEDINGS, 2000, 1902 : 433 - 438
  • [2] On the use of Multi-Modal Sensing in Sign Language Classification
    Sharma, Sneha
    Gupta, Rinki
    Kumar, Arun
    [J]. 2019 6TH INTERNATIONAL CONFERENCE ON SIGNAL PROCESSING AND INTEGRATED NETWORKS (SPIN), 2019, : 495 - 500
  • [3] Skeleton aware multi-modal sign language recognition
    Jiang, Songyao
    Sun, Bin
    Wang, Lichen
    Bai, Yue
    Li, Kunpeng
    Fu, Yun
    [J]. IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, 2021, : 3408 - 3418
  • [4] Skeleton Aware Multi-modal Sign Language Recognition
    Jiang, Songyao
    Sun, Bin
    Wang, Lichen
    Bai, Yue
    Li, Kunpeng
    Fu, Yun
    [J]. 2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS, CVPRW 2021, 2021, : 3408 - 3418
  • [5] Multi-modal Sign Language Recognition with Enhanced Spatiotemporal Representation
    Xiao, Shiwei
    Fang, Yuchun
    Ni, Lan
    [J]. 2021 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2021,
  • [6] Observing, Coaching and Reflecting: A Multi-modal Natural Language-based Dialogue System in a Learning Context
    Van Helvert, Joy
    Van Rosmalen, Peter
    Borner, Dirk
    Petukhova, Volha
    Alexandersson, Jan
    [J]. WORKSHOP PROCEEDINGS OF THE 11TH INTERNATIONAL CONFERENCE ON INTELLIGENT ENVIRONMENTS, 2015, 19 : 220 - 227
  • [7] Preliminary Study of Multi-modal Dialogue System for Personal Robot with IoTs
    Yamasaki, Shintaro
    Matsui, Kenji
    [J]. DISTRIBUTED COMPUTING AND ARTIFICIAL INTELLIGENCE, 2018, 620 : 286 - 292
  • [8] Multi-Modal Open-Domain Dialogue
    Shuster, Kurt
    Smith, Eric Michael
    Ju, Da
    Weston, Jason
    [J]. 2021 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP 2021), 2021, : 4863 - 4883
  • [9] Automatic multi-modal dialogue scene indexing
    Alatan, AA
    [J]. 2001 INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, VOL III, PROCEEDINGS, 2001, : 374 - 377
  • [10] MLMSign: Multi-lingual multi-modal illumination-invariant sign language recognition
    Sadeghzadeh, Arezoo
    Shah, A. F. M. Shahen
    Islam, Md Baharul
    [J]. INTELLIGENT SYSTEMS WITH APPLICATIONS, 2024, 22