Human Augmented Cognition Based on Integration of Visual and Auditory Information

Cited: 0
Authors
Won, Woong Jae [1 ]
Lee, Wono [1 ]
Ban, Sang-Woo [2 ]
Kim, Minook [3 ]
Park, Hyung-Min [3 ]
Lee, Minho [1 ]
Affiliations
[1] Kyungpook Natl Univ, Sch Elect Engn & Comp Sci, 1370 Sankyuk Dong, Taegu 702701, South Korea
[2] Dongguk Univ, Dept Informat & Commun Engn, Gyeongbuk 780714, South Korea
[3] Sogang Univ, Dept Elect Engn, Seoul 121742, South Korea
Funding
National Research Foundation of Singapore;
Keywords
human augmented cognition; human identification; multiple sensory integration model; visual and auditory; adaptive boosting; selective attention; SELECTIVE ATTENTION; RECOGNITION;
DOI
Not available
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In this paper, we propose a new human identification model that fuses multiple sensory modalities to support human augmented cognition. The proposed model uses facial features as the visual cue and mel-frequency cepstral coefficients (MFCCs) as the auditory cue for identifying a person, and an adaboosting model performs identification on the integrated visual and auditory features. Facial form features are obtained by principal component analysis (PCA) of the face region, which is localized by an Adaboost detector combined with a skin-color preferable attention model, while MFCCs are extracted from the person's speech. By letting the visual and auditory cues work complementarily, the proposed integration model aims to keep identification reliable even when one sensory channel is partly distorted. A human augmented cognition system incorporating the proposed identification model is implemented as a goggle-type device that presents information, such as the profile of an unknown person, based on the identification result. Experimental results show that the proposed model performs human identification plausibly in an indoor meeting situation.
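
Note: the abstract describes a feature-level fusion pipeline (PCA-based face features, MFCC speech features, boosted classification). The minimal sketch below illustrates that kind of pipeline with scikit-learn; the array shapes, feature dimensions, and synthetic data are illustrative assumptions rather than the authors' implementation, and the skin-color preferable attention / face localization stage is omitted.

    # Sketch: fuse PCA face features with MFCCs and classify identities with AdaBoost.
    # Shapes and data below are placeholders, not the paper's actual setup.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.ensemble import AdaBoostClassifier

    # Assumed preprocessed training data:
    #   face_imgs : (n_samples, 32*32) flattened, localized face crops
    #   mfcc_feats: (n_samples, 13)    mean MFCC vector per utterance
    #   labels    : (n_samples,)       person identities
    face_imgs = np.random.rand(200, 32 * 32)
    mfcc_feats = np.random.rand(200, 13)
    labels = np.random.randint(0, 5, size=200)

    # Visual features: project localized face regions onto principal components.
    pca = PCA(n_components=20)
    face_feats = pca.fit_transform(face_imgs)

    # Feature-level integration: concatenate visual and auditory descriptors.
    fused = np.hstack([face_feats, mfcc_feats])

    # Boosted classifier over the fused features (stands in for the paper's
    # adaboosting identification model).
    clf = AdaBoostClassifier(n_estimators=100)
    clf.fit(fused, labels)

    # Identify a person from a new face crop and the MFCCs of their speech.
    new_face = pca.transform(np.random.rand(1, 32 * 32))
    new_mfcc = np.random.rand(1, 13)
    print("Predicted identity:", clf.predict(np.hstack([new_face, new_mfcc]))[0])
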
Pages: 547 / +
Number of pages: 3