Database of Multichannel In-Ear and Behind-the-Ear Head-Related and Binaural Room Impulse Responses

Cited by: 0
Authors
H. Kayser
S. D. Ewert
J. Anemüller
T. Rohdenburg
V. Hohmann
B. Kollmeier
Affiliation
Universität Oldenburg, Medizinische Physik
Keywords
Impulse Response; Human Head; Sound Field; Anechoic Chamber; Communication Situation
DOI
not available
Abstract
An eight-channel database of head-related impulse responses (HRIRs) and binaural room impulse responses (BRIRs) is introduced. The impulse responses (IRs) were measured with three-channel behind-the-ear (BTE) hearing aids and an in-ear microphone at both ears of a human head-and-torso simulator. The database is intended as a tool for the evaluation of multichannel hearing aid algorithms in hearing aid research. In addition to the HRIRs derived from measurements in an anechoic chamber, sets of BRIRs are provided for multiple realistic head and sound-source positions in four natural environments with different reverberation times, reflecting daily-life communication situations. For comparison, analytically derived IRs for a rigid acoustic sphere were computed at the BTE microphone positions, and their differences from the measured HRIRs were examined. In each of the real-world environments, the natural acoustic background of the scene was also recorded on all eight channels. Overall, the database allows realistic simulated sound fields to be constructed for hearing instrument research and, consequently, allows a realistic evaluation of hearing instrument algorithms.
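For the rigid-sphere comparison mentioned in the abstract, the analytical responses follow from the classical solution for scattering of a plane wave by a rigid sphere. As a hedged sketch of that standard result (the exact normalization, source-distance handling, and time convention used by the authors may differ), the transfer function from the free field to a point on the sphere surface can be written as

```latex
H(\mu,\theta) = \frac{1}{\mu^{2}} \sum_{m=0}^{\infty} i^{\,m+1}\,(2m+1)\,
                \frac{P_m(\cos\theta)}{h_m'(\mu)},
\qquad \mu = k a ,
```

where a is the sphere radius, k the wavenumber, θ the angle between the incidence direction and the surface point (here, a BTE or in-ear microphone position), P_m the Legendre polynomials, and h_m the spherical Hankel functions of the first kind (e^{-iωt} convention); the analytical IRs then follow by an inverse Fourier transform over frequency.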
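The closing sentence of the abstract describes the main use case: constructing simulated multichannel sound fields by convolving dry (anechoic) source material with the eight-channel IRs and mixing in the recorded ambient background of the same scene. The following is a minimal sketch of such a simulation, assuming the IRs and recordings have already been loaded as NumPy arrays; the array shapes, variable names, and broadband SNR definition are illustrative assumptions, not part of the database's distribution format.

```python
import numpy as np
from scipy.signal import fftconvolve

def simulate_scene(dry_source, irs, ambient, snr_db):
    """Construct a multichannel mixture for one source position.

    dry_source: (n_samples,) mono, anechoic source signal
    irs:        (ir_len, n_ch) impulse responses for one position (e.g. n_ch = 8)
    ambient:    (n_amb, n_ch) recorded background noise of the same scene
    snr_db:     broadband SNR of the spatialized source over the background
    """
    n_ch = irs.shape[1]
    # Spatialize the dry source: one convolution per microphone channel.
    wet = np.stack([fftconvolve(dry_source, irs[:, ch]) for ch in range(n_ch)],
                   axis=1)
    # Repeat/trim the ambient recording to the length of the wet signal.
    reps = int(np.ceil(wet.shape[0] / ambient.shape[0]))
    amb = np.tile(ambient, (reps, 1))[:wet.shape[0], :]
    # Scale the background so that the broadband SNR equals snr_db.
    gain = np.sqrt(np.mean(wet**2) / (np.mean(amb**2) * 10 ** (snr_db / 10)))
    return wet + gain * amb
```

The same routine applies whether the anechoic HRIRs or one of the reverberant BRIR sets is used; the resulting eight-channel mixture is then fed to the multichannel hearing-aid algorithm under test.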