Personalization of Hearing Aid Compression by Human-in-the-Loop Deep Reinforcement Learning

Cited by: 12
Authors
Alamdari, Nasim [1 ]
Lobarinas, Edward [2 ]
Kehtarnavaz, Nasser [1 ]
Affiliations
[1] Univ Texas Dallas, Elect & Comp Engn Dept, Richardson, TX 75080 USA
[2] Univ Texas Dallas, Callier Ctr Commun Disorders, Richardson, TX 75080 USA
Source
IEEE ACCESS | 2020 / Vol. 8
Keywords
Personalized audio compression; deep reinforcement learning; human-in-the-loop personalization; personalized hearing aid; hearing aid compression;
DOI
10.1109/ACCESS.2020.3035728
Chinese Library Classification (CLC)
TP [Automation and computer technology];
Discipline code
0812;
Abstract
Existing prescriptive compression strategies used in hearing aid fitting are designed based on gain averages from a group of users, which may not be optimal for a specific user. Nearly half of hearing aid users prefer settings that differ from the commonly prescribed ones. This paper presents a human-in-the-loop deep reinforcement learning approach that personalizes hearing aid compression to achieve improved hearing perception. The developed approach learns a specific user's hearing preferences in order to optimize compression based on that user's feedback. Both simulation and subject testing results are reported. These results demonstrate a proof of concept for achieving personalized compression via human-in-the-loop deep reinforcement learning.
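The learning loop the abstract describes can be illustrated with a minimal sketch: a bandit-style preference learner that adjusts a single compression setting from (simulated) user feedback. The paper itself uses deep reinforcement learning over a richer state and action space; the action set, reward model, learning rate, and `user_feedback` stand-in below are all hypothetical, chosen only to show the feedback-driven update.

```python
import random

# Candidate compression ratios the agent can try (hypothetical values).
ACTIONS = [1.5, 2.0, 2.5, 3.0, 3.5]
q = {a: 0.0 for a in ACTIONS}   # running preference estimate per setting
alpha, epsilon = 0.2, 0.1       # learning rate and exploration probability

def user_feedback(ratio, preferred=2.5):
    """Stand-in for the human in the loop: +1 if the tried ratio is close to
    the (unknown) preferred one, -1 otherwise. A real system would query
    the listener instead of using this simulated preference."""
    return 1.0 if abs(ratio - preferred) <= 0.25 else -1.0

random.seed(0)
for _ in range(500):
    # Epsilon-greedy: mostly exploit the best-rated setting, sometimes explore.
    if random.random() < epsilon:
        a = random.choice(ACTIONS)
    else:
        a = max(q, key=q.get)
    r = user_feedback(a)
    q[a] += alpha * (r - q[a])  # incremental update toward observed feedback

best = max(q, key=q.get)        # converges to the simulated user's preference
```

With the simulated listener above, repeated feedback pulls the estimate for the preferred ratio toward +1 and the others toward -1, so `best` ends at the user's preferred setting; the deep RL agent in the paper plays the same role over a far larger configuration space.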
Pages: 203503-203515
Number of pages: 13
Related papers
50 records in total
  • [1] Human-in-the-loop Reinforcement Learning
    Liang, Huanghuang
    Yang, Lu
    Cheng, Hong
    Tu, Wenzhe
    Xu, Mengjie
    [J]. 2017 CHINESE AUTOMATION CONGRESS (CAC), 2017, : 4511 - 4518
  • [2] Thermal comfort management leveraging deep reinforcement learning and human-in-the-loop
    Cicirelli, Franco
    Guerrieri, Antonio
    Mastroianni, Carlo
    Spezzano, Giandomenico
    Vinci, Andrea
    [J]. PROCEEDINGS OF THE 2020 IEEE INTERNATIONAL CONFERENCE ON HUMAN-MACHINE SYSTEMS (ICHMS), 2020, : 160 - 165
  • [3] Deep Reinforcement Active Learning for Human-In-The-Loop Person Re-Identification
    Liu, Zimo
    Wang, Jingya
    Gong, Shaogang
    Lu, Huchuan
    Tao, Dacheng
    [J]. 2019 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2019), 2019, : 6121 - 6130
  • [4] Value Driven Representation for Human-in-the-Loop Reinforcement Learning
    Keramati, Ramtin
    Brunskill, Emma
    [J]. ACM UMAP '19: PROCEEDINGS OF THE 27TH ACM CONFERENCE ON USER MODELING, ADAPTATION AND PERSONALIZATION, 2019, : 176 - 180
  • [5] Reinforcement Learning Requires Human-in-the-Loop Framing and Approaches
    Taylor, Matthew E.
    [J]. HHAI 2023: AUGMENTING HUMAN INTELLECT, 2023, 368 : 351 - 360
  • [6] Where to Add Actions in Human-in-the-Loop Reinforcement Learning
    Mandel, Travis
    Liu, Yun-En
    Brunskill, Emma
    Popovic, Zoran
    [J]. THIRTY-FIRST AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2017, : 2322 - 2328
  • [7] End-to-end grasping policies for human-in-the-loop robots via deep reinforcement learning
    Sharif, Mohammadreza
    Erdogmus, Deniz
    Amato, Christopher
    Padir, Taskin
    [J]. 2021 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA 2021), 2021, : 2768 - 2774
  • [8] HEIDL: Learning Linguistic Expressions with Deep Learning and Human-in-the-Loop
    Yang, Yiwei
    Kandogan, Eser
    Li, Yunyao
    Lasecki, Walter S.
    Sen, Prithviraj
    [J]. PROCEEDINGS OF THE 57TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS: SYSTEM DEMONSTRATIONS, (ACL 2019), 2019, : 135 - 140
  • [9] ASHA: Assistive Teleoperation via Human-in-the-Loop Reinforcement Learning
    Chen, Sean
    Gao, Jensen
    Reddy, Siddharth
    Berseth, Glen
    Dragan, Anca D.
    Levine, Sergey
    [J]. 2022 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION, ICRA 2022, 2022, : 7505 - 7512
  • [10] Optimal Volt/Var Control for Unbalanced Distribution Networks With Human-in-the-Loop Deep Reinforcement Learning
    Sun, Xianzhuo
    Xu, Zhao
    Qiu, Jing
    Liu, Huichuan
    Wu, Huayi
    Tao, Yuechuan
    [J]. IEEE TRANSACTIONS ON SMART GRID, 2024, 15 (03) : 2639 - 2651