An autonomous agent for negotiation with multiple communication channels using parametrized deep Q-network

Cited by: 7
Authors
Chen, Siqi [1 ]
Su, Ran [1 ]
Affiliations
[1] Tianjin Univ, Coll Intelligence & Comp, Tianjin 300072, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
multi-agent systems; cooperative games; reinforcement learning; deep learning; human-agent interaction;
DOI
10.3934/mbe.2022371
Chinese Library Classification
Q [Biological Sciences]
Discipline codes
07; 0710; 09
Abstract
Agent-based negotiation aims to automate the negotiation process on behalf of humans, saving time and effort. While successful, current research largely restricts communication between negotiation agents to offer exchange. Beyond this simple manner of interaction, many real-world settings involve linguistic channels through which negotiators can express intentions, ask questions, and discuss plans; the information bandwidth of traditional negotiation is therefore limited by its restricted action space. Against this background, a negotiation agent called MCAN (multiple channel automated negotiation) is described that models negotiation with multiple communication channels as a Markov decision process with a hybrid action space. The agent employs a novel deep reinforcement learning technique to learn an efficient strategy that can interact with different opponents, i.e., other negotiation agents or human players. Specifically, the agent leverages parametrized deep Q-networks (P-DQNs), which handle a hybrid discrete-continuous action space, thereby learning a comprehensive negotiation strategy that integrates linguistic communication skills with bidding strategies. Extensive experimental results show that the MCAN agent outperforms both other agents and human players in terms of average utility. A user study also reports high human-perception ratings of the agent. Moreover, a comparative experiment shows how the P-DQN algorithm improves the performance of the MCAN agent.
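The abstract's key technical point is that P-DQN selects over a hybrid action space: a discrete action (e.g., which communication move to make) paired with a continuous parameter (e.g., the concession level of a bid). The following is a minimal sketch of that action-selection scheme, not the paper's implementation; the state features, network shapes, linear layers, and random weights are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of P-DQN action selection for a hybrid action space.
# A parameter network proposes one continuous parameter per discrete
# action; a Q-network then scores each (discrete action, parameter) pair.

rng = np.random.default_rng(0)

STATE_DIM = 4      # e.g., time, utility of last opponent offer, ... (assumed)
N_DISCRETE = 3     # number of discrete communication actions (assumed)
PARAM_DIM = 1      # one continuous parameter per discrete action (assumed)

# Parameter network x(s; theta): maps state -> a parameter for every action.
W_x = rng.normal(size=(N_DISCRETE * PARAM_DIM, STATE_DIM))

def param_net(state):
    # tanh keeps each parameter in [-1, 1], a common normalization choice
    return np.tanh(W_x @ state).reshape(N_DISCRETE, PARAM_DIM)

# Q-network Q(s, k, x_k; w): one score per discrete action, conditioned
# on that action's own continuous parameter.
W_q = rng.normal(size=(N_DISCRETE, STATE_DIM + PARAM_DIM))

def q_values(state, params):
    return np.array([W_q[k] @ np.concatenate([state, params[k]])
                     for k in range(N_DISCRETE)])

def select_action(state, epsilon=0.1):
    """Epsilon-greedy selection over the hybrid action space."""
    params = param_net(state)
    if rng.random() < epsilon:
        k = int(rng.integers(N_DISCRETE))       # explore: random discrete action
    else:
        k = int(np.argmax(q_values(state, params)))  # exploit: best-scored action
    return k, params[k]   # discrete action and its continuous parameter

state = rng.normal(size=STATE_DIM)
k, x_k = select_action(state, epsilon=0.0)
print(k, x_k)
```

Training would then update the Q-network with a standard temporal-difference loss and the parameter network by ascending the Q-values it induces, which is the gradient structure that distinguishes P-DQN from a purely discrete DQN.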
Pages: 7933-7951
Page count: 19
Related papers
50 total
  • [31] Dynamic fusion for ensemble of deep Q-network
    Chan, Patrick P. K.
    Xiao, Meng
    Qin, Xinran
    Kees, Natasha
    INTERNATIONAL JOURNAL OF MACHINE LEARNING AND CYBERNETICS, 2021, 12 (04) : 1031 - 1040
  • [32] Trax Solver on Zynq with Deep Q-Network
    Sugimoto, Naru
    Mitsuishi, Takuji
    Kaneda, Takahiro
    Tsuruta, Chiharu
    Sakai, Ryotaro
    Shimura, Hideki
    Amano, Hideharu
    2015 INTERNATIONAL CONFERENCE ON FIELD PROGRAMMABLE TECHNOLOGY (FPT), 2015, : 272 - 275
  • [33] Application of Deep Q-Network in Portfolio Management
    Gao, Ziming
    Gao, Yuan
    Hu, Yi
    Jiang, Zhengyong
    Su, Jionglong
    2020 5TH IEEE INTERNATIONAL CONFERENCE ON BIG DATA ANALYTICS (IEEE ICBDA 2020), 2020, : 268 - 275
  • [34] Proposal of a Deep Q-network with Profit Sharing
    Miyazaki, Kazuteru
    8TH ANNUAL INTERNATIONAL CONFERENCE ON BIOLOGICALLY INSPIRED COGNITIVE ARCHITECTURES, BICA 2017 (EIGHTH ANNUAL MEETING OF THE BICA SOCIETY), 2018, 123 : 302 - 307
  • [35] Social Attentive Deep Q-network for Recommendation
    Lei, Yu
    Wang, Zhitao
    Li, Wenjie
    Pei, Hongbin
    PROCEEDINGS OF THE 42ND INTERNATIONAL ACM SIGIR CONFERENCE ON RESEARCH AND DEVELOPMENT IN INFORMATION RETRIEVAL (SIGIR '19), 2019, : 1189 - 1192
  • [36] Averaged Weighted Double Deep Q-Network
    Wu, Jinjin
    Liu, Quan
    Chen, Song
    Yan, Yan
    Jisuanji Yanjiu yu Fazhan/Computer Research and Development, 2020, 57 (03): : 576 - 589
  • [37] Timeslot Scheduling with Reinforcement Learning Using a Double Deep Q-Network
    Ryu, Jihye
    Kwon, Juhyeok
    Ryoo, Jeong-Dong
    Cheung, Taesik
    Joung, Jinoo
    ELECTRONICS, 2023, 12 (04)
  • [38] Microgrid energy management using deep Q-network reinforcement learning
    Alabdullah, Mohammed H.
    Abido, Mohammad A.
    ALEXANDRIA ENGINEERING JOURNAL, 2022, 61 (11) : 9069 - 9078
  • [39] Obstacle rearrangement for robotic manipulation in clutter using a deep Q-network
    Sanghun Cheong
    Brian Y. Cho
    Jinhwi Lee
    Jeongho Lee
    Dong Hwan Kim
    Changjoo Nam
    Chang-hwan Kim
    Sung-kee Park
    Intelligent Service Robotics, 2021, 14 : 549 - 561
  • [40] Energy Optimization of Hybrid electric Vehicles Using Deep Q-Network
    Yokoyama, Takashi
    Ohmori, Hiromitsu
    2022 61ST ANNUAL CONFERENCE OF THE SOCIETY OF INSTRUMENT AND CONTROL ENGINEERS (SICE), 2022, : 827 - 832