An autonomous agent for negotiation with multiple communication channels using parametrized deep Q-network

Cited by: 7
Authors
Chen, Siqi [1 ]
Su, Ran [1 ]
Affiliations
[1] Tianjin Univ, Coll Intelligence & Comp, Tianjin 300072, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
multi-agent systems; cooperative games; reinforcement learning; deep learning; human-agent interaction;
DOI
10.3934/mbe.2022371
Chinese Library Classification
Q [Biological Sciences];
Discipline Classification Codes
07; 0710; 09;
Abstract
Agent-based negotiation aims to automate the negotiation process on behalf of humans to save time and effort. While successful, current research largely restricts communication between negotiation agents to the exchange of offers. Beyond this simple channel, many real-world settings also involve linguistic channels through which negotiators can express intentions, ask questions, and discuss plans. The information bandwidth of traditional negotiation is therefore limited by its restricted action space. Against this background, a negotiation agent called MCAN (multiple channel automated negotiation) is described that models negotiation with multiple communication channels as a Markov decision process with a hybrid action space. The agent employs a novel deep reinforcement learning technique to learn an efficient strategy for interacting with different opponents, i.e., other negotiation agents or human players. Specifically, the agent leverages parametrized deep Q-networks (P-DQNs), which handle hybrid discrete-continuous action spaces, thereby learning a comprehensive negotiation strategy that integrates linguistic communication skills with bidding strategies. Extensive experimental results show that the MCAN agent outperforms both other agents and human players in terms of average utility. A user study also reports high human perception ratings. Moreover, a comparative experiment shows how the P-DQN algorithm improves the performance of the MCAN agent.
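To make the hybrid-action idea in the abstract concrete: in a P-DQN, an actor head proposes a continuous parameter for every discrete action, and a Q-head then scores each discrete action conditioned on those parameters. The following is a minimal numpy sketch of that action-selection step only (not the paper's implementation); the state features, the three discrete acts, the single continuous parameter per act, and the random linear layers standing in for trained networks are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

STATE_DIM = 4    # hypothetical negotiation state features
N_ACTS = 3       # hypothetical discrete acts, e.g. offer / ask / discuss
PARAM_DIM = 1    # one continuous parameter per act, e.g. a target utility

# Random linear layers stand in for the trained actor and Q-networks.
W_actor = rng.normal(size=(N_ACTS * PARAM_DIM, STATE_DIM))
W_q = rng.normal(size=(N_ACTS, STATE_DIM + N_ACTS * PARAM_DIM))

def select_action(state):
    # Actor head: continuous parameters x_k(s) for every discrete act,
    # squashed into (0, 1) with a sigmoid.
    params = 1.0 / (1.0 + np.exp(-(W_actor @ state)))
    # Q head: Q(s, k, x) for each discrete act k, conditioned on all params.
    q_values = W_q @ np.concatenate([state, params])
    k = int(np.argmax(q_values))                     # greedy discrete act
    x_k = params[k * PARAM_DIM:(k + 1) * PARAM_DIM]  # its parameter(s)
    return k, x_k

k, x = select_action(rng.normal(size=STATE_DIM))
```

The returned pair (k, x) is the hybrid action: a discrete communicative act together with its continuous parameter, which is what lets one policy cover both the linguistic channel and the bidding strategy.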
Pages: 7933 - 7951
Page count: 19
Related Papers
(50 records in total)
  • [11] Visual Analysis of Deep Q-network
    Seng, Dewen
    Zhang, Jiaming
    Shi, Xiaoying
    KSII TRANSACTIONS ON INTERNET AND INFORMATION SYSTEMS, 2021, 15 (03): 853 - 873
  • [12] Stochastic Double Deep Q-Network
    Lv, Pingli
    Wang, Xuesong
    Cheng, Yuhu
    Duan, Ziming
    IEEE ACCESS, 2019, 7 : 79446 - 79454
  • [13] Double Deep Q-Network with a Dual-Agent for Traffic Signal Control
    Gu, Jianfeng
    Fang, Yong
    Sheng, Zhichao
    Wen, Peng
    APPLIED SCIENCES-BASEL, 2020, 10 (05)
  • [14] Sensing and Navigation for Multiple Mobile Robots Based on Deep Q-Network
    Dai, Yanyan
    Yang, Seokho
    Lee, Kidong
    REMOTE SENSING, 2023, 15 (19)
  • [15] Investigating Deep Q-Network Agent Sensibility to Texture Changes on FPS Games
    de Sousa Serafim, Paulo Bruno
    Barbosa Nogueira, Yuri Lenon
    Vidal, Creto Augusto
    Cavalcante-Neto, Joaquim Bento
    Ferrer Filho, Romulo
    2020 19TH BRAZILIAN SYMPOSIUM ON COMPUTER GAMES AND DIGITAL ENTERTAINMENT (SBGAMES 2020), 2020, : 117 - 125
  • [16] Highly accurate map construction and deep Q-network for autonomous driving and smart transportation
    Sun, Xiaowei
    Dou, Huili
    Zhou, Zhiguo
    COMPUTERS & ELECTRICAL ENGINEERING, 2023, 110
  • [17] Autonomous Robot Navigation System with Learning Based on Deep Q-Network and Topological Maps
    Kato, Yuki
    Kamiyama, Koji
    Morioka, Kazuyuki
    2017 IEEE/SICE INTERNATIONAL SYMPOSIUM ON SYSTEM INTEGRATION (SII), 2017, : 1040 - 1046
  • [18] End-to-End Autonomous Driving Through Dueling Double Deep Q-Network
    Peng, Baiyu
    Sun, Qi
    Li, Shengbo Eben
    Kum, Dongsuk
    Yin, Yuming
    Wei, Junqing
    Gu, Tianyu
    AUTOMOTIVE INNOVATION, 2021, 4 (03) : 328 - 337
  • [19] Behavioral-Adaptive Deep Q-Network for Autonomous Driving Decisions in Heavy Traffic
    Liu, Zhicheng
    Yu, Hong
    TRANSPORTATION RESEARCH RECORD, 2024
  • [20] UAV Autonomous Navigation for Wireless Powered Data Collection with Onboard Deep Q-Network
    LI Yuting
    DING Yi
    GAO Jiangchuan
    LIU Yusha
    HU Jie
    YANG Kun
    ZTE Communications, 2023, 21 (02) : 80 - 87