An autonomous agent for negotiation with multiple communication channels using parametrized deep Q-network

Cited by: 7
Authors
Chen, Siqi [1 ]
Su, Ran [1 ]
Affiliations
[1] Tianjin Univ, Coll Intelligence & Comp, Tianjin 300072, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
multi-agent systems; cooperative games; reinforcement learning; deep learning; human-agent interaction;
DOI
10.3934/mbe.2022371
Chinese Library Classification (CLC)
Q [Biological Sciences];
Subject classification codes
07 ; 0710 ; 09 ;
Abstract
Agent-based negotiation aims to automate the negotiation process on behalf of humans in order to save time and effort. While successful, current research largely restricts communication between negotiation agents to the exchange of offers. Beyond this simple mode, many real-world settings involve linguistic channels through which negotiators can express intentions, ask questions, and discuss plans; the information bandwidth of traditional automated negotiation is therefore limited by its narrow action space. Against this background, a negotiation agent called MCAN (multiple channel automated negotiation) is described that models negotiation with multiple communication channels as a Markov decision process with a hybrid action space. The agent employs a deep reinforcement learning technique to generate an efficient strategy that can interact with different opponents, i.e., other negotiation agents or human players. Specifically, the agent leverages parametrized deep Q-networks (P-DQNs), which handle a hybrid discrete-continuous action space, to learn a comprehensive negotiation strategy that integrates linguistic communication skills with bidding strategies. Extensive experimental results show that the MCAN agent outperforms other agents as well as human players in terms of average utility. A user study further reports high human-perception ratings of the agent. Moreover, a comparative experiment shows how the P-DQN algorithm improves the performance of the MCAN agent.
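The abstract describes a P-DQN acting over a hybrid action space, i.e., a discrete choice (such as a linguistic message type) paired with a continuous parameter (such as an offer value). The sketch below illustrates, in PyTorch, one common way such a network is structured; it is a minimal illustration, not the authors' implementation, and all class names, layer sizes, and dimensions are assumptions made for the example.

```python
# Minimal P-DQN-style sketch for a hybrid discrete-continuous action space.
# Assumed names and dimensions; not taken from the paper.
import torch
import torch.nn as nn

class PDQN(nn.Module):
    def __init__(self, state_dim: int, n_discrete: int, param_dim: int):
        super().__init__()
        # Parameter network: proposes a continuous parameter for every discrete action.
        self.param_net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, n_discrete * param_dim), nn.Tanh(),
        )
        # Q-network: scores each discrete action given the state and all proposed parameters.
        self.q_net = nn.Sequential(
            nn.Linear(state_dim + n_discrete * param_dim, 64), nn.ReLU(),
            nn.Linear(64, n_discrete),
        )
        self.n_discrete, self.param_dim = n_discrete, param_dim

    def forward(self, state: torch.Tensor):
        params = self.param_net(state)                              # (batch, n_discrete * param_dim)
        q_values = self.q_net(torch.cat([state, params], dim=-1))   # (batch, n_discrete)
        return q_values, params.view(-1, self.n_discrete, self.param_dim)

# Usage: select the discrete action with the highest Q-value and its continuous parameter.
net = PDQN(state_dim=8, n_discrete=4, param_dim=1)
state = torch.randn(1, 8)
q, params = net(state)
k = q.argmax(dim=-1)                # chosen discrete action (e.g., message type)
chosen_param = params[0, k.item()]  # its continuous parameter (e.g., offer value)
```

In this formulation the Q-network is trained with standard temporal-difference targets over the discrete actions, while the parameter network is updated to maximize the resulting Q-values, which is how P-DQN couples the two halves of the hybrid action.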
Pages: 7933-7951
Number of pages: 19
Related papers
50 records in total
  • [41] Optimal Wireless Information and Power Transfer Using Deep Q-Network
    Xing, Yuan
    Pan, Haowen
    Xu, Bin
    Tapparello, Cristiano
    Shi, Wei
    Liu, Xuejun
    Zhao, Tianchi
    Lu, Timothy
    WIRELESS POWER TRANSFER, 2021, 2021
  • [42] Noisy Dueling Double Deep Q-Network algorithm for autonomous underwater vehicle path planning
    Liao, Xu
    Li, Le
    Huang, Chuangxia
    Zhao, Xian
    Tan, Shumin
    FRONTIERS IN NEUROROBOTICS, 2024, 18
  • [43] Deep Attention Q-Network for Personalized Treatment Recommendation
    Ma, Simin
    Lee, Junghwan
    Serban, Nicoleta
    Yang, Shihao
    2023 23RD IEEE INTERNATIONAL CONFERENCE ON DATA MINING WORKSHOPS, ICDMW 2023, 2023, : 329 - 337
  • [44] Controlling a cargo ship without human experience using deep Q-network
    Chen, Chen
    Ma, Feng
    Liu, Jialun
    Negenborn, Rudy R.
    Liu, Yuanchang
    Yan, Xinping
    JOURNAL OF INTELLIGENT & FUZZY SYSTEMS, 2020, 39 (05) : 7363 - 7379
  • [45] Intelligent Traffic Signal Phase Distribution System Using Deep Q-Network
    Joo, Hyunjin
    Lim, Yujin
    APPLIED SCIENCES-BASEL, 2022, 12 (01):
  • [46] Partially Observable Multi-agent RL with Enhanced Deep Distributed Recurrent Q-Network
    Fan, Longtao
    Zhang, Sen
    Liu, Yuan-yuan
    2018 5TH INTERNATIONAL CONFERENCE ON INFORMATION SCIENCE AND CONTROL ENGINEERING (ICISCE 2018), 2018, : 375 - 379
  • [47] Analysis of user pairing non-orthogonal multiple access network using deep Q-network algorithm for defense applications
    Ravi, Shankar
    Kulkarni, Gopal Ramchandra
    Ray, Samrat
    Ravisankar, Malladi
    Krishnan, V. Gokula
    Chakravarthy, D. S. K.
    JOURNAL OF DEFENSE MODELING AND SIMULATION-APPLICATIONS METHODOLOGY TECHNOLOGY-JDMS, 2022, 20 (03): 303 - 316
  • [48] Deep Q-network learning-based active speed management under autonomous driving environments
    Kang, Kawon
    Park, Nuri
    Park, Juneyoung
    Abdel-Aty, Mohamed
    COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING, 2024,
  • [49] Accurate Price Prediction by Double Deep Q-Network
    Feizi-Derakhshi, Mohammad-Reza
    Lotfimanesh, Bahram
    Amani, Omid
    INTELIGENCIA ARTIFICIAL-IBEROAMERICAN JOURNAL OF ARTIFICIAL INTELLIGENCE, 2024, 27 (74): 12 - 21
  • [50] Train Scheduling with Deep Q-Network: A Feasibility Test
    Gong, Intaek
    Oh, Sukmun
    Min, Yunhong
    APPLIED SCIENCES-BASEL, 2020, 10 (23): 1 - 14