An autonomous agent for negotiation with multiple communication channels using parametrized deep Q-network

Cited: 7
Authors
Chen, Siqi [1 ]
Su, Ran [1 ]
Affiliations
[1] Tianjin Univ, Coll Intelligence & Comp, Tianjin 300072, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
multi-agent systems; cooperative games; reinforcement learning; deep learning; human-agent interaction;
DOI
10.3934/mbe.2022371
Chinese Library Classification
Q [Biological Sciences];
Subject Classification Codes
07 ; 0710 ; 09 ;
Abstract
Agent-based negotiation aims to automate the negotiation process on behalf of humans to save time and effort. While successful, current research largely restricts communication between negotiation agents to the exchange of offers. Beyond this simple form of interaction, many real-world settings also involve linguistic channels through which negotiators can express intentions, ask questions, and discuss plans; the information bandwidth of traditional negotiation is therefore restricted and grounded in the action space. Against this background, a negotiation agent called MCAN (multiple channel automated negotiation) is described that models negotiation with multiple communication channels as a Markov decision problem with a hybrid action space. The agent employs a novel deep reinforcement learning technique to generate an efficient strategy that can interact with different opponents, i.e., other negotiation agents or human players. Specifically, the agent leverages parametrized deep Q-networks (P-DQNs), which handle hybrid discrete-continuous action spaces, thereby learning a comprehensive negotiation strategy that integrates linguistic communication skills and bidding strategies. Extensive experimental results show that the MCAN agent outperforms other agents as well as human players in terms of average utility. A high human perception evaluation is also reported based on a user study. Moreover, a comparative experiment shows how the P-DQN algorithm promotes the performance of the MCAN agent.
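To illustrate the hybrid action space described in the abstract, the following is a minimal sketch of how a P-DQN-style policy could couple a discrete linguistic act with a continuous bid parameter. It assumes PyTorch; the dimensions (STATE_DIM, NUM_MESSAGES, PARAM_DIM) and the network shapes are illustrative assumptions, not details from the paper, and the training losses (temporal-difference loss for the Q-network, gradient ascent on Q for the parameter network) are omitted.

```python
# Sketch of a P-DQN-style hybrid action policy (assumed architecture, not the paper's code).
import torch
import torch.nn as nn

STATE_DIM = 10        # hypothetical negotiation-state features
NUM_MESSAGES = 4      # hypothetical number of discrete linguistic acts
PARAM_DIM = 1         # one continuous bid parameter per discrete act

class ParamNet(nn.Module):
    """Maps a state to a continuous parameter for every discrete action."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, NUM_MESSAGES * PARAM_DIM), nn.Sigmoid(),  # bids scaled to [0, 1]
        )
    def forward(self, state):
        return self.net(state).view(-1, NUM_MESSAGES, PARAM_DIM)

class QNet(nn.Module):
    """Scores each discrete action given the state and all action parameters."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + NUM_MESSAGES * PARAM_DIM, 64), nn.ReLU(),
            nn.Linear(64, NUM_MESSAGES),
        )
    def forward(self, state, params):
        x = torch.cat([state, params.flatten(start_dim=1)], dim=-1)
        return self.net(x)

def select_action(state, param_net, q_net):
    """Greedy hybrid action: argmax over discrete acts plus that act's parameter."""
    with torch.no_grad():
        params = param_net(state)        # (batch, NUM_MESSAGES, PARAM_DIM)
        q_values = q_net(state, params)  # (batch, NUM_MESSAGES)
        k = int(q_values.argmax(dim=-1))
        return k, params[0, k]           # discrete linguistic act + continuous bid

# Example: one greedy decision from a random negotiation state.
state = torch.randn(1, STATE_DIM)
act, bid = select_action(state, ParamNet(), QNet())
print(act, bid)
```

Under this scheme, the agent's single decision jointly covers which message to send (discrete) and what offer value accompanies it (continuous), which is the core of the hybrid action space the abstract refers to.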
Pages: 7933 - 7951
Number of pages: 19
Related Papers
50 in total
  • [1] Deep Deformable Q-Network: An Extension of Deep Q-Network
    Jin, Beibei
    Yang, Jianing
    Huang, Xiangsheng
    Khan, Dawar
    2017 IEEE/WIC/ACM INTERNATIONAL CONFERENCE ON WEB INTELLIGENCE (WI 2017), 2017, : 963 - 966
  • [2] A deep reinforcement learning-based agent for negotiation with multiple communication channels
    Gao, Xiaoyang
    Chen, Siqi
    Zheng, Yan
    Hao, Jianye
    2021 IEEE 33RD INTERNATIONAL CONFERENCE ON TOOLS WITH ARTIFICIAL INTELLIGENCE (ICTAI 2021), 2021, : 868 - 872
  • [3] Deep Q-network implementation for simulated autonomous vehicle control
    Quek, Yang Thee
    Koh, Li Ling
    Koh, Ngiap Tiam
    Tso, Wai Ann
    Woo, Wai Lok
    IET INTELLIGENT TRANSPORT SYSTEMS, 2021, 15 (07) : 875 - 885
  • [4] Autonomous Penetration Testing Based on Improved Deep Q-Network
    Zhou, Shicheng
    Liu, Jingju
    Hou, Dongdong
    Zhong, Xiaofeng
    Zhang, Yue
    APPLIED SCIENCES-BASEL, 2021, 11 (19):
  • [5] Design of Obstacle Avoidance for Autonomous Vehicle Using Deep Q-Network and CARLA Simulator
    Terapaptommakol, Wasinee
    Phaoharuhansa, Danai
    Koowattanasuchat, Pramote
    Rajruangrabin, Jartuwat
    WORLD ELECTRIC VEHICLE JOURNAL, 2022, 13 (12):
  • [6] Deep Q-Network Using Reward Distribution
    Nakaya, Yuta
    Osana, Yuko
    ARTIFICIAL INTELLIGENCE AND SOFT COMPUTING, ICAISC 2018, PT I, 2018, 10841 : 160 - 169
  • [7] Deep multi-agent fusion Q-Network for graph generation
    Rassil, Asmaa
    Chougrad, Hiba
    Zouaki, Hamid
    KNOWLEDGE-BASED SYSTEMS, 2023, 269
  • [8] A Deep Policy Inference Q-Network for Multi-Agent Systems
    Hong, Zhang-Wei
    Su, Shih-Yang
    Shann, Tzu-Yun
    Chang, Yi-Hsiang
    Lee, Chun-Yi
    PROCEEDINGS OF THE 17TH INTERNATIONAL CONFERENCE ON AUTONOMOUS AGENTS AND MULTIAGENT SYSTEMS (AAMAS' 18), 2018, : 1388 - 1396
  • [9] Multipath Communication With Deep Q-Network for Industry 4.0 Automation and Orchestration
    Pokhrel, Shiva Raj
    Garg, Sahil
    IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS, 2021, 17 (04) : 2852 - 2859
  • [10] A Deep Q-Network Reinforcement Learning-Based Model for Autonomous Driving
    Ahmed, Marwa
    Lim, Chee Peng
    Nahavandi, Saeid
    2021 IEEE INTERNATIONAL CONFERENCE ON SYSTEMS, MAN, AND CYBERNETICS (SMC), 2021, : 739 - 744