Learning drifting negotiations

Cited by: 13
Authors
Enembreck, Fabricio
Avila, Braulio Coelho
Scalabrin, Edson E.
Barthes, Jean-Paul
Affiliations
[1] Pontificia Univ Catolica Parana, PPGIA, Grad Prog Appl Comp Sci, PUCPR, BR-1155 Curitiba, Parana, Brazil
[2] Univ Technol Compiegne, HEUDIASYC, Ctr Rech Royallieu, Compiegne, France
Keywords
DOI
10.1080/08839510701526954
Chinese Library Classification
TP18 [Artificial intelligence theory];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
In this work, we propose the use of drift detection techniques for learning offer policies in multi-issue, bilateral negotiation. Several works aiming to develop adaptive trading agents have been proposed. Such agents are capable of learning their competitors' utility values and functions, thereby obtaining better results in negotiation. However, the learning mechanisms generally used disregard possible changes in a competitor's offer/counter-offer policy, in which case the agent's performance may decrease drastically. The agent then needs to restart the learning process, because the model previously learned is no longer valid. Drift detection techniques can be used to detect changes in the current offer model and to update it quickly. In this work, we demonstrate with simulated data that drift detection algorithms can be used to build adaptive trading agents, offering a number of advantages over the techniques most commonly applied to this problem. The results obtained with the instance-based algorithm IB3 show that the agent's performance can be recovered rapidly even when the changes in the competitor's interests are abrupt, moderate, or gradual.
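To make the general idea concrete, the sketch below is a minimal, hypothetical Python illustration of the approach described in the abstract: an instance-based (1-nearest-neighbour) model of the opponent's counter-offers combined with a simple error-based drift check. It is not the authors' IB3 implementation; all names and parameters (OfferModel, window, error_threshold) are assumptions made for illustration.

```python
# Hypothetical sketch (not the paper's IB3 implementation): a 1-nearest-neighbour
# model of an opponent's counter-offers with a simple error-based drift detector.
from collections import deque
import math


class OfferModel:
    def __init__(self, window=50, error_threshold=0.15):
        self.cases = deque(maxlen=window)      # stored (offer, counter_offer) cases
        self.recent_errors = deque(maxlen=10)  # prediction errors on recent rounds
        self.error_threshold = error_threshold

    def predict(self, offer):
        """Predict the opponent's counter-offer for a multi-issue offer (1-NN)."""
        if not self.cases:
            return None
        nearest = min(self.cases, key=lambda c: self._dist(c[0], offer))
        return nearest[1]

    def observe(self, offer, counter_offer):
        """Record the opponent's actual counter-offer and check for policy drift."""
        predicted = self.predict(offer)
        if predicted is not None:
            self.recent_errors.append(self._dist(predicted, counter_offer))
            full = len(self.recent_errors) == self.recent_errors.maxlen
            if full and sum(self.recent_errors) / len(self.recent_errors) > self.error_threshold:
                # Drift suspected: discard the stale model and relearn from new offers.
                self.cases.clear()
                self.recent_errors.clear()
        self.cases.append((offer, counter_offer))

    @staticmethod
    def _dist(a, b):
        """Euclidean distance between two offers (tuples of issue values in [0, 1])."""
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
```

In this sketch, a rising average prediction error is treated as evidence that the opponent's offer policy has drifted, and the stored cases are discarded so the model can be relearned quickly; IB3 itself handles this more gracefully by keeping per-instance performance records and dropping instances whose accuracy becomes statistically poor.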
Pages: 861 - 881
Page count: 21
Related Papers
50 records in total
  • [1] Learning with a Drifting Target Concept
    Hanneke, Steve
    Kanade, Varun
    Yang, Liu
    ALGORITHMIC LEARNING THEORY, ALT 2015, 2015, 9355 : 149 - 164
  • [2] Drifting explanations in continual learning
    Cossu, Andrea
    Spinnato, Francesco
    Guidotti, Riccardo
    Bacciu, Davide
    NEUROCOMPUTING, 2024, 597
  • [3] Deep Drifting: Autonomous Drifting of Arbitrary Trajectories using Deep Reinforcement Learning
    Domberg, Fabian
    Wembers, Carlos Castelar
    Patel, Hiren
    Schildbach, Georg
    2022 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION, ICRA 2022, 2022, : 7753 - 7759
  • [4] On the complexity of learning from drifting distributions
    Barve, RD
    Long, PM
    INFORMATION AND COMPUTATION, 1997, 138 (02) : 170 - 193
  • [5] Active Learning With Drifting Streaming Data
    Zliobaite, Indre
    Bifet, Albert
    Pfahringer, Bernhard
    Holmes, Geoffrey
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2014, 25 (01) : 27 - 39
  • [6] LEARNING DRIFTING CONCEPTS WITH NEURAL NETWORKS
    BIEHL, M
    SCHWARZE, H
JOURNAL OF PHYSICS A-MATHEMATICAL AND GENERAL, 1993, 26 (11): 2651 - 2665
  • [7] Autonomous drifting using reinforcement learning
    Orgován L.
    Bécsi T.
    Aradi S.
Periodica Polytechnica Transportation Engineering, 2021, 49 (03): 292 - 300
  • [8] Learning Agents in Automated Negotiations
    Chandrashekhar, Hemalatha
    Bhasker, Bharat
    INFORMATION SYSTEMS, TECHNOLOGY AND MANAGEMENT-THIRD INTERNATIONAL CONFERENCE, ICISTM 2009, 2009, 31 : 292 - 302
  • [9] An Improved Algorithm for Learning Drifting Discrete Distributions
    Mazzetto, Alessio
    INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS, VOL 238, 2024, 238
  • [10] Learning with continuous experts using drifting games
    Mukherjee, Indraneel
    Schapire, Robert E.
    THEORETICAL COMPUTER SCIENCE, 2010, 411 (29-30) : 2670 - 2683