Algorithmic Trading Behavior Identification using Reward Learning Method

Cited by: 0
Authors
Yang, Steve Y. [1]
Qiao, Qifeng [2]
Beling, Peter A. [2]
Scherer, William T. [2]
Affiliations
[1] Stevens Inst Technol, Sch Syst & Enterprises, Financial Engn Program, Hoboken, NJ 07030 USA
[2] Univ Virginia, Dept Syst & Informat Engn, Charlottesville, VA 22903 USA
Keywords
Inverse Reinforcement Learning; Gaussian Process; High Frequency Trading; Algorithmic Trading; Behavioral Finance; Markov Decision Process; Support Vector Machine; INTRADAY PATTERNS; LIQUIDITY; MARKET;
DOI
Not available
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Identifying and understanding the impact of algorithmic trading on financial markets has become a critical issue for market operators and regulators. Advanced data feed and audit trail information from market operators now make the full observation of market participants' actions possible. A key question is the extent to which it is possible to understand and characterize the behavior of individual participants from observations of trading actions. In this paper, we consider the basic problems of categorizing and recognizing traders (or, equivalently, trading algorithms) on the basis of observed limit orders. Our approach, which is based on inverse reinforcement learning (IRL), is to model trading decisions as a Markov decision process and then use observations of an optimal decision policy to find the reward function. The approach strikes a balance between two desirable features in that it captures key empirical properties of order book dynamics and yet remains computationally tractable. Making use of a real-world data set from the E-Mini futures contract, we compare two principal IRL variants, linear IRL and Gaussian process IRL. Results suggest that IRL-based feature spaces support accurate classification and meaningful clustering.
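The core idea in the abstract — model trading decisions as a Markov decision process, then recover a reward function under which the observed policy is optimal — can be sketched in miniature. The toy MDP below (states, actions, transitions, and the candidate reward grid are all illustrative assumptions, not the paper's actual formulation or data) recovers a linear state reward by directly checking the optimality condition of the observed policy, in the spirit of linear IRL:

```python
import itertools

import numpy as np

# Toy MDP standing in for coarse order-book states (all names and values are
# illustrative assumptions, not the paper's state or action space).
# States: 0 = deep in queue, 1 = near top of book, 2 = filled.
# Actions: 0 = hold the order, 1 = reprice aggressively.
gamma = 0.9
P = np.zeros((2, 3, 3))          # P[a, s, s'] = transition probability
for s in range(3):
    P[0, s, s] = 1.0             # holding keeps the state unchanged
P[1, 0, 1] = P[1, 1, 2] = 1.0    # repricing advances toward a fill
P[1, 2, 0] = 1.0                 # repricing a filled order re-enters the queue

expert = [1, 1, 0]               # observed (assumed-optimal) trading policy

def policy_value(R, pi):
    """Exact evaluation of policy pi under state reward R."""
    P_pi = np.array([P[pi[s], s] for s in range(3)])
    return np.linalg.solve(np.eye(3) - gamma * P_pi, R)

def q_values(R, V):
    """Q(s, a) = R(s) + gamma * sum_t P[a, s, t] * V(t)."""
    return R[:, None] + gamma * np.einsum("ast,t->sa", P, V)

# Linear IRL as a brute-force search: among candidate reward vectors, keep the
# one under which the expert's action beats every alternative by the largest
# worst-case margin (the optimality condition checked directly).
best_margin, R_hat, pi_hat = -np.inf, None, None
for r in itertools.product([0.0, 0.5, 1.0], repeat=3):
    R = np.array(r)
    Q = q_values(R, policy_value(R, expert))
    margin = min(
        Q[s, expert[s]] - max(Q[s, a] for a in range(2) if a != expert[s])
        for s in range(3)
    )
    if margin > best_margin:
        best_margin, R_hat = margin, R
        pi_hat = Q.argmax(axis=1)  # greedy policy under the recovered reward

print("recovered reward:", R_hat, "greedy policy:", pi_hat)
```

A positive margin means the recovered reward rationalizes the observed policy: acting greedily under it reproduces the expert's actions, which is the property the paper then exploits to build IRL-based feature spaces for classifying traders.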
Pages: 3807-3814
Page count: 8