Hybrid Deep Reinforcement Learning for Pairs Trading

Cited by: 10
Authors
Kim, Sang-Ho [1]
Park, Deog-Yeong [1]
Lee, Ki-Hoon [1]
Affiliations
[1] Kwangwoon Univ, Sch Comp & Informat Engn, 20 Kwangwoon Ro, Seoul 01897, South Korea
Source
APPLIED SCIENCES-BASEL | 2022, Vol. 12, Issue 3
Funding
National Research Foundation of Singapore;
Keywords
algorithmic trading; pairs trading; deep learning; reinforcement learning; TIME-SERIES; REPRESENTATION; COINTEGRATION;
DOI
10.3390/app12030944
Chinese Library Classification (CLC)
O6 [Chemistry];
Discipline code
0703;
Abstract
Pairs trading is an investment strategy that exploits the short-term price difference (spread) between two co-moving stocks. Recently, pairs trading methods based on deep reinforcement learning have yielded promising results. These methods can be classified into two approaches: (1) indirectly determining trading actions based on trading and stop-loss boundaries and (2) directly determining trading actions based on the spread. In the former approach, the trading boundary is completely dependent on the stop-loss boundary, which is certainly not optimal. In the latter approach, there is a risk of significant loss because of the absence of a stop-loss boundary. To overcome the disadvantages of the two approaches, we propose a hybrid deep reinforcement learning method for pairs trading called HDRL-Trader, which employs two independent reinforcement learning networks: one for determining trading actions and the other for determining stop-loss boundaries. Furthermore, HDRL-Trader incorporates novel techniques, such as dimensionality reduction, clustering, regression, behavior cloning, prioritized experience replay, and dynamic delay, into its architecture. The performance of HDRL-Trader is compared with that of state-of-the-art reinforcement learning methods for pairs trading (P-DDQN, PTDQN, and P-Trader). The experimental results for twenty stock pairs in the Standard & Poor's 500 index show that HDRL-Trader achieves an average return rate of 82.4%, which is 25.7 percentage points higher than that of the second-best method, and yields significantly positive return rates for all stock pairs.
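As a rough, illustrative sketch of the hybrid structure described in the abstract (two independent networks, one selecting trading actions and one selecting stop-loss boundaries, operating on a cointegration-style spread), the Python snippet below is an assumption-laden outline rather than the authors' implementation: the state features, action set, stop-loss grid, and network sizes are all hypothetical.

```python
# Illustrative sketch only: mirrors the two-network ("hybrid") idea in the
# abstract, not the authors' implementation. State features, action sets,
# stop-loss grid, and network sizes below are hypothetical.
import numpy as np
import torch
import torch.nn as nn

def hedge_ratio_and_spread(price_a: np.ndarray, price_b: np.ndarray):
    """OLS hedge ratio (cointegration-style regression) and the z-scored spread."""
    beta = np.polyfit(price_b, price_a, 1)[0]          # slope of price_a ~ price_b
    spread = price_a - beta * price_b
    z = (spread - spread.mean()) / spread.std()
    return beta, z

class QNet(nn.Module):
    """Small Q-network; both agents use the same architecture in this sketch."""
    def __init__(self, state_dim: int, n_actions: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)

TRADE_ACTIONS = ["long_spread", "short_spread", "close"]  # assumed action set
STOP_LOSS_GRID = [1.5, 2.0, 2.5, 3.0]                     # assumed z-score boundaries
STATE_DIM = 10                                            # assumed feature length

trade_net = QNet(STATE_DIM, len(TRADE_ACTIONS))           # agent 1: trading actions
stop_net = QNet(STATE_DIM, len(STOP_LOSS_GRID))           # agent 2: stop-loss boundary

def act(state: np.ndarray):
    """Greedy decisions from the two independent networks."""
    with torch.no_grad():
        s = torch.as_tensor(state, dtype=torch.float32)
        trade = TRADE_ACTIONS[int(trade_net(s).argmax())]
        stop_z = STOP_LOSS_GRID[int(stop_net(s).argmax())]
    return trade, stop_z

def next_position(position: int, spread_z: float, trade: str, stop_z: float) -> int:
    """Apply the trading action, but force-close if the spread crosses the stop-loss."""
    if position != 0 and abs(spread_z) >= stop_z:
        return 0                                           # stop-loss triggered
    if trade == "long_spread":
        return +1
    if trade == "short_spread":
        return -1
    return 0                                               # "close"
```

In a backtest loop under these assumptions, `hedge_ratio_and_spread` would be recomputed on a rolling formation window and `act`/`next_position` applied at each step of the trading window; keeping the stop-loss decision in a separate network is what distinguishes the hybrid design from the two single-network approaches contrasted in the abstract.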
Pages: 23