Pairs trading strategy optimization using the reinforcement learning method: a cointegration approach

Cited by: 15
Authors
Fallahpour, Saeid [1 ]
Hakimian, Hasan [1 ]
Taheri, Khalil [2 ]
Ramezanifar, Ehsan [3 ]
Affiliations
[1] Univ Tehran, Dept Finance, Fac Management, Tehran, Iran
[2] Univ Tehran, Sch Elect & Comp Engn, Coll Engn, Adv Robot & Intelligent Syst Lab, Tehran, Iran
[3] Sch Business & Econ, Dept Finance, Maastricht, Netherlands
Keywords
Pairs trading; Reinforcement learning; Cointegration; Sortino ratio; Mean-reverting process; MODEL; RULE;
DOI
10.1007/s00500-016-2298-4
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104; 0812; 0835; 1405;
Abstract
Recent studies show that the pairs trading strategy has been growing in popularity, which may pose a problem as trading opportunities become scarcer. The optimization of the pairs trading strategy has therefore gained widespread attention among high-frequency traders. In this paper, using reinforcement learning, we examine the optimal level of pairs trading specifications over time. More specifically, the reinforcement learning agent chooses the optimal parameter levels of the pairs trading strategy to maximize the objective function. Results are obtained by combining the reinforcement learning method with a cointegration approach. We find that tuning the pairs trading specifications with the proposed approach significantly outperforms previous methods. Empirical results based on comprehensive intraday data from S&P 500 constituent stocks confirm the efficiency of our proposed method.
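The cointegration approach the abstract refers to can be illustrated with a minimal sketch: regress one price series on the other to get a hedge ratio, z-score the residual spread, and trade on threshold crossings. The function name `pairs_signal`, the entry/exit thresholds, and the synthetic data below are illustrative assumptions, not the paper's implementation; the thresholds are exactly the kind of strategy specifications the paper's reinforcement learning agent would optimize.

```python
import numpy as np

def pairs_signal(px, py, entry=2.0, exit_=0.5):
    """Threshold pairs-trading signal on a cointegration-style spread.

    `entry` and `exit_` are conventional placeholder values for the
    strategy specifications an RL agent could tune over time.
    """
    beta = np.polyfit(px, py, 1)[0]              # OLS hedge ratio
    spread = py - beta * px                      # stationary residual if cointegrated
    z = (spread - spread.mean()) / spread.std()  # z-scored spread
    pos, state = np.zeros(len(z)), 0
    for i, zi in enumerate(z):
        if state == 0 and zi > entry:
            state = -1                           # spread rich: short y, long beta*x
        elif state == 0 and zi < -entry:
            state = 1                            # spread cheap: long y, short beta*x
        elif state != 0 and abs(zi) < exit_:
            state = 0                            # spread reverted to mean: close
        pos[i] = state
    return beta, z, pos

# Synthetic cointegrated pair: y tracks 1.5 * x plus stationary noise.
rng = np.random.default_rng(0)
x = 100 + np.cumsum(rng.normal(0, 1, 500))
y = 1.5 * x + rng.normal(0, 1, 500)
beta, z, pos = pairs_signal(x, y)
```

In an RL formulation, `entry` and `exit_` (and, e.g., the lookback window) become the agent's action space, with a risk-adjusted objective such as the Sortino ratio as the reward.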
Pages: 5051-5066
Page count: 16