Regime-Switching Recurrent Reinforcement Learning in Automated Trading

Cited by: 0
Authors
Maringer, Dietmar [1 ]
Ramtohul, Tikesh [1 ]
Affiliations
[1] Univ Basel, CH-4002 Basel, Switzerland
Keywords
SECURITY PRICE CHANGES; TRANSACTION VOLUMES; VOLATILITY; MODEL; FLOW;
DOI
Not available
CLC number (Chinese Library Classification)
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The regime-switching recurrent reinforcement learning (RSRRL) model was first presented in [19], in the form of a GARCH-based threshold version that extended the standard RRL algorithm developed by [22]. In this study, the main aim is to investigate the influence of different transition variables, in multiple RSRRL settings and for various datasets, and compare and contrast the performance levels of the RRL and RSRRL systems in algorithmic trading experiments. The transition variables considered are GARCH-based volatility, de-trended volume, and the rate of information arrival, the latter being modelled on the Mixture Distribution Hypothesis (MDH). A frictionless setting was assumed for all the experiments. The results showed that the RSRRL models yield higher Sharpe ratios than the standard RRL in-sample, but struggle to reproduce the same performance levels out-of-sample. We argue that the lack of in- and out-of-sample correlation is due to a drastic change in market conditions, and find that the RSRRL can consistently outperform the RRL only when certain conditions are present. We also find that trading volume presents a lot of promise as an indicator, and could be the way forward for the design of more sophisticated RSRRL systems.
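For intuition, the sketch below illustrates the kind of threshold regime-switching RRL trader the abstract describes: two RRL weight vectors, one per regime of a transition variable (here a crude running-volatility proxy standing in for GARCH-based volatility), positions of the form F_t = tanh(w . x_t), and the Sharpe ratio of the frictionless strategy returns maximised by gradient ascent. This is a minimal illustration under stated assumptions, not the authors' implementation; the function names, the volatility proxy, the finite-difference optimiser, and all hyper-parameters are illustrative choices.

```python
# Minimal sketch of a two-regime threshold RRL trader in a frictionless
# setting, as a rough illustration of the RSRRL idea described above.
# All names, the volatility proxy, the finite-difference optimiser and the
# hyper-parameters are illustrative assumptions, not the authors' method.
import numpy as np

rng = np.random.default_rng(0)
LAGS = 5  # number of past returns fed to the trader

def positions(returns, regime_var, threshold, w_low, w_high):
    """Trading positions F_t in [-1, 1]; one RRL weight vector per regime."""
    T = len(returns)
    F = np.zeros(T)
    for t in range(LAGS, T):
        # input: bias, last LAGS returns, previous position (standard RRL input)
        x = np.concatenate(([1.0], returns[t - LAGS:t], [F[t - 1]]))
        w = w_high if regime_var[t] > threshold else w_low  # hard threshold switch
        F[t] = np.tanh(w @ x)
    return F

def sharpe(returns, F):
    """Sharpe ratio of strategy returns R_t = F_{t-1} * r_t (no transaction costs)."""
    strat = F[:-1] * returns[1:]
    return strat.mean() / (strat.std() + 1e-12)

def train(returns, regime_var, threshold, epochs=50, lr=0.1, eps=1e-4):
    """Maximise the Sharpe ratio over both weight vectors by finite-difference gradient ascent."""
    dim = LAGS + 2
    w = rng.normal(scale=0.1, size=2 * dim)  # stacked [w_low, w_high]

    def objective(v):
        return sharpe(returns, positions(returns, regime_var, threshold, v[:dim], v[dim:]))

    for _ in range(epochs):
        grad = np.zeros_like(w)
        for i in range(w.size):
            up, dn = w.copy(), w.copy()
            up[i] += eps
            dn[i] -= eps
            grad[i] = (objective(up) - objective(dn)) / (2 * eps)
        w += lr * grad
    return w[:dim], w[dim:]

if __name__ == "__main__":
    # Synthetic returns; a running mean of |r| stands in for a GARCH volatility estimate.
    r = rng.normal(scale=0.01, size=800)
    vol = np.abs(r).cumsum() / np.arange(1, r.size + 1)
    w_low, w_high = train(r, vol, threshold=np.median(vol))
    F = positions(r, vol, np.median(vol), w_low, w_high)
    print("in-sample Sharpe ratio:", sharpe(r, F))
```

A smooth transition function of the regime variable could replace the hard threshold, giving a logistic rather than threshold-type switch; likewise, de-trended volume or an estimated rate of information arrival could be substituted for the volatility proxy as the transition variable.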
Pages: 93-121
Page count: 29
Related papers
50 in total
  • [1] Regime-switching recurrent reinforcement learning for investment decision making
    Maringer, Dietmar
    Ramtohul, Tikesh
    [J]. COMPUTATIONAL MANAGEMENT SCIENCE, 2012, 9 (01) : 89 - 107
  • [2] Threshold Recurrent Reinforcement Learning Model for Automated Trading
    Maringer, Dietmar
    Ramtohul, Tikesh
    [J]. APPLICATIONS OF EVOLUTIONARY COMPUTATION, PT II, PROCEEDINGS, 2010, 6025 : 212 - 221
  • [3] Deep Reinforcement Learning for Goal-Based Investing Under Regime-Switching
    Bauman, Tessa
    Gasperov, Bruno
    Goluza, Sven
    Kostanjcar, Zvonko
    [J]. NORTHERN LIGHTS DEEP LEARNING CONFERENCE, VOL 233, 2024, 233 : 13 - 19
  • [4] ADAPTIVE LEARNING IN REGIME-SWITCHING MODELS
    Branch, William A.
    Davig, Troy
    McGough, Bruce
    [J]. MACROECONOMIC DYNAMICS, 2013, 17 (05) : 998 - 1022
  • [5] Transition Variable Selection for Regime Switching Recurrent Reinforcement Learning
    Maringer, Dietmar
    Zhang, Jin
    [J]. 2014 IEEE CONFERENCE ON COMPUTATIONAL INTELLIGENCE FOR FINANCIAL ENGINEERING & ECONOMICS (CIFER), 2014, : 407 - 413
  • [6] An Automated Portfolio Trading System with Feature Preprocessing and Recurrent Reinforcement Learning
    Li, Lin
    [J]. ICAIF 2021: THE SECOND ACM INTERNATIONAL CONFERENCE ON AI IN FINANCE, 2021,
  • [7] Asset Pricing Using Trading Volumes in a Hidden Regime-Switching Environment
    Elliott, R. J.
    Siu, T. K.
    [J]. Asia-Pacific Financial Markets, 2015, 22 (2) : 133 - 149
  • [8] Analytic value function for optimal regime-switching pairs trading rules
    Bai, Yang
    Wu, Lan
    [J]. QUANTITATIVE FINANCE, 2018, 18 (04) : 637 - 654
  • [9] Regime-switching cointegration
    Jochmann, Markus
    Koop, Gary
    [J]. STUDIES IN NONLINEAR DYNAMICS AND ECONOMETRICS, 2015, 19 (01): : 35 - 48