Learning financial asset-specific trading rules via deep reinforcement learning

Cited by: 33
Authors
Taghian, Mehran [1]
Asadi, Ahmad [1]
Safabakhsh, Reza [1]
Affiliations
[1] Amirkabir Univ Technol, Deep Learning Lab, Comp Engn Dept, Hafez St, Tehran, Iran
Keywords
Reinforcement learning; Deep Q-learning; Single stock trading; Trading strategy; Performance
DOI
10.1016/j.eswa.2022.116523
CLC number
TP18 [Theory of artificial intelligence]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Generating asset-specific trading signals based on the financial conditions of each asset is one of the challenging problems in automated trading. Various asset trading rules have been proposed experimentally based on different technical analysis techniques. However, although these kinds of trading strategies can be profitable, extracting new asset-specific trading rules from vast historical data to increase total return and decrease portfolio risk is difficult for human experts. Recently, various deep reinforcement learning (DRL) methods have been employed to learn new trading rules for each asset. In this paper, a novel DRL model with various feature extraction modules is proposed. The effect of different input representations on the performance of the models is investigated, and the performance of DRL-based models in different markets and asset situations is studied. The proposed model outperformed other state-of-the-art models in learning single-asset-specific trading rules and obtained almost 12.4% more profit than the best state-of-the-art model on the Dow Jones Index over the same time period.
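As context for the approach described in the abstract, the sketch below illustrates how a deep Q-learning single-asset trading agent of this general kind can be set up: a small Q-network maps a window of OHLC candlesticks to Q-values for sell/hold/buy signals and is trained against a replay buffer with a periodically synchronized target network. This is a minimal, hypothetical sketch in PyTorch, not the paper's actual model; the window size, network architecture, reward definition, and all names (e.g., QNetwork, train_step) are assumptions introduced here for illustration.

```python
# Minimal DQN-style single-asset trading sketch (illustrative only, not the paper's model).
import random
from collections import deque

import numpy as np
import torch
import torch.nn as nn

WINDOW = 20     # past candlesticks per state (illustrative choice)
N_FEATURES = 4  # open, high, low, close
N_ACTIONS = 3   # 0 = sell, 1 = hold, 2 = buy


class QNetwork(nn.Module):
    """Simple feature extractor plus Q-value head over a flattened OHLC window."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(WINDOW * N_FEATURES, 128),
            nn.ReLU(),
            nn.Linear(128, N_ACTIONS),
        )

    def forward(self, x):
        return self.net(x)


def epsilon_greedy(q_net, state, epsilon):
    """Choose a trading action from Q-values with epsilon-greedy exploration."""
    if random.random() < epsilon:
        return random.randrange(N_ACTIONS)
    with torch.no_grad():
        q = q_net(torch.as_tensor(state, dtype=torch.float32).unsqueeze(0))
    return int(q.argmax(dim=1).item())


def train_step(q_net, target_net, optimizer, batch, gamma=0.99):
    """One Bellman-target update on a batch of (state, action, reward, next_state)."""
    states, actions, rewards, next_states = zip(*batch)
    s = torch.as_tensor(np.array(states), dtype=torch.float32)
    a = torch.as_tensor(actions, dtype=torch.int64).unsqueeze(1)
    r = torch.as_tensor(rewards, dtype=torch.float32)
    s2 = torch.as_tensor(np.array(next_states), dtype=torch.float32)

    q_sa = q_net(s).gather(1, a).squeeze(1)  # Q(s, a) for the actions actually taken
    with torch.no_grad():
        target = r + gamma * target_net(s2).max(dim=1).values
    loss = nn.functional.mse_loss(q_sa, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()


if __name__ == "__main__":
    # Synthetic random-walk prices stand in for real OHLC data in this sketch.
    close = np.cumsum(np.random.randn(500)) + 100.0
    ohlc = np.stack([close, close + 0.5, close - 0.5, close], axis=1)

    q_net, target_net = QNetwork(), QNetwork()
    target_net.load_state_dict(q_net.state_dict())
    optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
    replay = deque(maxlen=10_000)

    for t in range(WINDOW, len(ohlc) - 1):
        state = ohlc[t - WINDOW:t]
        action = epsilon_greedy(q_net, state, epsilon=0.1)
        # Reward: signed next-step return; a buy profits from rises, a sell from falls.
        ret = (close[t + 1] - close[t]) / close[t]
        reward = {0: -ret, 1: 0.0, 2: ret}[action]
        replay.append((state, action, reward, ohlc[t - WINDOW + 1:t + 1]))
        if len(replay) >= 64:
            train_step(q_net, target_net, optimizer, random.sample(list(replay), 64))
        if t % 100 == 0:
            target_net.load_state_dict(q_net.state_dict())
```

In the setting the abstract describes, the input representation and feature extraction layers would be varied (e.g., different encodings of the raw price window), while a Q-learning loop of roughly this shape stays the same.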
Pages: 19