Trading Strategy of the Cryptocurrency Market Based on Deep Q-Learning Agents

Cited by: 0
Authors
Huang, Chester S. J. [1 ]
Su, Yu-Sheng [2 ,3 ]
Affiliations
[1] Natl Kaohsiung Univ Sci & Technol, Dept Money & Banking, Kaohsiung, Taiwan
[2] Natl Chung Cheng Univ, Dept Comp Sci & Informat Engn, 168,Sec 1,Univ Rd, Chiayi 621301, Taiwan
[3] Natl Taiwan Ocean Univ, Dept Comp Sci & Engn, Keelung City, Taiwan
Keywords
BITCOIN; FEAR;
DOI
10.1080/08839514.2024.2381165
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline codes
081104; 0812; 0835; 1405
Abstract
As of December 2021, the cryptocurrency market had a market value of over US$270 billion, and over 5,700 types of cryptocurrencies were circulating among 23,000 online exchanges. Reinforcement learning (RL) has been used to identify the optimal trading strategy. However, most RL-based optimal trading strategies adopted in the cryptocurrency market focus on trading one type of cryptocurrency, whereas most traders in the cryptocurrency market often trade multiple cryptocurrencies. Therefore, the present study proposes a method based on deep Q-learning for identifying the optimal trading strategy for multiple cryptocurrencies. The proposed method uses the same training data to train multiple agents repeatedly so that each agent accumulates learning experience to improve its prediction of the future market trend and to determine the optimal action. The empirical results were as follows. For Ethereum, VeChain, and Ripple, which were considered to have an uptrend, a horizontal trend, and a downtrend, respectively, the annualized rates of return were 725.48%, -14.95%, and -3.70%, respectively. Regardless of the cryptocurrency market trend, a higher annualized rate of return was achieved when using the proposed method than when using the buy-and-hold strategy.
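The abstract's core idea, an agent that repeatedly replays the same training data to learn a state-action value function for trading, can be illustrated with a minimal tabular Q-learning sketch. This is not the authors' implementation: the toy price series, the up/down state encoding, the three-action space, and the mark-to-market reward are all illustrative assumptions.

```python
import random

# Hypothetical, minimal Q-learning trading loop (illustrative only).
prices = [100, 101, 103, 102, 104, 107, 106, 108, 111, 110, 113]  # toy series
ACTIONS = (0, 1, 2)  # 0 = hold, 1 = buy one unit, 2 = sell one unit
alpha, gamma, eps = 0.1, 0.95, 0.2  # learning rate, discount, exploration rate

q = {}  # Q-table: (state, action) -> estimated value

def state(t):
    """Crude market state: +1 if the last price change was up, else -1."""
    return 1 if prices[t] > prices[t - 1] else -1

def choose(s, rng):
    """Epsilon-greedy action selection over the Q-table."""
    if rng.random() < eps:
        return rng.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q.get((s, a), 0.0))

def train(episodes=200, seed=0):
    """Repeated passes over the same data, echoing the abstract's idea of
    training agents on the same training set to accumulate experience."""
    rng = random.Random(seed)
    for _ in range(episodes):
        position = 0  # units of the asset currently held
        for t in range(1, len(prices) - 1):
            s = state(t)
            a = choose(s, rng)
            if a == 1:
                position += 1
            elif a == 2 and position > 0:
                position -= 1
            # Reward: mark-to-market P&L of the position over the next step.
            r = position * (prices[t + 1] - prices[t])
            s2 = state(t + 1)
            best_next = max(q.get((s2, b), 0.0) for b in ACTIONS)
            old = q.get((s, a), 0.0)
            q[(s, a)] = old + alpha * (r + gamma * best_next - old)
    return q
```

A deep Q-network replaces the table with a neural approximator and would typically use richer state features (price history, volume, or sentiment signals such as the fear index suggested by the paper's keywords) and one agent per cryptocurrency.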
Pages: 22