Combining deep reinforcement learning with technical analysis and trend monitoring on cryptocurrency markets

Cited by: 3
Authors
Kochliaridis, Vasileios [1 ]
Kouloumpris, Eleftherios [1 ]
Vlahavas, Ioannis [1 ]
Affiliations
[1] Aristotle Univ Thessaloniki, Sch Informat, Thessaloniki 54124, Greece
Source
NEURAL COMPUTING & APPLICATIONS | 2023, Vol. 35, No. 29
Keywords
Deep reinforcement learning; Machine learning; Proximal policy optimization; Trading; Technical analysis; Risk optimization;
DOI
10.1007/s00521-023-08516-x
Chinese Library Classification (CLC)
TP18 [Theory of artificial intelligence];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Cryptocurrency markets have experienced a significant increase in popularity, motivating many financial traders to seek high profits in cryptocurrency trading. The predominant tool that traders use to identify profitable opportunities is technical analysis. Some investors and researchers have also combined technical analysis with machine learning in order to forecast upcoming market trends. However, even with these methods, developing successful trading strategies remains an extremely challenging task. Recently, deep reinforcement learning (DRL) algorithms have demonstrated satisfying performance in solving complicated problems, including the formulation of profitable trading strategies. While some DRL techniques have succeeded in increasing profit and loss (PNL) measures, they are not sufficiently risk-aware and have difficulty maximizing PNL while lowering trading risk at the same time. This research proposes combining DRL approaches with rule-based safety mechanisms to both maximize PNL returns and minimize trading risk. First, a DRL agent is trained to maximize PNL returns using a novel reward function. Then, during the exploitation phase, a rule-based mechanism is deployed to prevent uncertain actions from being executed. Finally, another novel safety mechanism is proposed, which considers the actions of a more conservatively trained agent in order to identify high-risk trading periods and avoid trading. Our experiments on 5 popular cryptocurrencies show that the integration of these three methods achieves very promising results.
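The abstract's exploitation-phase safety filter could look roughly like the following minimal sketch: block the main agent's trade when its action distribution is too uncertain, or when a more conservatively trained agent signals a high-risk period by choosing not to trade. The function name, the HOLD encoding, and the confidence threshold are illustrative assumptions, not the authors' actual implementation.

```python
# Hypothetical sketch of a rule-based safety filter over two DRL agents'
# action choices. A trade executes only when the main agent is confident
# AND the conservative agent does not itself avoid trading.

HOLD = 0  # assumed "do nothing" action index

def safe_action(main_probs, conservative_action, threshold=0.7):
    """Return the main agent's action, or HOLD if the agent is too
    uncertain or the conservative agent flags a high-risk period."""
    best = max(range(len(main_probs)), key=lambda a: main_probs[a])
    if main_probs[best] < threshold:
        return HOLD  # uncertain action: block the trade
    if conservative_action == HOLD and best != HOLD:
        return HOLD  # conservative agent avoids trading: high-risk period
    return best

# Example: a confident buy (action 1) that the conservative agent also allows
print(safe_action([0.1, 0.8, 0.1], conservative_action=1))  # -> 1
```

Any real implementation would derive the uncertainty measure from the trained policy (e.g. the PPO action distribution) rather than a fixed probability threshold.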
Pages: 21445-21462
Page count: 18
Related papers
50 records in total
  • [31] Automated cryptocurrency trading approach using ensemble deep reinforcement learning: Learn to understand candlesticks
    Jing, Liu
    Kang, Yuncheol
    [J]. EXPERT SYSTEMS WITH APPLICATIONS, 2024, 237
  • [32] Combining deep reinforcement learning with heuristics to solve the traveling salesman problem
    Hong, Li
    Liu, Yu
    Xu, Mengqiao
    Deng, Wenhui
    [J]. Chinese Physics B, 2025, 34 (01) : 100 - 110
  • [33] Deep reinforcement learning for tiled aperture beam combining in a simulated environment
    Tunnermann, Henrik
    Shirakawa, Akira
    [J]. JOURNAL OF PHYSICS-PHOTONICS, 2021, 3 (01):
  • [34] Predicting price trends combining kinetic energy and deep reinforcement learning
    Ghotbi, Mahdie
    Zahedi, Morteza
    [J]. EXPERT SYSTEMS WITH APPLICATIONS, 2024, 244
  • [35] An AUV Target-Tracking Method Combining Imitation Learning and Deep Reinforcement Learning
    Mao, Yubing
    Gao, Farong
    Zhang, Qizhong
    Yang, Zhangyi
    [J]. JOURNAL OF MARINE SCIENCE AND ENGINEERING, 2022, 10 (03)
  • [36] Deep Learning-based Cryptocurrency Price Prediction: A Comparative Analysis
    Mazinani, Armin
    Davoli, Luca
    Ferrari, Gianluigi
    [J]. 2023 5TH CONFERENCE ON BLOCKCHAIN RESEARCH & APPLICATIONS FOR INNOVATIVE NETWORKS AND SERVICES, BRAINS, 2023,
  • [37] Resistance to shock analysis of Deep Reinforcement Learning
    Pchelintsev, Ilya
    Lukianchenko, Petr
    [J]. 2024 ZOOMING INNOVATION IN CONSUMER TECHNOLOGIES CONFERENCE, ZINC 2024, 2024, : 157 - 162
  • [38] Ensemble of Technical Analysis and Machine Learning for Market Trend Prediction
    Ratto, Andrea Picasso
    Merello, Simone
    Oneto, Luca
    Ma, Yukun
    Malandri, Lorenzo
    Cambria, Erik
    [J]. 2018 IEEE SYMPOSIUM SERIES ON COMPUTATIONAL INTELLIGENCE (IEEE SSCI), 2018, : 2090 - 2096
  • [39] Tool Condition Monitoring in the Milling Process Using Deep Learning and Reinforcement Learning
    Kaliyannan, Devarajan
    Thangamuthu, Mohanraj
    Pradeep, Pavan
    Gnansekaran, Sakthivel
    Rakkiyannan, Jegadeeshwaran
    Pramanik, Alokesh
    [J]. JOURNAL OF SENSOR AND ACTUATOR NETWORKS, 2024, 13 (04)
  • [40] Untying cable by combining 3D deep neural network with deep reinforcement learning
    Fan, Zheming
    Shao, Wanpeng
    Hayashi, Toyohiro
    Ohashi, Takeshi
    [J]. ADVANCED ROBOTICS, 2023, 37 (05) : 380 - 394