Reinforcement Learning for Options Trading

Cited by: 3
Authors
Wen, Wen [1 ,2 ]
Yuan, Yuyu [1 ,2 ]
Yang, Jincui [1 ,2 ]
Affiliations
[1] Beijing Univ Posts & Telecommun, Natl Pilot Software Engn Sch, Sch Comp Sci, Beijing 100876, Peoples R China
[2] Minist Educ, Key Lab Trustworthy Distributed Comp & Serv, Beijing 100876, Peoples R China
Source
APPLIED SCIENCES-BASEL | 2021, Vol. 11, No. 23
Funding
National Natural Science Foundation of China
Keywords
reinforcement learning; options trading; data augmentation; protective closing strategy; network
DOI
10.3390/app112311208
Abstract
Reinforcement learning has been applied to trading various types of financial assets, such as stocks, futures, and cryptocurrencies. Options, as a distinct kind of derivative, have their own characteristics: many option contracts exist for a single underlying asset, each with different price behavior, and the validity period of an option contract is relatively short. To apply reinforcement learning to options trading, we propose the options trading reinforcement learning (OTRL) framework. We train the reinforcement learning model on the options' underlying asset data, using candlestick data at different time intervals. A protective closing strategy is added to the model to prevent unbearable losses. Our experiments demonstrate that the most stable algorithm for obtaining high returns is proximal policy optimization (PPO) with the protective closing strategy. The deep Q network (DQN) can exceed the buy-and-hold strategy in options trading, as can soft actor critic (SAC). The experiments verify the effectiveness of the OTRL framework.
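The abstract names two mechanisms without implementation detail: augmenting training data by using the underlying asset's candle data at several time intervals, and a protective closing strategy that caps losses. Below is a minimal sketch of how such components might look; the function and class names (resample_candles, ProtectiveCloser), the OHLC array layout, and the 10% loss threshold are illustrative assumptions, not the paper's actual code.

```python
import numpy as np

def resample_candles(candles: np.ndarray, factor: int) -> np.ndarray:
    """Aggregate OHLC candles into coarser intervals (e.g., 1-min -> 5-min).

    candles: array of shape (T, 4) with columns [open, high, low, close].
    Assumed data-augmentation step: one price series yields several
    training sets at different time intervals.
    """
    T = (len(candles) // factor) * factor          # drop the ragged tail
    c = candles[:T].reshape(-1, factor, 4)
    out = np.empty((c.shape[0], 4))
    out[:, 0] = c[:, 0, 0]                         # open  = first open in window
    out[:, 1] = c[:, :, 1].max(axis=1)             # high  = max high
    out[:, 2] = c[:, :, 2].min(axis=1)             # low   = min low
    out[:, 3] = c[:, -1, 3]                        # close = last close
    return out

class ProtectiveCloser:
    """Force-close a long position once its loss exceeds a threshold.

    The 10% default is an assumption; the abstract does not state a value.
    """
    def __init__(self, max_loss: float = 0.10):
        self.max_loss = max_loss
        self.entry_price = None                    # None means no open position

    def filter_action(self, action: str, price: float) -> str:
        """Pass the agent's action through, overriding it with a forced close."""
        if self.entry_price is None:
            if action == "open":
                self.entry_price = price
            return action
        loss = (self.entry_price - price) / self.entry_price
        if action == "close" or loss >= self.max_loss:
            self.entry_price = None                # protective close triggers here
            return "close"
        return action
```

In use, such a wrapper would sit between the trained policy (PPO, DQN, or SAC) and the trading environment, filtering each proposed action before it is executed.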
Pages: 17
Related Papers
50 records in total
  • [41] Improving exploration in deep reinforcement learning for stock trading
    Zemzem, Wiem
    Tagina, Moncef
    [J]. INTERNATIONAL JOURNAL OF COMPUTER APPLICATIONS IN TECHNOLOGY, 2023, 72 (04) : 288 - 295
  • [42] Deep Reinforcement Learning for Trading-A Critical Survey
    Millea, Adrian
    [J]. DATA, 2021, 6 (11)
  • [43] Deep Reinforcement Learning for Quantitative Trading: Challenges and Opportunities
    An, Bo
    Sun, Shuo
    Wang, Rundong
    [J]. IEEE INTELLIGENT SYSTEMS, 2022, 37 (02) : 23 - 26
  • [44] Successor Options: An Option Discovery Framework for Reinforcement Learning
    Ramesh, Rahul
    Tomar, Manan
    Ravindran, Balaraman
    [J]. PROCEEDINGS OF THE TWENTY-EIGHTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2019, : 3304 - 3310
  • [45] Building Portable Options: Skill Transfer in Reinforcement Learning
    Konidaris, George
    Barto, Andrew
    [J]. 20TH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2007, : 895 - 900
  • [46] Practical Algorithmic Trading Using State Representation Learning and Imitative Reinforcement Learning
    Park, Deog-Yeong
    Lee, Ki-Hoon
    [J]. IEEE ACCESS, 2021, 9 : 152310 - 152321
  • [47] Spectrum Markets for Service Provider Spectrum Trading with Reinforcement Learning
    Abji, Nadeem
    Leon-Garcia, Alberto
    [J]. 2011 IEEE 22ND INTERNATIONAL SYMPOSIUM ON PERSONAL INDOOR AND MOBILE RADIO COMMUNICATIONS (PIMRC), 2011, : 650 - 655
  • [48] Feature Fusion Deep Reinforcement Learning Approach for Stock Trading
    Bai, Tongyuan
    Lang, Qi
    Song, Shifan
    Fang, Yan
    Liu, Xiaodong
    [J]. 2022 41ST CHINESE CONTROL CONFERENCE (CCC), 2022, : 7240 - 7245
  • [49] Learning Unfair Trading: a Market Manipulation Analysis From the Reinforcement Learning Perspective
    Martinez-Miranda, Enrique
    McBurney, Peter
    Howard, Matthew J. W.
    [J]. PROCEEDINGS OF THE 2016 IEEE CONFERENCE ON EVOLVING AND ADAPTIVE INTELLIGENT SYSTEMS (EAIS), 2016, : 103 - 109
  • [50] Recommending Cryptocurrency Trading Points with Deep Reinforcement Learning Approach
    Sattarov, Otabek
    Muminov, Azamjon
    Lee, Cheol Won
    Kang, Hyun Kyu
    Oh, Ryumduck
    Ahn, Junho
    Oh, Hyung Jun
    Jeon, Heung Seok
    [J]. APPLIED SCIENCES-BASEL, 2020, 10 (04)