Effect of Representation of Information in the Input of Deep Learning on Prediction Success

Cited: 0
Authors
Yucel, Hikmet [1 ]
Institution
[1] Inovasyon Muhendisl Ltd, TR-26480 Eskisehir, Turkey
Keywords
Stock price prediction; Deep learning; Representation of information; Real DFT; DWT; DCT; SUPPORT VECTOR MACHINE; NEURAL-NETWORKS; STOCK-MARKET; VOLATILITY;
DOI
10.1007/978-3-030-36178-5_60
Chinese Library Classification
TP18 [Artificial intelligence theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Stock price prediction has always been a hot topic in financial mathematics, and many studies have applied various techniques to the problem. Prediction studies rest on the assumption that the next price is related to past values. Artificial neural networks, and deep learning in particular, are among the best tools for extracting relations between inputs and outputs, so deep learning is becoming a more widely used tool every day. Some network architectures, such as RNNs and LSTMs, were developed specifically for time series. Besides the structure of the network, the type of input is also important for prediction success: the same information can be represented in different formats without any loss. This study investigates the effect of the representation of information in the input on prediction success. Experiments are conducted on USDTRY daily closing values from 1994 to 2017. Four representations are tested: the time series itself, the real DFT, the DWT, and the DCT. The transforms used are lossless and the time series can be reconstructed exactly, so they are merely different representations of the same information. Sixty-four tests with different time spans are conducted. The same network structure, with 512 input neurons, 3 hidden layers, and 10 output neurons, is used for all tests, except for a slight modification for the real DFT. The results show that the DCT gives the best results due to its high energy-compaction property: energy compaction yields coefficients sorted by significance, which in turn leads to stable weight calculation in the network.
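The abstract's two key points about the DCT, that it is a lossless re-representation of the series and that it concentrates most of the signal energy in a few coefficients, can be sketched with SciPy. This is an illustrative example only: the synthetic random-walk window below is a hypothetical stand-in for a 512-sample slice of USDTRY closes, which is not included in the record, and the 512-point window size simply mirrors the paper's input-layer width.

```python
import numpy as np
from scipy.fft import dct, idct

# Hypothetical stand-in for one 512-sample window of daily closes:
# a random walk with an offset, since the real series is not included here.
rng = np.random.default_rng(0)
window = np.cumsum(rng.normal(0.0, 0.01, 512)) + 1.0

# The orthonormal DCT-II is invertible, so the transformed vector is
# just another representation of exactly the same information.
coeffs = dct(window, norm='ortho')
reconstructed = idct(coeffs, norm='ortho')
assert np.allclose(window, reconstructed)

# Energy compaction: sort squared coefficients by magnitude and check
# how much of the total energy the largest few carry.
energy = coeffs ** 2
top = np.sort(energy)[::-1]
ratio_top32 = top[:32].sum() / energy.sum()
print(f"Energy in top 32 of 512 DCT coefficients: {ratio_top32:.1%}")
```

For smooth, trend-dominated series like a random walk, the bulk of the energy lands in the low-order coefficients, which is the ordering-by-significance effect the abstract credits for stable weight calculation.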
Pages: 709 - 723
Number of pages: 15