A Multi-modal Deep Learning Approach for Predicting Dhaka Stock Exchange

Cited by: 1
Authors
Khan, Md. Nabil Rahman [1 ]
Al Tanim, Omor [1 ]
Salsabil, Most. Sadia [1 ]
Reza, S. M. Raiyan [1 ]
Hasib, Khan Md [2 ]
Alam, Mohammad Shafiul [1 ]
Affiliations
[1] Ahsanullah Univ Sci & Technol, Dept Comp Sci & Engn, Dhaka, Bangladesh
[2] Bangladesh Univ Business & Technol, Dept Comp Sci & Engn, Dhaka, Bangladesh
Keywords
Dhaka Stock Exchange (DSE); LSTM (Long Short-Term Memory); Transformer; Gated Recurrent Unit (GRU); Time Series Data; Moving Average; Prediction
DOI
10.1109/CCWC57344.2023.10099255
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
This study proposes a reliable and accurate approach for forecasting future stock price movements on the Dhaka Stock Exchange (DSE). Although it is often believed that no predictive framework can properly anticipate stock prices, a substantial body of literature shows that seemingly random movement patterns in stock prices can be forecast with high accuracy. The framework described in this study combines LSTM, Transformer, and GRU models. Performance metrics, including mean squared error (MSE) and R-squared (R2), are used to gauge the accuracy of the proposed DeepDse model. The evaluation results indicate that the model is highly accurate and can provide reliable predictions of stock prices. This is of great importance, as accurate stock price predictions can help investors determine the best time to buy and sell, minimizing the risk of losing money and maximizing returns. The study suggests that the proposed model could be particularly valuable for investors in the Dhaka Stock Exchange, providing them with the information needed to make informed investment decisions.
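The abstract names MSE and R-squared as the metrics used to evaluate DeepDse. As a minimal sketch of how these two metrics score a price forecast (the price series below is an illustrative toy, not DSE data, and the functions are written here for clarity rather than taken from the paper):

```python
import numpy as np

def mse(y_true, y_pred):
    """Mean squared error: average squared deviation of predictions."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.mean((y_true - y_pred) ** 2))

def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return float(1.0 - ss_res / ss_tot)

# Toy closing-price series and model predictions (illustrative values only)
actual = [100.0, 102.0, 101.5, 103.0]
predicted = [100.5, 101.5, 102.0, 103.5]
print(mse(actual, predicted))        # 0.25
print(r_squared(actual, predicted))  # ~0.787 (1.0 would be a perfect fit)
```

Lower MSE and an R-squared close to 1 indicate a closer fit between predicted and actual prices, which is the sense in which the study reports its model as "highly accurate."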
Pages: 879-885 (7 pages)