Attention-Based Sequence Learning Model for Travel Time Estimation

Cited by: 1
Authors
Wang, Zhong [1 ]
Fu, Hao [2 ]
Liu, Guiquan [2 ]
Meng, Xianwei [2 ]
Affiliations
[1] Univ Sci & Technol China, Sch Data Sci, Hefei 230026, Peoples R China
[2] Univ Sci & Technol China, Sch Comp Sci & Technol, Hefei 230026, Peoples R China
Keywords
Roads; Estimation; Data models; Trajectory; Task analysis; Predictive models; Global Positioning System; Travel time estimation; road network topology; multi-relational data; PREDICTION;
DOI
10.1109/ACCESS.2020.3042673
Chinese Library Classification (CLC)
TP [automation technology, computer technology]
Discipline code
0812
Abstract
Travel time estimation (TTE) for a specific route is a challenging task because of the complex road network structure and hard-to-capture temporal patterns. Many methods have been proposed to address these problems. Some heuristically designed, non-learning-based approaches have the advantage of responding quickly to travel time queries; however, they are strongly affected by noise in traffic data because they rely on a single feature. Existing road-segment-based methods are generally considered intuitive but are not accurate enough, since they fail to model complex factors such as the delay and direction at intersections. In this paper, we propose a novel attention-based sequence learning model for travel time estimation of a path (ASTTE) that not only treats the real-world road network topology as multi-relational data but also refines the problem down to the road-segment and intersection-direction levels. In addition, we integrate traffic information as local and neighbor dependencies, which helps monitor dynamic traffic conditions during a trip. The attention mechanism allows the model to focus on significant elements of the path, which comprises road segments and intersections. Extensive experiments on two real-world datasets demonstrate the effectiveness and robustness of our framework.
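To illustrate the kind of mechanism the abstract describes, the sketch below applies scaled dot-product self-attention over a sequence of path-element embeddings (road segments and intersections), so each element's representation is a weighted summary of the whole path. This is a minimal, hypothetical sketch of the generic technique, not the authors' ASTTE implementation; the class name `PathAttention` and all dimensions are assumptions for illustration.

```python
import math
import torch
import torch.nn as nn

class PathAttention(nn.Module):
    """Scaled dot-product self-attention over path-element embeddings.

    Illustrative sketch only: each path element (road segment or
    intersection) attends to every other element, so significant
    elements receive higher weights in the aggregated representation.
    """
    def __init__(self, dim: int):
        super().__init__()
        self.query = nn.Linear(dim, dim)
        self.key = nn.Linear(dim, dim)
        self.value = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor):
        # x: (batch, path_len, dim) embeddings of segments/intersections
        q, k, v = self.query(x), self.key(x), self.value(x)
        # Similarity of every element to every other, scaled for stability
        scores = q @ k.transpose(-2, -1) / math.sqrt(x.size(-1))
        weights = scores.softmax(dim=-1)  # rows sum to 1 over path elements
        context = weights @ v             # attention-weighted summary per element
        return context, weights

# Toy usage: a batch of 2 paths, each with 5 elements of 16-dim embeddings
att = PathAttention(16)
ctx, w = att(torch.randn(2, 5, 16))
```

A pooled version of `ctx` could then feed a regression head that outputs the travel time estimate, which is the general pattern in attention-based sequence models for this task.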
Pages: 221442-221453
Page count: 12
Related papers
50 records
  • [31] An attention-based hybrid deep learning model for EEG emotion recognition
    Yong Zhang
    Yidie Zhang
    Shuai Wang
    [J]. Signal, Image and Video Processing, 2023, 17 : 2305 - 2313
  • [32] An attention-based deep learning model for citywide traffic flow forecasting
    Zhou, Tao
    Huang, Bo
    Li, Rongrong
    Liu, Xiaoqian
    Huang, Zhihui
    [J]. INTERNATIONAL JOURNAL OF DIGITAL EARTH, 2022, 15 (01) : 323 - 344
  • [33] Attention-Based Distributed Deep Learning Model for Air Quality Forecasting
    Mengara, Axel Gedeon Mengara
    Park, Eunyoung
    Jang, Jinho
    Yoo, Younghwan
    [J]. SUSTAINABILITY, 2022, 14 (06)
  • [34] Attention-Based Deep Learning Model for Arabic Handwritten Text Recognition
    Gader T.B.A.
    Echi A.K.
    [J]. Machine Graphics and Vision, 2022, 31 (1-4): : 49 - 73
  • [35] Pretraining of attention-based deep learning potential model for molecular simulation
    Zhang, Duo
    Bi, Hangrui
    Dai, Fu-Zhi
    Jiang, Wanrun
    Liu, Xinzijian
    Zhang, Linfeng
    Wang, Han
    [J]. NPJ COMPUTATIONAL MATERIALS, 2024, 10 (01)
  • [36] Attention-Based Explanation in a Deep Learning Model For Classifying Radiology Reports
    Putelli, Luca
    Gerevini, Alfonso E.
    Lavelli, Alberto
    Maroldi, Roberto
    Serina, Ivan
    [J]. ARTIFICIAL INTELLIGENCE IN MEDICINE (AIME 2021), 2021, : 367 - 372
  • [37] An Attention-Based Interactive Learning-to-Rank Model for Document Retrieval
    Zhang, Fan
    Chen, Wenyu
    Fu, Mingsheng
    Li, Fan
    Qu, Hong
    Yi, Zhang
    [J]. IEEE TRANSACTIONS ON SYSTEMS MAN CYBERNETICS-SYSTEMS, 2022, 52 (09): : 5770 - 5782
  • [38] AIST: An Interpretable Attention-Based Deep Learning Model for Crime Prediction
    Rayhan, Yeasir
    Hashem, Tanzima
    [J]. ACM TRANSACTIONS ON SPATIAL ALGORITHMS AND SYSTEMS, 2023, 9 (02)
  • [39] Attention-Based Deep Learning Model for Image Desaturation of SDO/AIA
    Xinze Zhang
    Long Xu
    Zhixiang Ren
    Xuexin Yu
    Jia Li
    [J]. Research in Astronomy and Astrophysics, 2023, 23 (08) : 94 - 104
  • [40] SEQUENCE-LEVEL KNOWLEDGE DISTILLATION FOR MODEL COMPRESSION OF ATTENTION-BASED SEQUENCE-TO-SEQUENCE SPEECH RECOGNITION
    Mun'im, Raden Mu'az
    Inoue, Nakamasa
    Shinoda, Koichi
    [J]. 2019 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2019, : 6151 - 6155