Deep Feature Extraction from Trajectories for Transportation Mode Estimation

Cited by: 56
Authors
Endo, Yuki [1 ]
Toda, Hiroyuki [1 ]
Nishida, Kyosuke [1 ]
Kawanobe, Akihisa [1 ]
Affiliations
[1] NTT Serv Evolut Labs, Yokosuka, Kanagawa, Japan
Keywords
Movement trajectory; Deep learning; Transportation mode
DOI
10.1007/978-3-319-31750-2_5
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
This paper addresses the problem of feature extraction for estimating users' transportation modes from their movement trajectories. Previous studies have adopted supervised learning approaches and relied on engineers' skills to find effective features for accurate estimation. However, such hand-crafted features do not always work well, because human behaviors are diverse and trajectories include noise due to measurement error. To compensate for the shortcomings of hand-crafted features, we propose a method that automatically extracts additional features using a deep neural network (DNN). So that a DNN can easily handle input trajectories, our method converts the raw trajectory data structure into an image data structure while maintaining effective spatio-temporal information. A classification model is constructed in a supervised manner using both the deep features and the hand-crafted features. We demonstrate the effectiveness of the proposed method through several experiments on two real datasets, including accuracy comparisons with previous methods and feature visualization.
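The key preprocessing step described in the abstract, converting a raw trajectory into an image-like structure that a DNN can consume, can be illustrated as a simple 2D rasterization of GPS points into a fixed-size grid. This is a minimal sketch under assumed details: the function name, the binning scheme, and the count normalization are hypothetical, and the paper's actual conversion preserves additional spatio-temporal information beyond plain point counts.

```python
import numpy as np

def trajectory_to_image(points, bins=32):
    """Rasterize a sequence of (lat, lon) points into a bins x bins grid.

    An illustrative sketch of the trajectory-to-image idea, not the
    authors' exact conversion procedure.
    """
    pts = np.asarray(points, dtype=float)
    lat, lon = pts[:, 0], pts[:, 1]
    # Use the trajectory's own bounding box so every trajectory,
    # regardless of scale, fills the grid. A tiny epsilon keeps the
    # maximum coordinate inside the last bin.
    img, _, _ = np.histogram2d(
        lat, lon, bins=bins,
        range=[[lat.min(), lat.max() + 1e-9],
               [lon.min(), lon.max() + 1e-9]])
    # Normalize counts to [0, 1] for a stable DNN input scale.
    return img / img.max()
```

The resulting array can then be fed to a convolutional network (or, as in earlier DNN work, flattened for a fully connected autoencoder), alongside any hand-crafted features such as speed or acceleration statistics.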
Pages: 54-66
Page count: 13
Related Papers
(50 records in total)
  • [41] Estimator: An Effective and Scalable Framework for Transportation Mode Classification Over Trajectories
    Hu, Danlei
    Fang, Ziquan
    Fang, Hanxi
    Li, Tianyi
    Shen, Chunhui
    Chen, Lu
    Gao, Yunjun
    IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 2024, : 15562 - 15573
  • [42] Deep Feature Extraction for Panoramic Image Stitching
    Van-Dung Hoang
    Diem-Phuc Tran
    Nguyen Gia Nhu
    The-Anh Pham
    Van-Huy Pham
    INTELLIGENT INFORMATION AND DATABASE SYSTEMS (ACIIDS 2020), PT II, 2020, 12034 : 141 - 151
  • [43] A deep learning approach to cardiovascular disease classification using empirical mode decomposition for ECG feature extraction
    Li, Ya
    Luo, Jing-hao
    Dai, Qing-yun
    Eshraghian, Jason K.
    Ling, Bingo Wing-Kuen
    Zheng, Ci-yan
    Wang, Xiao-li
    BIOMEDICAL SIGNAL PROCESSING AND CONTROL, 2023, 79
  • [44] A deep feature extraction method for bearing fault diagnosis based on empirical mode decomposition and kernel function
    Wang, Fengtao
    Deng, Gang
    Liu, Chenxi
    Su, Wensheng
    Han, Qingkai
    Li, Hongkun
    ADVANCES IN MECHANICAL ENGINEERING, 2018, 10 (09)
  • [45] Deep Feature Extraction for Face Liveness Detection
    Sengur, Abdulkadir
    Akhtar, Zahid
    Akbulut, Yaman
    Ekici, Sami
    Budak, Umit
    2018 INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND DATA PROCESSING (IDAP), 2018,
  • [46] Discriminative Feature Extraction with Deep Neural Networks
    Stuhlsatz, Andre
    Lippel, Jens
    Zielke, Thomas
    2010 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS IJCNN 2010, 2010,
  • [47] UNSUPERVISED DEEP FEATURE EXTRACTION OF HYPERSPECTRAL IMAGES
    Romero, Adriana
    Gatta, Carlo
    Camps-Valls, Gustavo
    2014 6TH WORKSHOP ON HYPERSPECTRAL IMAGE AND SIGNAL PROCESSING: EVOLUTION IN REMOTE SENSING (WHISPERS), 2014,
  • [48] Deep Feature Extraction in Intrusion Detection System
    Wang, Anbang
    Gong, Xinyu
    Lu, Jialiang
    4TH IEEE INTERNATIONAL CONFERENCE ON SMART CLOUD (SMARTCLOUD 2019) / 3RD INTERNATIONAL SYMPOSIUM ON REINFORCEMENT LEARNING (ISRL 2019), 2019, : 104 - 109
  • [49] Studies on mode feature extraction and source range and depth estimation with a single hydrophone based on the dispersion characteristic
    Li Kun
    Fang Shi-Liang
    An Liang
    ACTA PHYSICA SINICA, 2013, 62 (09)
  • [50] Feature extraction for Facial Age Estimation: A Survey
    Dhimar, Twisha
    Mistree, Kinjal
    PROCEEDINGS OF THE 2016 IEEE INTERNATIONAL CONFERENCE ON WIRELESS COMMUNICATIONS, SIGNAL PROCESSING AND NETWORKING (WISPNET), 2016, : 2243 - 2248