Framework Comparison of Neural Networks for Automated Counting of Vehicles and Pedestrians

Cited by: 0
Authors
Lalangui, Galo [2 ]
Cordero, Jorge [2 ]
Ruiz-Vivanco, Omar [2 ]
Barba-Guaman, Luis [2 ]
Guerrero, Jessica [3 ]
Farias, Fatima [3 ]
Rivas, Wilmer [3 ]
Loja, Nancy [3 ]
Heredia, Andres [1 ]
Barros-Gavilanes, Gabriel [1 ]
Affiliations
[1] Univ Azuay, LIDI, Av 24 Mayo 7-77, Cuenca 010204, Ecuador
[2] Univ Tecn Particular Loja, Loja 1101608, Ecuador
[3] Univ Tecn Machala, Machala, Ecuador
Source
APPLICATIONS OF COMPUTATIONAL INTELLIGENCE, COLCACI 2019 | 2019 / Vol. 1096
Keywords
Convolutional Neural Networks; Learning transfer; Automatic counter; Classification; Tracking; Single shot detector; Mobilenet; Recognition
DOI
10.1007/978-3-030-36211-9_2
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
This paper presents a comparison of three neural network frameworks used to perform volumetric counts in an automated and continuous way. In addition to cars, the application counts pedestrians. The frameworks used are: SSD Mobilenet re-trained, SSD Mobilenet pre-trained, and GoogLeNet pre-trained. The evaluation data set has a total duration of 60 min and comes from three different cameras. Images from the real deployment videos are included during training to enrich the detectable cases. Traditional detection models applied to vehicle counting systems usually provide high values for cars seen from the front; however, when the observer or camera is to the side, some models yield lower detection and classification values. A new data set with fewer classes reaches performance values similar to those of methods trained on default data sets. Results show that for the class cars, recall and precision values are 0.97 and 0.90 respectively in the best case, using a pre-trained model, while for the class people a re-trained model provides better results, with precision and recall values of 1 and 0.82.
Pages: 16-28
Page count: 13