Scene adaptation in adverse conditions: a multi-sensor fusion framework for roadside traffic perception

Cited by: 0
Authors
Li, Kong [1 ]
Dai, Zhe [2 ]
Zuo, Chen [2 ]
Wang, Xuan [3 ]
Cui, Hua [4 ]
Song, Huansheng [1 ]
Cui, Mengying [2 ]
Affiliations
[1] Changan Univ, Sch Informat Engn, Xian, Peoples R China
[2] Changan Univ, Sch Transportat Engn, Middle Sect Nan Erhuan Rd, Xian, Shaanxi, Peoples R China
[3] Yantai Univ, Sch Comp & Control Engn, Yantai, Peoples R China
[4] Changan Univ, Sch Future Transportat, Xian, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
multi-sensor fusion; scene adaptation; traffic parameter estimation; traffic perception; OBJECT DETECTION; RADAR; VEHICLE;
DOI
10.1080/15472450.2024.2390844
CLC classification
U [Transportation];
Discipline codes
08; 0823;
Abstract
Robust roadside traffic perception requires integrating the strengths of multi-source sensors under various adverse conditions, which is challenging but indispensable for formulating effective traffic management strategies. One limitation of existing radar-camera perception systems is that they focus on integrating multi-source information without directly considering scene information, leading to difficulties in achieving scene adaptive fusion. How to establish the connection between scene information and multi-source information is the key challenge to solving this problem. In this article, we propose a Scene adaptive Sensor Fusion (SSF) framework that characterizes scene information and integrates it into radar-camera fusion schemes, aiming to achieve high-quality roadside traffic perception. Specifically, we introduce a multi-source object association method that accurately associates multi-source sensor information on the roadside. We then utilize coding techniques to characterize the scene information, including visibility characterization regarding lighting and weather conditions, and road characterization regarding sensor viewpoint. By incorporating sensor and scene information into the fusion model, the SSF framework effectively establishes the connection between them. We evaluate the SSF framework on the Roadside Radar and Video Dataset (RRVD) and the Traffic flow Parameter Estimation Dataset (TPED), both collected from real-world traffic scenarios. Experiments demonstrate that SSF significantly improves vehicle detection accuracy under various adverse conditions compared to traditional single-source sensing methods and other state-of-the-art fusion techniques. Furthermore, vehicle trajectories based on SSF detection results enable accurate traffic parameter estimation, such as volume, speed, and density, in complex and dynamic environments.
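The abstract describes two steps that lend themselves to a worked illustration: scene-conditioned fusion of associated radar-camera detections, and traffic parameter estimation (volume, speed, density) from the resulting trajectories. The sketch below is illustrative only and not the paper's actual SSF model; the scene codes, weight table, and function names are hypothetical assumptions, and the density estimate simply applies the standard fundamental relation q = k·v.

```python
# Minimal sketch of scene-adaptive radar-camera fusion, assuming a
# hypothetical lookup of per-sensor weights keyed by a scene code.
# The real SSF framework learns this scene-to-fusion mapping; here a
# fixed table stands in for it purely for illustration.
SCENE_WEIGHTS = {
    # scene code -> (camera weight, radar weight); values are made up
    "clear_day": (0.7, 0.3),
    "night":     (0.3, 0.7),
    "fog":       (0.2, 0.8),
    "rain":      (0.4, 0.6),
}

def fuse_position(cam_xy, radar_xy, scene):
    """Scene-conditioned convex combination of an associated
    camera/radar detection pair (positions in a shared frame)."""
    wc, wr = SCENE_WEIGHTS[scene]
    return tuple(wc * c + wr * r for c, r in zip(cam_xy, radar_xy))

def traffic_params(vehicle_count, interval_s, speeds_mps):
    """Estimate volume (veh/h), space-mean speed (m/s), and density
    (veh/km) from fused trajectories, via the fundamental relation
    q = k * v (so k = q / v with consistent units)."""
    volume_vph = vehicle_count * 3600.0 / interval_s
    mean_speed = sum(speeds_mps) / len(speeds_mps)
    density_vpkm = volume_vph / (mean_speed * 3.6) if mean_speed else 0.0
    return volume_vph, mean_speed, density_vpkm
```

In poor visibility ("fog") the table shifts weight toward radar, so a camera detection at (0, 0) associated with a radar detection at (10, 10) fuses to (8, 8); the actual SSF framework would derive such weighting from its learned scene characterization rather than a lookup table.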
Pages: 21
Related Papers
50 records total
  • [1] A Communication Scene Recognition Framework Based on Deep Learning with Multi-Sensor Fusion
    Feng, Yufei
    Zhong, Xiaofeng
    Chen, Xinwei
    Zhou, Shidong
    China Communications, 2025, 22 (04) : 174 - 201
  • [2] A Spatial Alignment Framework Using Geolocation Cues for Roadside Multi-View Multi-Sensor Fusion
    Zhao, Zhiguo
    Li, Yong
    Chen, Yunli
    Zhang, Xiaoting
    Tian, Rui
    2023 IEEE 26TH INTERNATIONAL CONFERENCE ON INTELLIGENT TRANSPORTATION SYSTEMS, ITSC, 2023, : 3633 - 3640
  • [3] A multi-sensor traffic scene dataset with omnidirectional video
    Koschorrek, Philipp
    Piccini, Tommaso
    Oberg, Per
    Felsberg, Michael
    Nielsen, Lars
    Mester, Rudolf
    2013 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS (CVPRW), 2013, : 727 - 734
  • [4] Optimization Design of Joint Calibration for Roadside Multi-sensor Fusion
    She, Feng
    Yang, Guiyong
    Liu, Jianhu
    Wang, Ping
    Tongji Daxue Xuebao/Journal of Tongji University, 2024, 52 (11): : 1750 - 1757
  • [5] A novel multi-sensor hybrid fusion framework
    Du, Haoran
    Wang, Qi
    Zhang, Xunan
    Qian, Wenjun
    Wang, Jixin
    MEASUREMENT SCIENCE AND TECHNOLOGY, 2024, 35 (08)
  • [6] A Method of Lane Departure Identification Based on Roadside Multi-Sensor Fusion
    Liu, Pengfei
    Yu, Guizhen
    Zhou, Bin
    Li, Da
    Wang, Zhangyu
    CICTP 2020: ADVANCED TRANSPORTATION TECHNOLOGIES AND DEVELOPMENT-ENHANCING CONNECTIONS, 2020, : 190 - 201
  • [7] Real-Time Hybrid Multi-Sensor Fusion Framework for Perception in Autonomous Vehicles
    Jahromi, Babak Shahian
    Tulabandhula, Theja
    Cetin, Sabri
    SENSORS, 2019, 19 (20)
  • [8] Mobile robot localization by multi-sensor fusion and scene matching
    Yang, YB
    Tsui, HT
    INTELLIGENT ROBOTS AND COMPUTER VISION XV: ALGORITHMS, TECHNIQUES, ACTIVE VISION, AND MATERIALS HANDLING, 1996, 2904 : 298 - 309
  • [9] A Multi-Sensor Fusion Framework in 3-d
    Jain, Vishal
    Miller, Andrew C.
    Mundy, Joseph L.
    2013 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS (CVPRW), 2013, : 314 - 319
  • [10] Multi-Sensor Fusion Framework using Discriminative Autoencoders
    Das, Arup Kumar
    Kumar, Kriti
    Majumdar, Angshul
    Sahu, Saurabh
    Chandra, M. Girish
    29TH EUROPEAN SIGNAL PROCESSING CONFERENCE (EUSIPCO 2021), 2021, : 1351 - 1355