Robust Perception Under Adverse Conditions for Autonomous Driving Based on Data Augmentation

Cited by: 1
Authors
Zheng, Ziqiang [1 ]
Cheng, Yujie [1 ]
Xin, Zhichao [1 ]
Yu, Zhibin [1 ,2 ]
Zheng, Bing [1 ,2 ]
Affiliations
[1] Ocean Univ China, Fac Informat Sci & Engn, Sch Elect Informat Engn, Qingdao 266520, Peoples R China
[2] Ocean Univ China, Sanya Oceanog Inst, Key Lab Ocean Observat & Informat Hainan Prov, Sanya 572025, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Generative adversarial network; data augmentation; unpaired image-to-image translation; IMAGE-TO-IMAGE TRANSLATION; NETWORK;
DOI
10.1109/TITS.2023.3297318
Chinese Library Classification
TU [Building Science];
Subject Classification Code
0813 ;
Abstract
Many advanced deep learning-based autonomous systems have recently been deployed in autonomous vehicles. In general, such a system relies heavily on visual perception to recognize and localize dynamic objects of interest (e.g., pedestrians and cars) as well as indicative traffic signs and lights, so that the vehicle can maneuver safely. However, the performance of existing object recognition algorithms can degrade significantly under adverse and challenging scenarios such as rain, fog, and rainy nights: raindrops, light reflections, and low illumination pose great challenges to robust object recognition. Consequently, robust and accurate autonomous driving systems have attracted growing attention from the computer vision community. To achieve robust and accurate visual perception, we aim to build effective and efficient augmentation and fusion techniques for visual perception under various adverse conditions. Unpaired image-to-image (I2I) synthesis is integrated for visual perception enhancement and effective synthesis-based augmentation. In addition, we design a two-branch architecture that exploits information from both the original image and the enhanced image synthesized by I2I. We comprehensively and hierarchically investigate the performance improvements and limitations of the proposed system across visual recognition tasks and network backbones, together with an extensive experimental analysis of various adverse weather conditions. The experimental results demonstrate that the proposed system improves the ability of autonomous vehicles to perceive robustly and accurately under adverse weather conditions.
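The abstract names two technical components: synthesis-based augmentation via an unpaired I2I generator, and a two-branch network that fuses the original adverse-weather image with its I2I-enhanced counterpart. The following is a minimal sketch, not the authors' implementation, of how such a two-branch fusion could look in PyTorch; the ResNet-18 backbones, the concatenation-based fusion, and the classification head are illustrative assumptions, and the enhanced input is assumed to come from a separately trained unpaired I2I generator (e.g., a CycleGAN-style model).

    # Sketch only: a two-branch perception model in the spirit of the abstract.
    # Backbone choice, fusion scheme, and head are assumptions, not the paper's design.
    import torch
    import torch.nn as nn
    import torchvision.models as models

    class TwoBranchPerception(nn.Module):
        def __init__(self, num_classes: int = 10):
            super().__init__()
            # Branch 1: features from the original (rainy / foggy / night) image.
            self.branch_orig = models.resnet18()
            self.branch_orig.fc = nn.Identity()   # expose the 512-d feature vector
            # Branch 2: features from the I2I-enhanced ("clear-weather") image.
            self.branch_enh = models.resnet18()
            self.branch_enh.fc = nn.Identity()
            # Fuse the two 512-d feature vectors; a detection head would
            # replace this classification head in a full perception system.
            self.fusion = nn.Sequential(
                nn.Linear(512 * 2, 512),
                nn.ReLU(inplace=True),
                nn.Linear(512, num_classes),
            )

        def forward(self, x_orig: torch.Tensor, x_enh: torch.Tensor) -> torch.Tensor:
            f_orig = self.branch_orig(x_orig)     # (B, 512)
            f_enh = self.branch_enh(x_enh)        # (B, 512)
            return self.fusion(torch.cat([f_orig, f_enh], dim=1))

    if __name__ == "__main__":
        # In practice x_enh would be produced by the pretrained I2I generator
        # applied to x_orig; random tensors here only demonstrate the shapes.
        model = TwoBranchPerception(num_classes=10)
        x_orig = torch.randn(2, 3, 224, 224)
        x_enh = torch.randn(2, 3, 224, 224)
        print(model(x_orig, x_enh).shape)         # torch.Size([2, 10])

The same enhanced images can also serve as synthesis-based augmentation: training the recognition model on both original and I2I-translated samples is one plausible way to realize the augmentation described above.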
Pages: 13916-13929
Number of pages: 14
Related Papers
50 records in total
  • [21] Rethinking Data Augmentation for Robust LiDAR Semantic Segmentation in Adverse Weather
    Park, Junsung
    Kim, Kyungmin
    Shim, Hyunjung
    COMPUTER VISION - ECCV 2024, PT XVI, 2025, 15074 : 320 - 336
  • [22] Robust autonomous driving control using deep hybrid-learning network under rainy/snowy conditions
    Lee C.-Y.
    Khanum A.
    Sung T.-W.
    Multimedia Tools and Applications, 2024, 83 (41) : 89281 - 89295
  • [23] RFID based Vehicular Positioning System for Safe Driving Under Adverse Weather Conditions
    Avireni, Bhargav
    Chu, Yihang
    Kepros, Ethan
    Ettorre, Mauro
    Chahal, Premjeet
    2023 IEEE 73RD ELECTRONIC COMPONENTS AND TECHNOLOGY CONFERENCE, ECTC, 2023, : 2196 - 2200
  • [24] Enhanced Perception for Autonomous Driving Using Semantic and Geometric Data Fusion
    Florea, Horatiu
    Petrovai, Andra
    Giosan, Ion
    Oniga, Florin
    Varga, Robert
    Nedevschi, Sergiu
    SENSORS, 2022, 22 (13)
  • [25] Robust object detection under harsh autonomous-driving environments
    Kim, Youngjun
    Hwang, Hyekyoung
    Shin, Jitae
    IET IMAGE PROCESSING, 2022, 16 (04) : 958 - 971
  • [26] PROCESSING of MEASUREMENTS for ROBUST OPERATION under ADVERSE CONDITIONS
    Farrell, James L.
    PROCEEDINGS OF THE 24TH INTERNATIONAL TECHNICAL MEETING OF THE SATELLITE DIVISION OF THE INSTITUTE OF NAVIGATION (ION GNSS 2011), 2011, : 3435 - 3443
  • [27] Autonomous Driving Control Based on the Perception of a Lidar Sensor and Odometer
    Tsai, Jichiang
    Chang, Che-Cheng
    Ou, Yu-Cheng
    Sieh, Bing-Herng
    Ooi, Yee-Ming
    APPLIED SCIENCES-BASEL, 2022, 12 (15):
  • [28] Sensing, Perception and Decision for Deep Learning Based Autonomous Driving
    Yamashita, Takayoshi
    DISTRIBUTED, AMBIENT AND PERVASIVE INTERACTIONS: TECHNOLOGIES AND CONTEXTS, DAPI 2018, PT II, 2018, 10922 : 152 - 163
  • [29] Research on Autonomous Driving Perception based on Deep Learning Algorithm
    Zhou, Bolin
    Zheng, Jihu
    Chen, Chen
    Yin, Pei
    Zhai, Yang
    2019 INTERNATIONAL CONFERENCE ON IMAGE AND VIDEO PROCESSING, AND ARTIFICIAL INTELLIGENCE, 2019, 11321
  • [30] A Method of Lunar Autonomous Driving Perception Planning Based on Hybrid A*
    Hu, Tao
    Cao, Tao
    Zheng, Bo
    Qian, Zhouyuan
    Han, Fei
    He, Liang
    2024 3RD CONFERENCE ON FULLY ACTUATED SYSTEM THEORY AND APPLICATIONS, FASTA 2024, 2024, : 1447 - 1452