Robust Perception Under Adverse Conditions for Autonomous Driving Based on Data Augmentation

Cited: 1
|
Authors
Zheng, Ziqiang [1 ]
Cheng, Yujie [1 ]
Xin, Zhichao [1 ]
Yu, Zhibin [1 ,2 ]
Zheng, Bing [1 ,2 ]
Affiliations
[1] Ocean Univ China, Fac Informat Sci & Engn, Sch Elect Informat Engn, Qingdao 266520, Peoples R China
[2] Ocean Univ China, Sanya Oceanog Inst, Key Lab Ocean Observat & Informat Hainan Prov, Sanya 572025, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Generative adversarial network; data augmentation; unpaired image-to-image translation
DOI
10.1109/TITS.2023.3297318
CLC Classification
TU [Building Science];
Discipline Code
0813;
Abstract
Many advanced deep learning-based autonomous systems have recently been deployed in autonomous vehicles. In general, such a system relies heavily on visual perception to recognize and localize dynamic objects of interest (e.g., pedestrians and cars) as well as indicative traffic signs and lights, helping the vehicle maneuver safely. However, the performance of existing object recognition algorithms can degrade significantly under adverse and challenging scenarios, including rain, fog, and rainy nights. Raindrops, light reflections, and low illumination pose great challenges to robust object recognition. Thus, building a robust and accurate autonomous driving system has attracted growing attention from the computer vision community. To achieve robust and accurate visual perception, we aim to build effective and efficient augmentation and fusion techniques for visual perception under various adverse conditions. Unpaired image-to-image (I2I) synthesis is integrated for visual perception enhancement and effective synthesis-based augmentation. In addition, we design a two-branch architecture that exploits information from both the original image and the enhanced image synthesized by I2I. We comprehensively and hierarchically investigate the performance improvements and limitations of the proposed system across visual recognition tasks and network backbones, and include an extensive experimental analysis under various adverse weather conditions. The experimental results demonstrate that the proposed system can improve the ability of autonomous vehicles to perceive robustly and accurately under adverse weather conditions.
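The two-branch idea in the abstract can be sketched minimally: one branch processes the original adverse-weather image, the other the I2I-enhanced image, and their features are fused before the recognition head. The sketch below uses NumPy stand-ins for the backbones; the function names, pooling choice, and concatenation fusion are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def extract_features(image, weights):
    # Stand-in for a CNN backbone: global average pooling over the
    # spatial dimensions, followed by a linear projection.
    pooled = image.mean(axis=(0, 1))   # (C,) channel-wise average
    return weights @ pooled            # (D,) feature vector

def two_branch_fusion(original, enhanced, w_orig, w_enh):
    """Hypothetical sketch of a two-branch architecture: one branch
    sees the raw adverse-weather image, the other the I2I-enhanced
    version; features are concatenated for a downstream head."""
    f_orig = extract_features(original, w_orig)
    f_enh = extract_features(enhanced, w_enh)
    return np.concatenate([f_orig, f_enh])

rng = np.random.default_rng(0)
original = rng.random((32, 32, 3))   # adverse-weather input (H, W, C)
enhanced = rng.random((32, 32, 3))   # I2I-translated "clear" version
w = rng.random((8, 3))               # toy projection weights
fused = two_branch_fusion(original, enhanced, w, w)
print(fused.shape)                   # (16,) = two 8-d branches fused
```

In practice both branches would be full detection backbones and the fusion could happen at multiple feature scales; concatenation is simply the most direct way to let the head draw on whichever branch is more reliable under the current conditions.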
Pages: 13916 - 13929
Page count: 14
Related Papers
50 in total
  • [1] Perception-Friendly Video Enhancement for Autonomous Driving under Adverse Weather Conditions
    Lee, Younkwan
    Ko, Yeongmin
    Kim, Yechan
    Jeon, Moongu
    Proceedings - IEEE International Conference on Robotics and Automation, 2022, : 7760 - 7767
  • [3] Benchmarking Image Sensors Under Adverse Weather Conditions for Autonomous Driving
    Bijelic, Mario
    Gruber, Tobias
    Ritter, Werner
    2018 IEEE INTELLIGENT VEHICLES SYMPOSIUM (IV), 2018, : 1773 - 1779
  • [4] Latent Attention Augmentation for Robust Autonomous Driving Policies
    Cheng, Ran
    Agia, Christopher
    Shkurti, Florian
    Meger, David
    Dudek, Gregory
    2021 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2021, : 130 - 136
  • [5] Perception and sensing for autonomous vehicles under adverse weather conditions: A survey
    Zhang, Yuxiao
    Carballo, Alexander
    Yang, Hanting
    Takeda, Kazuya
    ISPRS JOURNAL OF PHOTOGRAMMETRY AND REMOTE SENSING, 2023, 196 : 146 - 177
  • [6] Robust Environment Perception for the Audi Autonomous Driving Cup
    Kuhnt, Florian
    Pfeiffer, Micha
    Zimmer, Peter
    Zimmerer, David
    Gomer, Jan-Markus
    Kaiser, Vitali
    Kohlhaas, Ralf
    Zoellner, J. Marius
    2016 IEEE 19TH INTERNATIONAL CONFERENCE ON INTELLIGENT TRANSPORTATION SYSTEMS (ITSC), 2016, : 1424 - 1431
  • [7] Robust Lane-Mark Extraction for Autonomous Driving Under Complex Real Conditions
    Xuan, Hanyu
    Liu, Hongzhe
    Yuan, Jiazheng
    Li, Qing
    IEEE ACCESS, 2018, 6 : 5749 - 5765
  • [8] Efficientdet Based Visial Perception for Autonomous Driving
    Lyu, Chenxi
    Fan, Xinwen
    Qiu, Zhenyu
    Chen, Jun
    Lin, Jingsong
    Dong, Chen
    2023 8TH INTERNATIONAL CONFERENCE ON CLOUD COMPUTING AND BIG DATA ANALYTICS, ICCCBDA, 2023, : 443 - 447
  • [9] Autonomous Driving Architectures, Perception and Data Fusion: A Review
    Velasco-Hernandez, Gustavo
    Yeong, De Jong
    Barry, John
    Walsh, Joseph
    2020 IEEE 16TH INTERNATIONAL CONFERENCE ON INTELLIGENT COMPUTER COMMUNICATION AND PROCESSING (ICCP 2020), 2020, : 315 - 321
  • [10] SID: Stereo Image Dataset for Autonomous Driving in Adverse Conditions
    El-Shair, Zaid A.
    Abu-raddah, Abdalmalek
    Cofield, Aaron
    Alawneh, Hisham
    Aladem, Mohamed
    Hamzeh, Yazan
    Rawashdeh, Samir A.
    IEEE NATIONAL AEROSPACE AND ELECTRONICS CONFERENCE, NAECON 2024, 2024, : 403 - 408