TLGAN: Conditional Style-Based Traffic Light Generation with Generative Adversarial Networks

Citations: 0
Authors
Wang, Danfeng [1 ,2 ]
Ma, Xin [1 ]
Affiliations
[1] Shandong Univ, Qingdao, Peoples R China
[2] Qcraft, Beijing, Peoples R China
Keywords
computer vision; generative adversarial networks; deep learning; autonomous driving; traffic light;
DOI
10.1109/HPBDIS53214.2021.9658470
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Traffic light recognition plays a vital role in intelligent transportation systems and is a critical perception module for autonomous vehicles. Compared with cars, pedestrians, and other targets, traffic lights are varied and complex, and their states change constantly, all of which makes recognition difficult. The performance of a deep learning-based vision system largely depends on how rich in scenes its training dataset is. However, it is difficult to collect data for rare scenarios such as extreme weather, flashing lights, and inactive lights, which leads to data imbalance and poor model generalization. This paper proposes TL-GAN, a conditional style-based generative adversarial network, to generate images of traffic lights that are underrepresented in collected data, especially yellow, inactive, and flashing lights. Our model uses style mixing to separate the background and the foreground of the traffic light and applies a new template loss that forces the model to generate traffic light images with the same background but belonging to different classes. To verify the validity of the generated data, we use a traffic light classification model based on time series. Experimental results show that the average precision (AP) values of the three categories improve when generated images are added to training, demonstrating the validity of the generated data.
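The template loss described in the abstract constrains pairs of generated images that share a background but differ in class label. The paper's exact formulation is not given here, so the following is only a hypothetical sketch in PyTorch: it assumes a foreground mask covering the light housing and penalizes the masked-out (background) difference between two such generated samples.

```python
import torch


def template_loss(img_a: torch.Tensor, img_b: torch.Tensor,
                  fg_mask: torch.Tensor) -> torch.Tensor:
    """Hypothetical 'template loss' sketch (not the authors' code):
    img_a, img_b are generated images sharing a background latent but
    conditioned on different classes; fg_mask is 1 over the foreground
    (the traffic light itself) and 0 over the background."""
    bg_mask = 1.0 - fg_mask              # keep only background pixels
    diff = (img_a - img_b) * bg_mask     # ignore class-specific foreground
    return diff.abs().mean()             # L1 penalty on background change


# Usage sketch with dummy tensors (shapes are illustrative only).
a = torch.rand(2, 3, 64, 32)             # e.g. a "red" batch
b = torch.rand(2, 3, 64, 32)             # same backgrounds, "yellow" labels
mask = torch.zeros(2, 1, 64, 32)
mask[:, :, 20:44, 8:24] = 1.0            # assumed foreground region
loss = template_loss(a, b, mask)
```

Minimizing such a term during training would push the generator to vary only the masked foreground across class conditions, which matches the stated goal of identical backgrounds across different light states.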
Pages: 192-195
Page count: 4
Related Papers (50 total)
  • [1] Conditional Style-Based Generative Adversarial Networks for Renewable Scenario Generation
    Yuan, Ran
    Wang, Bo
    Sun, Yeqi
    Song, Xuanning
    Watada, Junzo
    IEEE TRANSACTIONS ON POWER SYSTEMS, 2023, 38 (02) : 1281 - 1296
  • [2] A Style-Based Generator Architecture for Generative Adversarial Networks
    Karras, Tero
    Laine, Samuli
    Aila, Timo
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2021, 43 (12) : 4217 - 4228
  • [3] A Style-Based Generator Architecture for Generative Adversarial Networks
    Karras, Tero
    Laine, Samuli
    Aila, Timo
    2019 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2019), 2019, : 4396 - 4405
  • [4] Microstructure synthesis using style-based generative adversarial networks
    Fokina, Daria
    Muravleva, Ekaterina
    Ovchinnikov, George
    Oseledets, Ivan
    PHYSICAL REVIEW E, 2020, 101 (04)
  • [5] Style-based quantum generative adversarial networks for Monte Carlo events
    Bravo-Prieto, Carlos
    Baglio, Julien
    Ce, Marco
    Francis, Anthony
    Grabowska, Dorota M.
    Carrazza, Stefano
    QUANTUM, 2022, 6
  • [6] Traffic trajectory generation via conditional Generative Adversarial Networks for transportation Metaverse
    Kong, Xiangjie
    Bi, Junhui
    Chen, Qiao
    Shen, Guojiang
    Chin, Tachia
    Pau, Giovanni
    APPLIED SOFT COMPUTING, 2024, 160
  • [7] Gait generation of human based on the conditional generative adversarial networks
    Wu X.
    Deng W.
    Niu X.
    Jia Z.
    Liu S.
    Yi Qi Yi Biao Xue Bao/Chinese Journal of Scientific Instrument, 2020, 41 (01): : 129 - 137
  • [8] Opinion on enhancing diversity in photovoltaic scenario generation using weather data simulating by style-based generative adversarial networks
    Deng, Jianbin
    Zhang, Jing
    FRONTIERS IN ENERGY RESEARCH, 2024, 12
  • [9] 3D Segmentation Guided Style-Based Generative Adversarial Networks for PET Synthesis
    Zhou, Yang
    Yang, Zhiwen
    Zhang, Hui
    Chang, Eric I-Chao
    Fan, Yubo
    Xu, Yan
    IEEE TRANSACTIONS ON MEDICAL IMAGING, 2022, 41 (08) : 2092 - 2104
  • [10] Virtual Face Animation Generation Based on Conditional Generative Adversarial Networks
    Zeng, Jia
    He, Xiangzhen
    Li, Shuaishuai
    Wu, Lindong
    Wang, Jiaxin
    2022 INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, COMPUTER VISION AND MACHINE LEARNING (ICICML), 2022, : 580 - 583