TLGAN: Conditional Style-Based Traffic Light Generation with Generative Adversarial Networks

Cited by: 0
Authors
Wang, Danfeng [1 ,2 ]
Ma, Xin [1 ]
Affiliations
[1] Shandong Univ, Qingdao, Peoples R China
[2] Qcraft, Beijing, Peoples R China
Keywords
computer vision; generative adversarial networks; deep learning; autonomous driving; traffic light;
DOI
10.1109/HPBDIS53214.2021.9658470
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline codes
081104; 0812; 0835; 1405
Abstract
Traffic light recognition plays a vital role in intelligent transportation systems and is a critical perception module for autonomous vehicles. Compared with cars, pedestrians, and other targets, traffic lights are varied and complex, and their state changes constantly, which makes recognition difficult. The performance of a deep learning-based vision system depends largely on how rich in scenes its training dataset is. However, it is difficult to collect data for rare scenarios such as extreme weather, flashing lights, and non-working lights, resulting in data imbalance and poor model generalization. This paper proposes TL-GAN, a conditional style-based generative adversarial network, to generate images of traffic lights that are underrepresented in real data, especially yellow, inactive, and flashing traffic lights. The model uses style mixing to separate the background and the foreground of the traffic light and applies a new template loss to force the model to generate traffic light images with the same background but belonging to different classes. To verify the validity of the generated data, we use a traffic light classification model based on time series. Experimental results show that the AP (average precision) values of the three categories improve when generated images are added, demonstrating the validity of the generated data.
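The abstract describes a template loss that penalizes background differences between images generated from the same latent code but different class conditions. The paper's exact formulation is not reproduced here; the sketch below is a hypothetical reading of that idea as a masked background MSE between two generated images, with `template_loss` and the toy images being illustrative names, not the authors' code.

```python
import numpy as np

def template_loss(img_a, img_b, background_mask):
    """Hypothetical template loss: mean squared difference restricted to
    the background region shared by two generated images (same latent,
    different class condition). Foreground (lamp) pixels are masked out."""
    diff = (img_a - img_b) * background_mask  # zero out the lamp region
    return float(np.mean(diff ** 2))

# Toy example: two 4x4 "images" with identical backgrounds but a
# different 2x2 lamp region (e.g. lit vs. inactive light).
rng = np.random.default_rng(0)
bg = rng.random((4, 4))
img_a = bg.copy()
img_b = bg.copy()
img_a[1:3, 1:3] = 1.0   # lamp on
img_b[1:3, 1:3] = 0.0   # lamp off
mask = np.ones((4, 4))
mask[1:3, 1:3] = 0.0    # background mask excludes the lamp

print(template_loss(img_a, img_b, mask))  # 0.0 - backgrounds match exactly
```

Minimizing such a loss during training would push the generator to keep the background fixed while only the class-dependent foreground (the light state) changes, which matches the paper's stated goal of same-background, different-class samples.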
Pages: 192-195 (4 pages)