Negative-Aware Training: Be Aware of Negative Samples

Cited by: 1
Authors
Li, Xin [1 ]
Jia, Xiaodong [1 ]
Jing, Xiao-Yuan [1 ,2 ,3 ]
Affiliations
[1] Wuhan Univ, Sch Comp Sci, Wuhan, Hubei, Peoples R China
[2] Nanjing Univ Posts & Telecommun, Sch Automat, Nanjing, Jiangsu, Peoples R China
[3] Guangdong Univ Petrochem Technol, Sch Comp, Maoming, Guangdong, Peoples R China
Keywords
DOI
10.3233/FAIA200228
CLC number
TP18 [Artificial intelligence theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Negative samples, whose class labels are not included in the training set, are commonly classified into random classes with high confidence, which severely limits the applications of traditional models. To solve this problem, we propose an approach called Negative-Aware Training (NAT), which introduces negative samples and trains on them along with the original training set. The objective function of NAT forces the classifier to output equal probability for each class on negative samples, while other settings stay unchanged. Moreover, we introduce NAT into GANs and propose NAT-GAN, in which the discriminator distinguishes both generated samples and negative samples. With the assistance of NAT, NAT-GAN can find more accurate decision boundaries and thus converges faster and more steadily. Experimental results on synthetic and real-world datasets demonstrate that: 1) NAT achieves better performance on negative samples as measured by our proposed negative confidence rate metric. 2) NAT-GAN obtains better quality scores than several traditional GANs and achieves a state-of-the-art Inception Score (9.2) on CIFAR-10. Our demo and code are available at https://natpaper.github.io.
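The NAT objective described in the abstract can be sketched as a per-sample loss: standard cross-entropy on labeled samples, and cross-entropy against a uniform target (probability 1/K for each of the K known classes) on negative samples. The following is a minimal NumPy illustration under that reading of the abstract; the function names and the uniform-target formulation are our assumptions, not the authors' released code.

```python
import numpy as np

def softmax(logits):
    # numerically stable softmax over the class axis
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def nat_loss(logits, labels, is_negative):
    """Sketch of the Negative-Aware Training objective.

    Labeled samples use standard cross-entropy against their class label;
    negative samples (whose true class lies outside the label set) are
    pushed toward the uniform distribution over the K known classes.
    """
    probs = softmax(logits)
    n, k = probs.shape
    losses = np.empty(n)
    for i in range(n):
        if is_negative[i]:
            # cross-entropy against the uniform target 1/K:
            # -(1/K) * sum_c log p_c, i.e. the mean negative log-probability
            losses[i] = -np.mean(np.log(probs[i]))
        else:
            losses[i] = -np.log(probs[i, labels[i]])
    return losses.mean()
```

For a negative sample the loss is minimized (at log K) exactly when the classifier outputs equal probability for every class, matching the behavior the abstract describes; confident predictions on negative samples are penalized.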
Pages: 1269-1275 (7 pages)