Improving Fractal Pre-training

Cited: 4
Authors
Anderson, Connor [1 ]
Farrell, Ryan [1 ]
Affiliations
[1] Brigham Young Univ, Provo, UT 84602 USA
Funding
U.S. National Science Foundation
DOI
10.1109/WACV51458.2022.00247
CLC Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
The deep neural networks used in modern computer vision systems require enormous image datasets to train them. These carefully curated datasets typically have a million or more images, across a thousand or more distinct categories. The process of creating and curating such a dataset is a monumental undertaking, demanding extensive effort and labelling expense and necessitating careful navigation of technical and social issues such as label accuracy, copyright ownership, and content bias. What if we had a way to harness the power of large image datasets but with few or none of the major issues and concerns currently faced? This paper extends the recent work of Kataoka et al. [15], proposing an improved pre-training dataset based on dynamically generated fractal images. Challenging issues with large-scale image datasets become points of elegance for fractal pre-training: perfect label accuracy at zero cost; no need to store or transmit large image archives; no concerns about privacy, demographic bias, or inappropriate content, as no humans are pictured; a limitless supply and diversity of images; and the images are free/open-source. Perhaps surprisingly, avoiding these difficulties imposes only a small penalty in performance. Leveraging a newly proposed pre-training task, multi-instance prediction, our experiments demonstrate that fine-tuning a network pre-trained using fractals attains 92.7-98.1% of the accuracy of an ImageNet pre-trained network. Our code is publicly available.
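The abstract refers to dynamically generated fractal images, which avoid storing or transmitting image archives. As a rough illustration of how such images can be produced, the sketch below draws a fractal from a random affine iterated function system (IFS) using the chaos game. The sampling ranges, contraction cap, and rendering details are illustrative assumptions, not the authors' actual generation pipeline.

```python
import numpy as np

def random_ifs(num_maps=4, rng=None):
    """Sample a random affine IFS: a set of maps x -> A @ x + b.

    The spectral norm of each A is capped so every map is contractive
    (an assumption made here so the rendered orbit stays bounded).
    """
    rng = rng or np.random.default_rng()
    maps = []
    for _ in range(num_maps):
        A = rng.uniform(-1.0, 1.0, size=(2, 2))
        A *= 0.8 / max(np.linalg.norm(A, 2), 0.8)  # enforce ||A|| <= 0.8
        b = rng.uniform(-0.5, 0.5, size=2)
        maps.append((A, b))
    return maps

def render_fractal(maps, size=256, num_points=100_000, rng=None):
    """Render a binary fractal image with the 'chaos game':
    iterate a randomly chosen map and plot the visited points."""
    rng = rng or np.random.default_rng()
    x = np.zeros(2)
    pts = []
    for i in range(num_points):
        A, b = maps[rng.integers(len(maps))]
        x = A @ x + b
        if i > 20:  # skip burn-in iterations before the orbit reaches the attractor
            pts.append(x.copy())
    pts = np.asarray(pts)

    # Normalize the orbit into pixel coordinates and rasterize.
    lo, hi = pts.min(axis=0), pts.max(axis=0)
    scaled = (pts - lo) / np.maximum(hi - lo, 1e-8) * (size - 1)
    img = np.zeros((size, size), dtype=np.uint8)
    img[scaled[:, 1].astype(int), scaled[:, 0].astype(int)] = 255
    return img

# Each randomly sampled IFS can serve as one synthetic "class";
# every call renders a fresh image for that class at zero labelling cost.
image = render_fractal(random_ifs())
```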
Pages: 2412-2421
Page count: 10
Related Papers
50 records in total
  • [1] Improving fault localization with pre-training
    Zhang, Zhuo
    Li, Ya
    Xue, Jianxin
    Mao, Xiaoguang
    [J]. FRONTIERS OF COMPUTER SCIENCE, 2024, 18 (01)
  • [2] Improving fault localization with pre-training
    Zhang, Zhuo
    Li, Ya
    Xue, Jianxin
    Mao, Xiaoguang
    [J]. Frontiers of Computer Science, 2024, 18
  • [3] Improving Monocular Depth Estimation by Semantic Pre-training
    Rottmann, Peter
    Posewsky, Thorbjorn
    Milioto, Andres
    Stachniss, Cyrill
    Behley, Jens
    [J]. 2021 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2021, : 5916 - 5923
  • [4] Improving the Sample Efficiency of Pre-training Language Models
    Berend, Gabor
    [J]. ERCIM NEWS, 2024, (136): 38 - 40
  • [5] SpanBERT: Improving Pre-training by Representing and Predicting Spans
    Joshi, Mandar
    Chen, Danqi
    Liu, Yinhan
    Weld, Daniel S.
    Zettlemoyer, Luke
    Levy, Omer
    [J]. TRANSACTIONS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, 2020, 8 : 64 - 77
  • [6] Improving Reinforcement Learning Pre-Training with Variational Dropout
    Blau, Tom
    Ott, Lionel
    Ramos, Fabio
    [J]. 2018 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2018, : 4115 - 4122
  • [7] PRE-TRAINING WITH FRACTAL IMAGES FACILITATES LEARNED IMAGE QUALITY ESTIMATION
    Silbernagel, Malte
    Wiegand, Thomas
    Eisert, Peter
    Bosse, Sebastian
    [J]. 2023 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, ICIP, 2023, : 2625 - 2629
  • [8] Improving Knowledge Tracing via Pre-training Question Embeddings
    Liu, Yunfei
    Yang, Yang
    Chen, Xianyu
    Shen, Jian
    Zhang, Haifeng
    Yu, Yong
    [J]. PROCEEDINGS OF THE TWENTY-NINTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2020, : 1577 - 1583
  • [9] Improving AMR Parsing with Sequence-to-Sequence Pre-training
    Xu, Dongqin
    Li, Junhui
    Zhu, Muhua
    Zhang, Min
    Zhou, Guodong
    [J]. PROCEEDINGS OF THE 2020 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP), 2020, : 2501 - 2511
  • [10] Improving negation detection with negation-focused pre-training
    Truong, Hung Thinh
    Baldwin, Timothy
    Cohn, Trevor
    Verspoor, Karin
    [J]. NAACL 2022: THE 2022 CONFERENCE OF THE NORTH AMERICAN CHAPTER OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS: HUMAN LANGUAGE TECHNOLOGIES, 2022, : 4188 - 4193