Mixed Autoencoder for Self-supervised Visual Representation Learning

Cited by: 10
Authors
Chen, Kai [1 ]
Liu, Zhili [1 ,2 ]
Hong, Lanqing [2 ]
Xu, Hang [2 ]
Li, Zhenguo [2 ]
Yeung, Dit-Yan [1 ]
Affiliations
[1] Hong Kong Univ Sci & Technol, Hong Kong, Peoples R China
[2] Huawei Noah's Ark Lab, Montreal, PQ, Canada
DOI
10.1109/CVPR52729.2023.02178
CLC number
TP18 [Artificial Intelligence Theory];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Masked Autoencoder (MAE) has demonstrated superior performance on various vision tasks via randomly masking image patches and reconstructing them. However, effective data augmentation strategies for MAE remain an open question, unlike in contrastive learning, where augmentation plays a central role. This paper studies the prevailing mixing augmentation for MAE. We first demonstrate that naive mixing in fact degrades model performance due to the increase of mutual information (MI). To address this, we propose homologous recognition, an auxiliary pretext task, not only to mitigate the MI increase by explicitly requiring each patch to recognize its homologous patches, but also to perform object-aware self-supervised pre-training for better downstream dense perception performance. With extensive experiments, we demonstrate that our proposed Mixed Autoencoder (MixedAE) achieves state-of-the-art transfer results among masked image modeling (MIM) augmentations on different downstream tasks with significant efficiency. Specifically, our MixedAE outperforms MAE by +0.3% accuracy, +1.7 mIoU and +0.9 AP on ImageNet-1K, ADE20K and COCO respectively with a standard ViT-Base. Moreover, MixedAE surpasses iBOT, a strong MIM method combined with instance discrimination, while accelerating training by 2x. To the best of our knowledge, this is the first work to consider mixing for MIM from the perspective of pretext task design. Code will be made available.
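To make the mixing idea concrete, the following is a minimal NumPy sketch of patch-level mixing between two images, together with the per-patch source map that a homologous-recognition head could be supervised against. This is an illustration only, not the authors' implementation: the function name `mix_patches`, the patch size, and the mixing ratio are all assumptions introduced here.

```python
import numpy as np

def mix_patches(img_a, img_b, patch=4, ratio=0.5, seed=0):
    """Mix square patches of two equally sized images.

    Returns the mixed image and a boolean grid `from_a` marking which
    patches were taken from img_a. In MixedAE-style training, each patch
    would be asked to recognize which other patches share its source
    image (its "homologous" patches); `from_a` plays the role of that
    source label here.
    """
    H, W, C = img_a.shape
    gh, gw = H // patch, W // patch
    rng = np.random.default_rng(seed)
    from_a = rng.random((gh, gw)) < ratio  # per-patch source choice
    mixed = img_b.copy()
    for i in range(gh):
        for j in range(gw):
            if from_a[i, j]:
                mixed[i * patch:(i + 1) * patch,
                      j * patch:(j + 1) * patch] = \
                    img_a[i * patch:(i + 1) * patch,
                          j * patch:(j + 1) * patch]
    return mixed, from_a
```

The sketch mixes at the patch grid rather than pixel-wise (as Mixup would), matching the ViT tokenization the paper builds on; the source map is exactly the extra supervision signal that naive mixing discards.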
Pages: 22742-22751 (10 pages)