Fully Unsupervised Domain-Agnostic Image Retrieval

Authors
Zheng, Ziqiang [1 ]
Ren, Hao [2 ]
Wu, Yang [3 ]
Zhang, Weichuan [4 ]
Lu, Hong [2 ]
Yang, Yang [1 ]
Shen, Heng Tao [1 ]
Affiliations
[1] Univ Elect Sci & Technol China, Sch Comp Sci & Engn, Chengdu 611731, Peoples R China
[2] Fudan Univ, Sch Comp Sci, Shanghai Key Lab Intelligent Informat Proc, Shanghai 200438, Peoples R China
[3] Tencent AI Lab, Shenzhen 518100, Peoples R China
[4] Griffith Univ, Inst Integrated & Intelligent Syst, Brisbane, Qld 4222, Australia
Funding
National Natural Science Foundation of China;
Keywords
One-shot image translation; unsupervised learning; image retrieval; domain adaptation;
DOI
10.1109/TCSVT.2023.3335147
CLC Classification
TM [Electrical Engineering]; TN [Electronics and Communication Technology];
Discipline Codes
0808 ; 0809 ;
Abstract
Recent research in cross-domain image retrieval has focused on addressing two challenging issues: handling domain variations in the data and dealing with the lack of sufficient training labels. However, these problems have often been studied separately, limiting the practicality and significance of the research outcomes. The existing cross-domain setting is also restricted to cases where domain labels are known during training, and all samples have semantic category information or instance correspondences. In this paper, we propose a novel approach to address a more general and practical problem: fully unsupervised domain-agnostic image retrieval under the domain-unknown setting, where no annotations are provided. Our approach tackles both the domain variation and missing labels challenges simultaneously. We introduce a new fully unsupervised One-Shot Synthesis-based Contrastive learning method (termed OSSCo) to project images from different data distributions into a shared feature space for similarity measurement. To handle the domain-unknown setting, we propose One-Shot unpaired image-to-image Translation (OST) between a randomly selected one-shot image and the rest of the training images. By minimizing the global distance between the original images and the generated images from OST, the model learns domain-agnostic representations. To address the label-unknown setting, we employ contrastive learning with a synthesis-based transform module from the OST training. This allows for effective representation learning without any annotations or external constraints. We evaluate our proposed method on diverse datasets, and the results demonstrate its effectiveness. Notably, our approach achieves comparable performance to current state-of-the-art supervised methods.
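The contrastive objective sketched in the abstract — pulling each original image toward its OST-synthesized counterpart while pushing it away from other images — can be illustrated with a minimal InfoNCE-style loss in NumPy. This is an illustrative sketch only, not the paper's implementation: the function name, temperature value, and the stand-in "synthesized" features are all assumptions.

```python
import numpy as np

def info_nce_loss(z_orig, z_synth, temperature=0.1):
    """InfoNCE loss treating (original, synthesized) embeddings as positive pairs.

    z_orig, z_synth: (N, D) feature arrays; row i of each is a positive pair.
    """
    # L2-normalize so the dot product is cosine similarity
    z_orig = z_orig / np.linalg.norm(z_orig, axis=1, keepdims=True)
    z_synth = z_synth / np.linalg.norm(z_synth, axis=1, keepdims=True)
    logits = z_orig @ z_synth.T / temperature      # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    # row i's positive is column i; every other column is a negative
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))

rng = np.random.default_rng(0)
feats = rng.normal(size=(8, 16))
# features of well-aligned synthesized views should yield a lower loss
loss_aligned = info_nce_loss(feats, feats + 0.01 * rng.normal(size=(8, 16)))
loss_random = info_nce_loss(feats, rng.normal(size=(8, 16)))
print(loss_aligned, loss_random)
```

Minimizing such a loss encourages domain-agnostic features: the embedding of an image and the embedding of its translated (synthesized) version are driven together regardless of which domain the image came from.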
Pages: 5077 - 5090 (14 pages)
Related Papers
50 items total
  • [21] Fully Unsupervised Convolutional Learning for Fast Image Retrieval
    Tzelepi, Maria
    Tefas, Anastasios
    10TH HELLENIC CONFERENCE ON ARTIFICIAL INTELLIGENCE (SETN 2018), 2018,
  • [22] Domain-Agnostic Document Authentication Against Practical Recapturing Attacks
    Chen, Changsheng
    Zhang, Shuzheng
    Lan, Fengbo
    Huang, Jiwu
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2022, 17 : 2890 - 2905
  • [23] Deep Learning Model Portability for Domain-Agnostic Device Fingerprinting
    Gaskin, Jared
    Elmaghbub, Abdurrahman
    Hamdaoui, Bechir
    Wong, Weng-Keen
    IEEE ACCESS, 2023, 11 : 86801 - 86823
  • [25] GraphGen: A Scalable Approach to Domain-agnostic Labeled Graph Generation
    Goyal, Nikhil
    Jain, Harsh Vardhan
    Ranu, Sayan
    WEB CONFERENCE 2020: PROCEEDINGS OF THE WORLD WIDE WEB CONFERENCE (WWW 2020), 2020, : 1253 - 1263
  • [26] DOMAIN-AGNOSTIC VIDEO PREDICTION FROM MOTION SELECTIVE KERNELS
    Prinet, Veronique
    2019 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2019, : 4205 - 4209
  • [27] DaCo: domain-agnostic contrastive learning for visual place recognition
    Ren, Hao
    Zheng, Ziqiang
    Wu, Yang
    Lu, Hong
    APPLIED INTELLIGENCE, 2023, 53 (19) : 21827 - 21840
  • [28] CyclePro: A Robust Framework for Domain-Agnostic Gait Cycle Detection
    Ma, Yuchao
    Ashari, Zhila Esna
    Pedram, Mahdi
    Amini, Navid
    Tarquinio, Daniel
    Nouri-Mahdavi, Kouros
    Pourhomayoun, Mohammad
    Catena, Robert D.
    Ghasemzadeh, Hassan
    IEEE SENSORS JOURNAL, 2019, 19 (10) : 3751 - 3762
  • [30] Registered multi-device/staining histology image dataset for domain-agnostic machine learning models
    Ochi, Mieko
    Komura, Daisuke
    Onoyama, Takumi
    Shinbo, Koki
    Endo, Haruya
    Odaka, Hiroto
    Kakiuchi, Miwako
    Katoh, Hiroto
    Ushiku, Tetsuo
    Ishikawa, Shumpei
    SCIENTIFIC DATA, 2024, 11 (01)