Self-supervised Scalable Deep Compressed Sensing

Cited by: 0
Authors
Chen, Bin [1 ]
Zhang, Xuanyu [1 ]
Liu, Shuai [2 ]
Zhang, Yongbing [3 ]
Zhang, Jian [1 ]
Affiliations
[1] Peking Univ, Sch Elect & Comp Engn, Shenzhen, Peoples R China
[2] Tsinghua Univ, Shenzhen Int Grad Sch, Shenzhen, Peoples R China
[3] Harbin Inst Technol Shenzhen, Shenzhen, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Compressed sensing; Inverse imaging problems; Self-supervised learning; Algorithm unrolling; IMAGE SUPERRESOLUTION; NETWORK; RECONSTRUCTION; ALGORITHMS; FRAMEWORK; SIGNAL;
DOI
10.1007/s11263-024-02209-1
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Code
081104; 0812; 0835; 1405;
Abstract
Compressed sensing (CS) is a promising tool for reducing sampling costs. Current deep neural network (NN)-based CS approaches face the challenges of collecting labeled measurement-ground truth (GT) data and generalizing to real applications. This paper proposes a novel Self-supervised sCalable deep CS method, comprising a deep Learning scheme called SCL and a family of Networks named SCNet, which does not require GT and can handle arbitrary sampling ratios and matrices once trained on a partial measurement set. Our SCL contains a dual-domain loss and a four-stage recovery strategy. The former encourages a cross-consistency on two measurement parts and a sampling-reconstruction cycle-consistency regarding arbitrary ratios and matrices to maximize data utilization. The latter can progressively leverage the common signal prior in external measurements and internal characteristics of test samples and learned NNs to improve accuracy. SCNet combines both the explicit guidance from optimization algorithms and the implicit regularization from advanced NN blocks to learn a collaborative signal representation. Our theoretical analyses and experiments on simulated and real captured data, covering 1-/2-/3-D natural and scientific signals, demonstrate the effectiveness, superior performance, flexibility, and generalization ability of our method over existing self-supervised methods and its significant potential in competing against many state-of-the-art supervised methods. Code is available at https://github.com/Guaishou74851/SCNet.
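The abstract describes a dual-domain loss built from two ground-truth-free terms: a cross-consistency between two parts of the measurements, and a sampling-reconstruction cycle-consistency over randomly drawn ratios and matrices. The minimal PyTorch sketch below illustrates that loss structure only; the toy recovery network `SCNetLike`, the 50/50 measurement split, the re-sampling ratio, and the weight `lam` are illustrative assumptions and are not the authors' released SCL/SCNet implementation (see the linked repository for that).

```python
# Illustrative sketch (assumptions, not the authors' code): cross-consistency on
# two measurement parts plus a sampling-reconstruction cycle-consistency.
import torch
import torch.nn as nn

class SCNetLike(nn.Module):
    """Toy recovery network: coarse back-projection A^T y refined by a small CNN."""
    def __init__(self, channels=32):
        super().__init__()
        self.refine = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, 3, padding=1),
        )

    def forward(self, y, A, img_size):
        x0 = (A.t() @ y.t()).t().view(-1, 1, *img_size)  # back-projection
        return x0 + self.refine(x0)

def dual_domain_loss(net, y, A, img_size, lam=1.0):
    """Both terms are computed without any ground-truth images."""
    m = y.shape[1]
    perm = torch.randperm(m)
    idx_a, idx_b = perm[: m // 2], perm[m // 2 :]
    y_a, y_b = y[:, idx_a], y[:, idx_b]          # two measurement parts
    A_a, A_b = A[idx_a], A[idx_b]

    # 1) Cross-consistency: reconstruct from part A, predict the held-out part B.
    x_a = net(y_a, A_a, img_size)
    loss_cross = ((A_b @ x_a.flatten(1).t()).t() - y_b).pow(2).mean()

    # 2) Cycle-consistency: re-sample the reconstruction with a new random matrix
    #    (here an arbitrary lower ratio), reconstruct again, and require agreement.
    n = A.shape[1]
    A_new = torch.randn(m // 4, n, device=A.device) / (n ** 0.5)
    y_new = (A_new @ x_a.detach().flatten(1).t()).t()
    x_cycle = net(y_new, A_new, img_size)
    loss_cycle = (x_cycle - x_a).pow(2).mean()

    return loss_cross + lam * loss_cycle

if __name__ == "__main__":
    img_size = (32, 32)
    n = img_size[0] * img_size[1]
    m = n // 4                                   # 25% sampling ratio
    A = torch.randn(m, n) / (n ** 0.5)           # random Gaussian sampling matrix
    x = torch.rand(8, 1, *img_size)              # unlabeled images (never used in the loss)
    y = (A @ x.flatten(1).t()).t()               # measurements y = A x

    net = SCNetLike()
    opt = torch.optim.Adam(net.parameters(), lr=1e-4)
    loss = dual_domain_loss(net, y, A, img_size)
    loss.backward()
    opt.step()
    print(f"loss = {loss.item():.4f}")
```

In this sketch the network is trained purely from measurements and its own reconstructions, which mirrors the GT-free, ratio- and matrix-agnostic training idea stated in the abstract; the paper's four-stage recovery strategy and unrolled SCNet architecture are not reproduced here.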
Pages: 688-723
Page count: 36