Finding the semantic similarity in single-particle diffraction images using self-supervised contrastive projection learning

Cited by: 0
Authors
Julian Zimmermann
Fabien Beguet
Daniel Guthruf
Bruno Langbehn
Daniela Rupp
Affiliations
[1] ETH Zürich
[2] Technische Universität Berlin
[3] Max-Born-Institut
DOI: not available
Abstract
Single-shot coherent diffraction imaging of isolated nanosized particles has seen remarkable success in recent years, yielding in-situ measurements with ultra-high spatial and temporal resolution. The progress of high-repetition-rate sources for intense X-ray pulses has further enabled recording datasets containing millions of diffraction images, which are needed for the structure determination of specimens with greater structural variety and for dynamic experiments. The size of these datasets, however, represents a monumental problem for their analysis. Here, we present an automated approach for finding semantic similarities in coherent diffraction images without relying on human expert labeling. By introducing the concept of projection learning, we extend self-supervised contrastive learning to the context of coherent diffraction imaging and achieve a dimensionality reduction that produces semantically meaningful embeddings aligning with physical intuition. The method yields substantial improvements compared to previous approaches, paving the way toward real-time and large-scale analysis of coherent diffraction experiments at X-ray free-electron lasers.
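The abstract refers to self-supervised contrastive learning with a projection head applied to diffraction images. As a rough illustration of that general family of methods (not the paper's specific "projection learning" formulation), below is a minimal SimCLR-style sketch in PyTorch: two augmented views of each image are encoded, projected, and pulled together by an NT-Xent loss. The encoder architecture, the `DiffractionEncoder` and `nt_xent_loss` names, and all hyperparameters are hypothetical choices for illustration only.

```python
# Hypothetical sketch of SimCLR-style contrastive training on diffraction images.
# Not the authors' method; names and architecture are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DiffractionEncoder(nn.Module):
    """Small CNN backbone followed by a projection head (hypothetical)."""

    def __init__(self, embed_dim: int = 128):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Projection head maps backbone features into the space where the
        # contrastive loss is computed.
        self.projector = nn.Sequential(
            nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, embed_dim)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.projector(self.backbone(x))


def nt_xent_loss(z1: torch.Tensor, z2: torch.Tensor,
                 temperature: float = 0.1) -> torch.Tensor:
    """NT-Xent (normalized temperature-scaled cross entropy) over two views."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2N, D), unit norm
    sim = z @ z.t() / temperature                        # cosine similarity logits
    n = z1.shape[0]
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float("-inf"))           # exclude self-similarity
    # The positive for sample i is its augmented counterpart at i +/- N.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)


if __name__ == "__main__":
    encoder = DiffractionEncoder()
    # Two randomly "augmented" views of a batch of single-channel images.
    view1, view2 = torch.rand(8, 1, 64, 64), torch.rand(8, 1, 64, 64)
    loss = nt_xent_loss(encoder(view1), encoder(view2))
    loss.backward()
    print(f"contrastive loss: {loss.item():.3f}")
```

In such a setup, the embeddings used for downstream similarity search are typically taken from the backbone rather than the projection head; how the projection step is adapted to the symmetries of diffraction patterns is the subject of the paper itself.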
Related Papers
50 items in total
  • [21] Contrastive Self-supervised Representation Learning Using Synthetic Data
    She, Dong-Yu
    Xu, Kun
    INTERNATIONAL JOURNAL OF AUTOMATION AND COMPUTING, 2021, 18 : 556 - 567
  • [22] Classification of Ground-Based Cloud Images by Contrastive Self-Supervised Learning
    Lv, Qi
    Li, Qian
    Chen, Kai
    Lu, Yao
    Wang, Liwen
    REMOTE SENSING, 2022, 14 (22)
  • [23] Point Contrastive Prediction with Semantic Clustering for Self-Supervised Learning on Point Cloud Videos
    Sheng, Xiaoxiao
    Shen, Zhiqiang
    Xiao, Gang
    Wang, Longguang
    Guo, Yulan
    Fan, Hehe
    2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2023), 2023, : 16469 - 16478
  • [24] Vicsgaze: a gaze estimation method using self-supervised contrastive learning
    Gu, De
    Lv, Minghao
    Liu, Jianchu
    MULTIMEDIA SYSTEMS, 2024, 30 (06)
  • [25] Contrastive self-supervised learning from 100 million medical images with optional supervision
    Ghesu, Florin C.
    Georgescu, Bogdan
    Mansoor, Awais
    Yoo, Youngjin
    Neumann, Dominik
    Patel, Pragneshkumar
    Vishwanath, Reddappagari Suryanarayana
    Balter, James M.
    Cao, Yue
    Grbic, Sasa
    Comaniciu, Dorin
    JOURNAL OF MEDICAL IMAGING, 2022, 9 (06)
  • [26] Self-Supervised Contrastive Learning for Automated Segmentation of Brain Tumor MRI Images in Schizophrenia
    Meng, Lingmiao
    Zhao, Liwei
    Yi, Xin
    Yu, Qingming
    INTERNATIONAL JOURNAL OF COMPUTATIONAL INTELLIGENCE SYSTEMS, 2024, 17 (01)
  • [27] G-SimCLR: Self-Supervised Contrastive Learning with Guided Projection via Pseudo Labelling
    Chakraborty, Souradip
    Gosthipaty, Aritra Roy
    Paul, Sayak
    20TH IEEE INTERNATIONAL CONFERENCE ON DATA MINING WORKSHOPS (ICDMW 2020), 2020, : 912 - 916
  • [28] Semantic Segmentation of Remote Sensing Images With Self-Supervised Multitask Representation Learning
    Li, Wenyuan
    Chen, Hao
    Shi, Zhenwei
    IEEE JOURNAL OF SELECTED TOPICS IN APPLIED EARTH OBSERVATIONS AND REMOTE SENSING, 2021, 14 : 6438 - 6450
  • [29] Self-Supervised Contrastive Learning for Code Retrieval and Summarization via Semantic-Preserving Transformations
    Bui, Nghi D. Q.
    Yu, Yijun
    Jiang, Lingxiao
    SIGIR '21 - PROCEEDINGS OF THE 44TH INTERNATIONAL ACM SIGIR CONFERENCE ON RESEARCH AND DEVELOPMENT IN INFORMATION RETRIEVAL, 2021, : 511 - 521
  • [30] Using Expert Gaze for Self-Supervised and Supervised Contrastive Learning of Glaucoma from OCT Data
    Lau, Wai Tak
    Tian, Ye
    Kenia, Roshan
    Aima, Saanvi
    Thakoor, Kaveri A.
    CONFERENCE ON HEALTH, INFERENCE, AND LEARNING, 2024, 248 : 427 - 445