Deep residual networks for crystallography trained on synthetic data

Cited: 1
Authors
Mendez, Derek [1 ]
Holton, James M. [1 ,2 ,3 ]
Lyubimov, Artem Y. [1 ]
Hollatz, Sabine [1 ]
Mathews, Irimpan I. [1 ]
Cichosz, Aleksander [4 ]
Martirosyan, Vardan [5 ]
Zeng, Teo [4 ]
Stofer, Ryan [4 ]
Liu, Ruobin [4 ]
Song, Jinhu [1 ]
McPhillips, Scott [1 ]
Soltis, Mike [1 ]
Cohen, Aina E. [1 ]
Affiliations
[1] SLAC Natl Accelerator Lab, Stanford Synchrotron Radiat Lightsource, Menlo Pk, CA 94025 USA
[2] Lawrence Berkeley Natl Lab, Mol Biophys & Integrated Bioimaging Div, Berkeley, CA 94720 USA
[3] UC San Francisco, Dept Biochem & Biophys, San Francisco, CA 94158 USA
[4] UC Santa Barbara, Dept Stat & Appl Probabil, Santa Barbara, CA 93106 USA
[5] UC Santa Barbara, Dept Math, Santa Barbara, CA 93106 USA
Keywords
artificial intelligence; serial crystallography; rotation crystallography; synchrotrons; XFELs; MACROMOLECULAR CRYSTALLOGRAPHY; DATA-COLLECTION; FEMTOSECOND CRYSTALLOGRAPHY; SAMPLE DELIVERY; PUMP; INSTRUMENT; RESOLUTION; LIGAND; CHIP; ICE;
DOI
10.1107/S2059798323010586
Chinese Library Classification
Q5 [Biochemistry];
Discipline codes
071010; 081704;
Abstract
The use of artificial intelligence to process diffraction images is challenged by the need to assemble large, precisely designed training data sets. To address this, a codebase called Resonet was developed for synthesizing diffraction data and training residual neural networks on these data. Here, two per-pattern capabilities of Resonet are demonstrated: (i) estimation of crystal resolution and (ii) identification of overlapping lattices. Resonet was tested on a compilation of diffraction images from synchrotron and X-ray free-electron laser experiments. Crucially, these models execute readily on graphics processing units and can therefore significantly outperform conventional algorithms in speed. While Resonet currently provides real-time feedback for macromolecular crystallography users at the Stanford Synchrotron Radiation Lightsource, its simple Python-based interface makes it easy to embed in other processing frameworks. This work highlights the utility of physics-based simulation for training deep neural networks and lays the groundwork for additional models to enhance diffraction data collection and analysis.
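The core idea behind the residual networks named in the abstract is the identity shortcut, which lets layers learn a correction to their input rather than a full transformation. The following NumPy sketch of a single residual-block forward pass is purely illustrative; the function names, weight shapes, and initialization are assumptions for demonstration and do not reflect Resonet's actual architecture or API.

```python
import numpy as np

def relu(x):
    """Elementwise rectified linear unit."""
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    """Identity-shortcut residual block: out = relu(x + f(x)),
    where f is two linear maps with a ReLU in between.
    Because the shortcut adds x back in, the block only needs to
    learn a residual correction, which eases training of deep stacks."""
    h = relu(x @ w1)
    return relu(x + h @ w2)

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 16))          # a batch of 4 feature vectors
w1 = rng.standard_normal((16, 16)) * 0.1  # small random weights (toy example)
w2 = rng.standard_normal((16, 16)) * 0.1
out = residual_block(x, w1, w2)
print(out.shape)  # (4, 16): input shape is preserved, so blocks can be stacked deeply
```

The shape-preserving shortcut is what allows dozens of such blocks to be chained without the vanishing-gradient problems of plain deep networks; real implementations replace the linear maps with convolutions and add normalization layers.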
Pages: 26-43
Page count: 18