Temporal focusing multiphoton microscopy with cross-modality multi-stage 3D U-Net for fast and clear bioimaging

Cited by: 2
Authors:
Hu, Yvonne Yuling [1 ]
Hsu, Chia-Wei [2 ]
Tseng, Yu-Hao [2 ]
Lin, Chun-Yu [2 ]
Chiang, Hsueh-Cheng [3 ]
Chiang, Ann-Shyn [4 ]
Chang, Shin-Tsu [5 ,6 ]
Chen, Shean-Jen [2 ]
Affiliations:
[1] Natl Cheng Kung Univ, Dept Photon, Tainan 701, Taiwan
[2] Natl Yang Ming Chiao Tung Univ, Coll Photon, Tainan 711, Taiwan
[3] Natl Cheng Kung Univ, Dept Pharmacol, Tainan 701, Taiwan
[4] Natl Tsing Hua Univ, Brain Res Ctr, Hsinchu 300, Taiwan
[5] Kaohsiung Vet Gen Hosp, Dept Phys Med & Rehabil, Kaohsiung 813, Taiwan
[6] Natl Def Med Ctr, Triserv Gen Hosp, Dept Phys Med & Rehabil, Taipei 114, Taiwan
Keywords:
2-PHOTON MICROSCOPY; LEARNING FRAMEWORK; ILLUMINATION; PRINCIPLES; RESOLUTION;
DOI: 10.1364/BOE.484154
Chinese Library Classification (CLC): Q5 [Biochemistry]
Discipline codes: 071010; 081704
Abstract:
Temporal focusing multiphoton excitation microscopy (TFMPEM) enables fast widefield biotissue imaging with optical sectioning. However, under widefield illumination, the imaging performance is severely degraded by scattering effects, which induce signal crosstalk and a low signal-to-noise ratio in the detection process, particularly when imaging deep layers. Accordingly, the present study proposes a cross-modality learning-based neural network method for performing image registration and restoration. In the proposed method, the point-scanning multiphoton excitation microscopy images are registered to the TFMPEM images by an unsupervised U-Net model based on a global linear affine transformation process and a local VoxelMorph registration network. A multi-stage 3D U-Net model with a cross-stage feature fusion mechanism and a self-supervised attention module is then used to infer restored TFMPEM volumetric images of in-vitro fixed specimens. The experimental results obtained for in-vitro Drosophila mushroom body (MB) images show that the proposed method improves the structural similarity index measures (SSIMs) of the TFMPEM images acquired with a 10-ms exposure time from 0.38 to 0.93 and 0.80 for shallow- and deep-layer images, respectively. A 3D U-Net model, pretrained on in-vitro images, is further trained using a small in-vivo MB image dataset. The transfer learning network improves the SSIMs of in-vivo Drosophila MB images captured with a 1-ms exposure time to 0.97 and 0.94 for shallow and deep layers, respectively. © 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement.
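The local registration step described in the abstract is a VoxelMorph-style unsupervised deformable registration: a 3D U-Net predicts a dense displacement field that warps the moving (point-scanning MPEM) volume onto the fixed TFMPEM volume, and the network is trained with an image-similarity loss plus a smoothness penalty on the field. The sketch below illustrates that idea in PyTorch. It is a minimal illustration under assumed volume shapes and hyperparameters; all module and variable names (TinyRegUNet, warp, smoothness) are invented here rather than taken from the authors' code, and the global affine pre-alignment and the multi-stage restoration network are omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def conv_block(in_ch, out_ch):
    # Two 3x3x3 convolutions with LeakyReLU, the usual U-Net building block.
    return nn.Sequential(
        nn.Conv3d(in_ch, out_ch, 3, padding=1), nn.LeakyReLU(0.2),
        nn.Conv3d(out_ch, out_ch, 3, padding=1), nn.LeakyReLU(0.2),
    )


class TinyRegUNet(nn.Module):
    # Toy 3D U-Net: concatenated moving+fixed volumes in, 3-channel voxel displacement field out.
    def __init__(self, base=8):
        super().__init__()
        self.enc1 = conv_block(2, base)
        self.enc2 = conv_block(base, base * 2)
        self.dec1 = conv_block(base * 2 + base, base)
        self.flow = nn.Conv3d(base, 3, 3, padding=1)

    def forward(self, moving, fixed):
        x1 = self.enc1(torch.cat([moving, fixed], dim=1))
        x2 = self.enc2(F.max_pool3d(x1, 2))
        up = F.interpolate(x2, size=x1.shape[2:], mode="trilinear", align_corners=False)
        return self.flow(self.dec1(torch.cat([up, x1], dim=1)))


def warp(moving, flow):
    # Spatial transformer: resample `moving` at voxel positions shifted by `flow`.
    n, _, d, h, w = moving.shape
    zz, yy, xx = torch.meshgrid(
        torch.arange(d, device=moving.device),
        torch.arange(h, device=moving.device),
        torch.arange(w, device=moving.device),
        indexing="ij",
    )
    # grid_sample expects normalised coordinates in [-1, 1], ordered (x, y, z).
    x = (xx + flow[:, 0]) / max(w - 1, 1) * 2 - 1
    y = (yy + flow[:, 1]) / max(h - 1, 1) * 2 - 1
    z = (zz + flow[:, 2]) / max(d - 1, 1) * 2 - 1
    grid = torch.stack([x, y, z], dim=-1)  # (N, D, H, W, 3)
    return F.grid_sample(moving, grid, align_corners=True)


def smoothness(flow):
    # L1 penalty on spatial finite differences of the displacement field.
    dz = (flow[:, :, 1:] - flow[:, :, :-1]).abs().mean()
    dy = (flow[:, :, :, 1:] - flow[:, :, :, :-1]).abs().mean()
    dx = (flow[:, :, :, :, 1:] - flow[:, :, :, :, :-1]).abs().mean()
    return dz + dy + dx


# One unsupervised training step on random stand-in volumes (no real data involved).
model = TinyRegUNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
moving = torch.rand(1, 1, 32, 64, 64)   # stand-in point-scanning MPEM volume
fixed = torch.rand(1, 1, 32, 64, 64)    # stand-in TFMPEM volume

opt.zero_grad()
flow = model(moving, fixed)
warped = warp(moving, flow)
loss = F.mse_loss(warped, fixed) + 0.01 * smoothness(flow)
loss.backward()
opt.step()
```

For evaluation of restored volumes against ground truth, an off-the-shelf SSIM implementation such as skimage.metrics.structural_similarity can be applied slice-wise or volumetrically; the specific SSIM configuration used in the paper is not detailed in the abstract.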
Pages: 2478-2491
Number of pages: 14