Temporal focusing multiphoton microscopy with cross-modality multi-stage 3D U-Net for fast and clear bioimaging

Cited by: 2
Authors
Hu, Yvonne Yuling [1 ]
Hsu, Chia-Wei [2 ]
Tseng, Yu-Hao [2 ]
Lin, Chun-Yu [2 ]
Chiang, Hsueh-Cheng [3 ]
Chiang, Ann-Shyn [4 ]
Chang, Shin-Tsu [5 ,6 ]
Chen, Shean-Jen [2 ]
Affiliations
[1] Natl Cheng Kung Univ, Dept Photon, Tainan 701, Taiwan
[2] Natl Yang Ming Chiao Tung Univ, Coll Photon, Tainan 711, Taiwan
[3] Natl Cheng Kung Univ, Dept Pharmacol, Tainan 701, Taiwan
[4] Natl Tsing Hua Univ, Brain Res Ctr, Hsinchu 300, Taiwan
[5] Kaohsiung Vet Gen Hosp, Dept Phys Med & Rehabil, Kaohsiung 813, Taiwan
[6] Natl Def Med Ctr, Triserv Gen Hosp, Dept Phys Med & Rehabil, Taipei 114, Taiwan
Keywords
2-photon microscopy; learning framework; illumination; principles; resolution
DOI
10.1364/BOE.484154
Chinese Library Classification
Q5 [Biochemistry]
Discipline codes
071010; 081704
Abstract
Temporal focusing multiphoton excitation microscopy (TFMPEM) enables fast widefield biotissue imaging with optical sectioning. However, under widefield illumination, the imaging performance is severely degraded by scattering effects, which induce signal crosstalk and a low signal-to-noise ratio in the detection process, particularly when imaging deep layers. Accordingly, the present study proposes a cross-modality learning-based neural network method for image registration and restoration. In the proposed method, point-scanning multiphoton excitation microscopy images are registered to the TFMPEM images by an unsupervised U-Net model based on a global linear affine transformation and a local VoxelMorph registration network. A multi-stage 3D U-Net model with a cross-stage feature fusion mechanism and a self-supervised attention module is then used to restore in-vitro fixed TFMPEM volumetric images. The experimental results obtained for in-vitro Drosophila mushroom body (MB) images show that the proposed method improves the structural similarity index measure (SSIM) of TFMPEM images acquired with a 10-ms exposure time from 0.38 to 0.93 and 0.80 for shallow- and deep-layer images, respectively. A 3D U-Net model, pretrained on in-vitro images, is further trained using a small in-vivo MB image dataset. The transfer learning network improves the SSIMs of in-vivo Drosophila MB images captured with a 1-ms exposure time to 0.97 and 0.94 for shallow and deep layers, respectively.
© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement.
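The image-quality gains reported in the abstract are quantified with the structural similarity index measure (SSIM). As a rough illustration of the metric only (not the paper's implementation, which presumably uses a standard windowed SSIM library), a minimal single-window SSIM in NumPy with the conventional stabilizing constants C1 = (0.01·L)² and C2 = (0.03·L)², where L is the image dynamic range:

```python
import numpy as np

def ssim(x, y, data_range=1.0):
    """Single-window (global) SSIM between two images of equal shape.

    Simplified form of the metric: mean luminance, variance, and
    covariance are computed over the whole image rather than in a
    sliding window, so values are indicative only.
    """
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2)
    )

# Toy demonstration: a clean image versus a noisy copy of itself.
rng = np.random.default_rng(0)
clean = rng.random((32, 32))
noisy = np.clip(clean + 0.2 * rng.standard_normal((32, 32)), 0.0, 1.0)

print(ssim(clean, clean))  # identical images score exactly 1.0
print(ssim(clean, noisy))  # noise lowers the score below 1.0
```

A restoration network such as the paper's multi-stage 3D U-Net is judged by how far it pushes this score back toward 1.0 relative to a high-quality reference volume.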
Pages: 2478-2491 (14 pages)