Temporal focusing multiphoton microscopy with cross-modality multi-stage 3D U-Net for fast and clear bioimaging

Cited by: 2
Authors:
Hu, Yvonne Yuling [1 ]
Hsu, Chia-Wei [2 ]
Tseng, Yu-Hao [2 ]
Lin, Chun-Yu [2 ]
Chiang, Hsueh-Cheng [3 ]
Chiang, Ann-Shyn [4 ]
Chang, Shin-Tsu [5 ,6 ]
Chen, Shean-Jen [2 ]
Affiliations:
[1] Natl Cheng Kung Univ, Dept Photon, Tainan 701, Taiwan
[2] Natl Yang Ming Chiao Tung Univ, Coll Photon, Tainan 711, Taiwan
[3] Natl Cheng Kung Univ, Dept Pharmacol, Tainan 701, Taiwan
[4] Natl Tsing Hua Univ, Brain Res Ctr, Hsinchu 300, Taiwan
[5] Kaohsiung Vet Gen Hosp, Dept Phys Med & Rehabil, Kaohsiung 813, Taiwan
[6] Natl Def Med Ctr, Triserv Gen Hosp, Dept Phys Med & Rehabil, Taipei 114, Taiwan
Keywords:
Two-photon microscopy; Learning framework; Illumination; Principles; Resolution
DOI:
10.1364/BOE.484154
CLC classification: Q5 [Biochemistry]
Subject classification codes: 071010; 081704
Abstract:
Temporal focusing multiphoton excitation microscopy (TFMPEM) enables fast widefield biotissue imaging with optical sectioning. However, under widefield illumination, the imaging performance is severely degraded by scattering effects, which induce signal crosstalk and a low signal-to-noise ratio in the detection process, particularly when imaging deep layers. Accordingly, the present study proposes a cross-modality learning-based neural network method for performing image registration and restoration. In the proposed method, the point-scanning multiphoton excitation microscopy images are registered to the TFMPEM images by an unsupervised U-Net model based on a global linear affine transformation process and a local VoxelMorph registration network. A multi-stage 3D U-Net model with a cross-stage feature fusion mechanism and a self-supervised attention module is then used to infer in-vitro fixed TFMPEM volumetric images. The experimental results obtained for in-vitro Drosophila mushroom body (MB) images show that the proposed method improves the structural similarity index measures (SSIMs) of the TFMPEM images acquired with a 10-ms exposure time from 0.38 to 0.93 and 0.80 for shallow- and deep-layer images, respectively. A 3D U-Net model, pretrained on in-vitro images, is further trained using a small in-vivo MB image dataset. The transfer learning network improves the SSIMs of in-vivo Drosophila MB images captured with a 1-ms exposure time to 0.97 and 0.94 for shallow and deep layers, respectively. © 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement
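The SSIM values quoted in the abstract compare restored against ground-truth volumes. As a minimal illustration of the metric itself, the sketch below computes a global (single-window) SSIM from image-wide statistics; this is a simplification for clarity — the paper presumably uses the standard sliding-window SSIM, and the constants C1 = (0.01·L)² and C2 = (0.03·L)² follow the common defaults for dynamic range L:

```python
import numpy as np

def ssim_global(x: np.ndarray, y: np.ndarray, data_range: float = 1.0) -> float:
    """Global (single-window) structural similarity index between two images.

    Simplified SSIM using one set of image-wide statistics instead of a
    sliding Gaussian window; returns 1.0 for identical inputs.
    """
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()              # sigma_x^2, sigma_y^2
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()    # sigma_xy
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    )

# Example: a noisy copy of an image scores below a perfect copy.
rng = np.random.default_rng(0)
clean = rng.random((64, 64))
noisy = np.clip(clean + rng.normal(0.0, 0.2, clean.shape), 0.0, 1.0)
print(ssim_global(clean, clean))  # identical images -> 1.0
print(ssim_global(clean, noisy))  # degraded copy -> below 1.0
```

In the study's setting, the "clean" reference is the point-scanning multiphoton image registered to the TFMPEM frame, and restoration raises the SSIM of the fast low-exposure TFMPEM acquisition toward it.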
Pages: 2478-2491
Page count: 14