DIFF•3: A Latent Diffusion Model for the Generation of Synthetic 3D Echocardiographic Images and Corresponding Labels

Cited by: 0
Authors
Ferdian, Edward [1 ,2 ]
Zhao, Debbie [1 ]
Talou, Gonzalo D. Maso [1 ]
Quill, Gina M. [1 ]
Legget, Malcolm E. [3 ]
Doughty, Robert N. [3 ,4 ]
Nash, Martyn P. [1 ,5 ]
Young, Alistair A. [6 ]
Affiliations
[1] Univ Auckland, Auckland Bioengn Inst, Auckland, New Zealand
[2] Telkom Univ, Fac Informat, Bandung, Indonesia
[3] Univ Auckland, Dept Med, Auckland, New Zealand
[4] Auckland City Hosp, Green Lane Cardiovasc Serv, Auckland, New Zealand
[5] Univ Auckland, Dept Engn Sci & Biomed Engn, Auckland, New Zealand
[6] Kings Coll London, Sch Biomed Engn & Imaging Sci, London, England
Keywords
Latent diffusion; 3D echocardiography; Deep learning; Generative AI; Synthetic data; Ultrasound
DOI
10.1007/978-3-031-44689-4_13
Chinese Library Classification
TP18 [Artificial intelligence theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Large amounts of labelled data are typically needed to develop robust deep learning methods for medical image analysis. However, the high costs of acquisition, time-consuming analysis, and patient privacy concerns have limited the number of publicly available datasets. Recently, latent diffusion models have been employed to generate synthetic data in several fields. Compared to other imaging modalities, the manipulation of 3D echocardiograms is particularly challenging due to their higher dimensionality, complex noise characteristics, and lack of objective ground truth. We present DIFF·3, a latent diffusion model for synthesizing realistic 3D echocardiograms with high-quality labels from matching cardiovascular magnetic resonance imaging (CMR) scans. Using in vivo 3D echocardiograms from 134 participants and corresponding registered labels derived from CMR, source images and labels are initially compressed by a variational autoencoder, followed by diffusion in the latent space. Synthetic datasets were subsequently generated by randomly sampling from the latent distribution, and evaluated in terms of fidelity and diversity. DIFF·3 may provide an effective and more efficient means of generating labelled 3D echocardiograms to supplement real patient data.
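As a rough illustration of the two-stage pipeline summarised in the abstract (variational autoencoder compression of images and labels, diffusion in the latent space, then sampling and decoding to produce synthetic volumes), the sketch below shows the general pattern in PyTorch. All names, network sizes, channel counts, volume dimensions, and the noise schedule are placeholder assumptions chosen only to make the example self-contained; this is not the authors' DIFF·3 implementation.

```python
# Minimal sketch of a latent diffusion pipeline for paired image+label volumes.
# Toy networks and shapes are illustrative assumptions, not the DIFF.3 model.
import torch
import torch.nn as nn

class TinyVAE(nn.Module):
    """Toy 3D VAE: compresses an image+label volume into a low-resolution latent."""
    def __init__(self, in_ch=2, latent_ch=4):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv3d(in_ch, 8, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(8, 2 * latent_ch, 4, stride=2, padding=1),  # mean and log-variance
        )
        self.dec = nn.Sequential(
            nn.ConvTranspose3d(latent_ch, 8, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose3d(8, in_ch, 4, stride=2, padding=1),
        )

    def encode(self, x):
        mu, logvar = self.enc(x).chunk(2, dim=1)
        return mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterisation

    def decode(self, z):
        return self.dec(z)

class TinyDenoiser(nn.Module):
    """Toy noise-prediction network operating on VAE latents."""
    def __init__(self, latent_ch=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(latent_ch + 1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, latent_ch, 3, padding=1),
        )

    def forward(self, z_t, t_frac):
        # Broadcast the normalised timestep as an extra input channel.
        t_map = t_frac.view(-1, 1, 1, 1, 1).expand(-1, 1, *z_t.shape[2:])
        return self.net(torch.cat([z_t, t_map], dim=1))

# DDPM-style noise schedule (linear betas).
T = 100
betas = torch.linspace(1e-4, 0.02, T)
alphas_bar = torch.cumprod(1.0 - betas, dim=0)

vae, denoiser = TinyVAE(), TinyDenoiser()

# Training step (sketch): diffuse the VAE latents and learn to predict the added noise.
x = torch.randn(2, 2, 32, 32, 32)            # placeholder batch: image + label channels
z0 = vae.encode(x)
t = torch.randint(0, T, (x.shape[0],))
noise = torch.randn_like(z0)
ab = alphas_bar[t].view(-1, 1, 1, 1, 1)
z_t = ab.sqrt() * z0 + (1 - ab).sqrt() * noise
loss = nn.functional.mse_loss(denoiser(z_t, t.float() / T), noise)

# Sampling (sketch): start from Gaussian latents, denoise step by step, then decode.
with torch.no_grad():
    z = torch.randn(1, 4, 8, 8, 8)            # latent resolution after two stride-2 convs
    for i in reversed(range(T)):
        eps = denoiser(z, torch.full((1,), i / T))
        alpha, ab_i = 1.0 - betas[i], alphas_bar[i]
        z = (z - (1 - alpha) / (1 - ab_i).sqrt() * eps) / alpha.sqrt()
        if i > 0:
            z = z + betas[i].sqrt() * torch.randn_like(z)
    synthetic = vae.decode(z)                 # synthetic volume with matching label channels
```

In practice, latent diffusion models of this kind typically use a 3D U-Net denoiser with learned timestep embeddings; the toy denoiser above simply concatenates the normalised timestep as an extra channel to keep the sketch short.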
Pages: 129-140
Page count: 12