Predicting the diabetic foot in the population of type 2 diabetes mellitus from tongue images and clinical information using multi-modal deep learning

Times Cited: 1
Authors
Tian, Zhikui [1 ]
Wang, Dongjun [2 ]
Sun, Xuan [3 ]
Cui, Chuan [4 ]
Wang, Hongwu [5 ]
Affiliations
[1] Qilu Med Univ, Sch Rehabil Med, Zibo, Shandong, Peoples R China
[2] North China Univ Sci & Technol, Coll Tradit Chinese Med, Tangshan, Peoples R China
[3] Binzhou Med Univ, Coll Tradit Chinese Med, Yantai, Shandong, Peoples R China
[4] Qilu Med Univ, Sch Clin Med, Zibo, Shandong, Peoples R China
[5] Tianjin Univ Tradit Chinese Med, Sch Hlth Sci & Engn, Tianjin, Peoples R China
Keywords
diabetic foot; tongue features; objectified parameters; prediction model; machine learning; amputation; skin; prevention; management; hardness; ulcer; life
DOI
10.3389/fphys.2024.1473659
Chinese Library Classification
Q4 [Physiology]
Discipline Code
071003
Abstract
Aims: To establish a diabetic foot (DF) prediction model by combining objectified parameters of traditional Chinese medicine (TCM) and Western medicine, based on fused quantitative and qualitative data.
Methods: A ResNet-50 deep neural network (DNN) was used to extract deep features from tongue images, and a fully connected layer (FCL) then fused these with clinical features into an aggregate representation, yielding a non-invasive DF prediction model based on tongue features.
Results: Of the 391 patients included, 267 had DF; their BMI (25.2 vs. 24.2) and waist-to-hip ratio (0.953 vs. 0.941) were higher than those of the type 2 diabetes mellitus (T2DM) group. The durations of diabetes (15 vs. 8 years) and hypertension (10 vs. 7.5 years) were significantly longer in DF patients than in the T2DM group, and plantar hardness was higher in DF patients than in T2DM patients. The multi-modal DF prediction model achieved an accuracy of 0.95 and a sensitivity of 0.9286.
Conclusion: We established a DF prediction model based on clinical features and objectified tongue color, which demonstrates the unique advantages and important role of objectified tongue images in DF risk prediction and further supports the scientific basis of TCM tongue diagnosis. Building on the fused qualitative and quantitative data, we combined tongue images with DF indicators to construct a multi-modal prediction model in which tongue images and objectified foot data correct the subjectivity of prior knowledge. The successful feature-fusion diagnostic model demonstrates the practical clinical value of objectified tongue images. The model distinguished well between T2DM and DF, and a comparison of the model with and without tongue images showed that the version with tongue images performed better.
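The fusion step described in the Methods (image-derived deep features concatenated with clinical features, then passed through a fully connected layer) can be sketched in plain Python. This is a minimal illustrative example, not the authors' implementation: the feature values, weights, and the function name predict_df are hypothetical, the real model uses a trained ResNet-50 embedding (not four random numbers), and the clinical features are assumed to be normalised.

```python
import math
import random

random.seed(0)

def fully_connected(x, weights, bias):
    """One fully connected layer: y_j = sum_i W[j][i] * x_i + b_j."""
    return [sum(xi * wji for xi, wji in zip(x, w_row)) + b
            for w_row, b in zip(weights, bias)]

def sigmoid(z):
    """Map a logit to a probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def predict_df(image_features, clinical_features, weights, bias):
    """Late fusion: concatenate the two feature vectors, then apply a
    fully connected layer and a sigmoid to obtain a DF probability."""
    fused = image_features + clinical_features  # simple concatenation
    (logit,) = fully_connected(fused, weights, bias)
    return sigmoid(logit)

# Toy stand-ins: 4 image-derived features (the paper's ResNet-50 would
# produce a much larger embedding) and 3 clinical features
# (e.g. scaled BMI, waist-to-hip ratio, diabetes duration).
img = [random.random() for _ in range(4)]
clin = [0.52, 0.95, 0.60]

# Randomly initialised weights for one output unit over the 7 fused inputs;
# in practice these would be learned during training.
W = [[random.uniform(-1, 1) for _ in range(7)]]
b = [0.0]

p = predict_df(img, clin, W, b)
print(f"predicted DF probability: {p:.3f}")  # a value in (0, 1)
```

The design point the sketch illustrates is that fusion happens at the feature level: the fully connected layer sees image and clinical evidence jointly, which is why removing the tongue-image features (as in the paper's ablation) changes the model's performance.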
Pages: 14
Related Papers
50 records
  • [1] Predicting rectal cancer prognosis from histopathological images and clinical information using multi-modal deep learning
    Xu, Yixin
    Guo, Jiedong
    Yang, Na
    Zhu, Can
    Zheng, Tianlei
    Zhao, Weiguo
    Liu, Jia
    Song, Jun
    FRONTIERS IN ONCOLOGY, 2024, 14
  • [2] Predicting colorectal cancer tumor mutational burden from histopathological images and clinical information using multi-modal deep learning
    Huang, Kaimei
    Lin, Binghu
    Liu, Jinyang
    Liu, Yankun
    Li, Jingwu
    Tian, Geng
    Yang, Jialiang
    BIOINFORMATICS, 2022, 38 (22) : 5108 - 5115
  • [3] NMF for Quality Control of Multi-modal Retinal Images for Diagnosis of Diabetes Mellitus and Diabetic Retinopathy
    Benali, Anass
    Carrera, Laura
    Christin, Ann
    Martin, Ruben
    Ale, Anibal
    Barraso, Marina
    Bernal, Carolina
    Marin, Sara
    Feu, Silvia
    Rosines, Josep
    Hernandez, Teresa
    Vila, Irene
    Oliva, Cristian
    Vinagre, Irene
    Ortega, Emilio
    Gimenez, Marga
    Esmatjes, Enric
    Zarranz-Ventura, Javier
    Romero, Enrique
    Vellido, Alfredo
    BIOINFORMATICS AND BIOMEDICAL ENGINEERING, PT I, 2022, : 343 - 356
  • [4] Multi-modal Multi-instance Learning Using Weakly Correlated Histopathological Images and Tabular Clinical Information
    Li, Hang
    Yang, Fan
    Xing, Xiaohan
    Zhao, Yu
    Zhang, Jun
    Liu, Yueping
    Han, Mengxue
    Huang, Junzhou
    Wang, Liansheng
    Yao, Jianhua
    MEDICAL IMAGE COMPUTING AND COMPUTER ASSISTED INTERVENTION - MICCAI 2021, PT VIII, 2021, 12908 : 529 - 539
  • [5] Predicting Alzheimer's disease progression using multi-modal deep learning approach
    Lee, Garam
    Nho, Kwangsik
    Kang, Byungkon
    Sohn, Kyung-Ah
    Kim, Dokyoon
    SCIENTIFIC REPORTS, 2019, 9 (1)
  • [6] Predicting Alzheimer's disease progression using multi-modal deep learning approach
    Lee, Garam
    Nho, Kwangsik
    Kang, Byungkon
    Sohn, Kyung-Ah
    Kim, Dokyoon
    Weiner, Michael W.
    Aisen, Paul
    Petersen, Ronald
    Jack, Clifford R., Jr.
    Jagust, William
    Trojanowki, John Q.
    Toga, Arthur W.
    Beckett, Laurel
    Green, Robert C.
    Saykin, Andrew J.
    Morris, John
    Shaw, Leslie M.
    Khachaturian, Zaven
    Sorensen, Greg
    Carrillo, Maria
    Kuller, Lew
    Raichle, Marc
    Paul, Steven
    Davies, Peter
    Fillit, Howard
    Hefti, Franz
    Holtzman, Davie
    Mesulam, M. Marcel
    Potter, William
    Snyder, Peter
    Montine, Tom
    Thomas, Ronald G.
    Donohue, Michael
    Walter, Sarah
    Sather, Tamie
    Jiminez, Gus
    Balasubramanian, Archana B.
    Mason, Jennifer
    Sim, Iris
    Harvey, Danielle
    Bernstein, Matthew
    Fox, Nick
    Thompson, Paul
    Schuff, Norbert
    DeCArli, Charles
    Borowski, Bret
    Gunter, Jeff
    Senjem, Matt
    Vemuri, Prashanthi
    Jones, David
    SCIENTIFIC REPORTS, 2019, 9 (1)
  • [7] Cloud Type Classification Using Multi-modal Information Based on Multi-task Learning
    Zhang, Yaxiu
    Xie, Jiazu
    He, Di
    Dong, Qing
    Zhang, Jiafeng
    Zhang, Zhong
    Liu, Shuang
    COMMUNICATIONS, SIGNAL PROCESSING, AND SYSTEMS, VOL. 1, 2022, 878 : 119 - 125
  • [8] Multi-modal pseudo-information guided unsupervised deep metric learning for agricultural pest images
    Wang, Shansong
    Zeng, Qingtian
    Zhang, Xue
    Ni, Weijian
    Cheng, Cheng
    INFORMATION SCIENCES, 2023, 630 : 443 - 462
  • [9] Deep neural net for identification of neuropathic foot in subjects with type 2 diabetes mellitus using plantar foot thermographic images
    Evangeline, N. Christy
    Srinivasan, S.
    BIOMEDICAL SIGNAL PROCESSING AND CONTROL, 2024, 96
  • [10] Identification of Luminal A breast cancer by using deep learning analysis based on multi-modal images
    Liu, Menghan
    Zhang, Shuai
    Du, Yanan
    Zhang, Xiaodong
    Wang, Dawei
    Ren, Wanqing
    Sun, Jingxiang
    Yang, Shiwei
    Zhang, Guang
    FRONTIERS IN ONCOLOGY, 2023, 13