Attempts on detecting Alzheimer's disease by fine-tuning pre-trained model with Gaze Data

Cited: 0
Authors
Nagasawa, Junichi [1 ,2 ]
Nakata, Yuichi [1 ,2 ]
Hiroe, Mamoru [1 ,3 ]
Zheng, Yujia [1 ]
Kawaguchi, Yutaka [1 ]
Maegawa, Yuji [1 ]
Hojo, Naoki [1 ]
Takiguchi, Tetsuya [1 ]
Nakayama, Minoru [4 ]
Uchimura, Maki [1 ]
Sonoda, Yuma [1 ]
Kowa, Hisatomo [1 ]
Nagamatsu, Takashi [1 ]
Affiliations
[1] Kobe Univ, Kobe, Japan
[2] Kwansei Gakuin Univ, Sanda, Japan
[3] Osaka Seikei Univ, Osaka, Japan
[4] Tokyo Inst Technol, Tokyo, Japan
Keywords
Alzheimer's disease; Antisaccade; Eye movement classifier; Fine-tuning
DOI
10.1145/3649902.3656360
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Early detection of Alzheimer's disease (AD) is important but difficult. Screening for AD with neuropsychological tests such as the Mini-Mental State Examination (MMSE) is time-consuming and burdensome for patients. Recently, several methods have been reported for detecting AD from eye movements; however, analyzing eye movements requires considerable effort. Machine learning on eye-movement data is a strong candidate for reducing this labor, but it typically requires large datasets. In this study, we adapt an existing pretrained deep neural network, gazeNet, for transfer learning. For evaluation, we held out the data of one participant for testing and fine-tuned the model on the data of all remaining participants, repeating this procedure for each of the 14 participants. The resulting classification of eye movements during the antisaccade task was not satisfactory for discriminating AD, but detailed analysis suggested that the data might correlate with MMSE scores in the mild cognitive impairment range.
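The evaluation protocol described in the abstract can be sketched as a leave-one-participant-out loop: hold out one participant's data for testing, fine-tune on all remaining participants, and repeat for each of the 14 participants. The sketch below illustrates only the protocol, not the paper's method: gazeNet is not reproduced, and a nearest-centroid classifier with synthetic feature vectors stands in for the fine-tuned network. All names and data here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_participants = 14
samples_per_participant = 20

# Synthetic "eye-movement feature" vectors with one group label
# per participant (purely illustrative stand-in data).
X = rng.normal(size=(n_participants * samples_per_participant, 8))
y = np.arange(len(X)) % 2                      # toy AD-vs-control labels
groups = np.repeat(np.arange(n_participants), samples_per_participant)

def fit_centroids(X_train, y_train):
    """Stand-in for fine-tuning: compute one centroid per class."""
    return {c: X_train[y_train == c].mean(axis=0) for c in np.unique(y_train)}

def predict(centroids, X_test):
    """Assign each test sample to the class with the nearest centroid."""
    classes = sorted(centroids)
    dists = np.stack([np.linalg.norm(X_test - centroids[c], axis=1)
                      for c in classes])
    return np.asarray(classes)[dists.argmin(axis=0)]

accuracies = []
for held_out in range(n_participants):         # leave one participant out
    test = groups == held_out
    model = fit_centroids(X[~test], y[~test])  # "fine-tune" on the rest
    accuracies.append((predict(model, X[test]) == y[test]).mean())

print(len(accuracies))                         # one evaluation per participant
```

Because every participant serves as the test set exactly once, the loop yields 14 per-participant accuracy scores, matching the evaluation design described in the abstract.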
Pages: 3
Related Papers
50 records in total
  • [1] Poster: Attempts on detecting Alzheimer's disease by fine-tuning pre-trained model with Gaze Data
    Nagasawa, Junichi
    Nakata, Yuichi
    Hiroe, Mamoru
    Zheng, Yujia
    Kawaguchi, Yutaka
    Maegawa, Yuji
    Hojo, Naoki
    Takiguchi, Tetsuya
    Nakayama, Minoru
    Uchimura, Maki
    Sonoda, Yuma
    Kowa, Hisatomo
    Nagamatsu, Takashi
    [J]. PROCEEDINGS OF THE 2024 ACM SYMPOSIUM ON EYE TRACKING RESEARCH & APPLICATIONS, ETRA 2024, 2024,
  • [2] Disfluencies and Fine-Tuning Pre-trained Language Models for Detection of Alzheimer's Disease
    Yuan, Jiahong
    Bian, Yuchen
    Cai, Xingyu
    Huang, Jiaji
    Ye, Zheng
    Church, Kenneth
    [J]. INTERSPEECH 2020, 2020, : 2162 - 2166
  • [3] Enhancing Alzheimer's Disease Classification with Transfer Learning: Fine-tuning a Pre-trained Algorithm
    Boudi, Abdelmounim
    He, Jingfei
    Abd El Kader, Isselmou
    [J]. CURRENT MEDICAL IMAGING, 2024,
  • [4] Pruning Pre-trained Language Models Without Fine-Tuning
    Jiang, Ting
    Wang, Deqing
    Zhuang, Fuzhen
    Xie, Ruobing
    Xia, Feng
    [J]. PROCEEDINGS OF THE 61ST ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, ACL 2023, VOL 1, 2023, : 594 - 605
  • [5] Span Fine-tuning for Pre-trained Language Models
    Bao, Rongzhou
    Zhang, Zhuosheng
    Zhao, Hai
    [J]. FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, EMNLP 2021, 2021, : 1970 - 1979
  • [6] Overcoming Catastrophic Forgetting for Fine-Tuning Pre-trained GANs
    Zhang, Zeren
    Li, Xingjian
    Hong, Tao
    Wang, Tianyang
    Ma, Jinwen
    Xiong, Haoyi
    Xu, Cheng-Zhong
    [J]. MACHINE LEARNING AND KNOWLEDGE DISCOVERY IN DATABASES: RESEARCH TRACK, ECML PKDD 2023, PT V, 2023, 14173 : 293 - 308
  • [7] Waste Classification by Fine-Tuning Pre-trained CNN and GAN
    Alsabei, Amani
    Alsayed, Ashwaq
    Alzahrani, Manar
    Al-Shareef, Sarah
    [J]. INTERNATIONAL JOURNAL OF COMPUTER SCIENCE AND NETWORK SECURITY, 2021, 21 (08): : 65 - 70
  • [8] Fine-tuning pre-trained voice conversion model for adding new target speakers with limited data
    Koshizuka, Takeshi
    Ohmura, Hidefumi
    Katsurada, Kouichi
    [J]. INTERSPEECH 2021, 2021, : 1339 - 1343
  • [9] Virtual Data Augmentation: A Robust and General Framework for Fine-tuning Pre-trained Models
    Zhou, Kun
    Zhao, Wayne Xin
    Wang, Sirui
    Zhang, Fuzheng
    Wu, Wei
    Wen, Ji-Rong
    [J]. 2021 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP 2021), 2021, : 3875 - 3887
  • [10] Fine-Tuning Pre-Trained Model to Extract Undesired Behaviors from App Reviews
    Zhang, Wenyu
    Wang, Xiaojuan
    Lai, Shanyan
    Ye, Chunyang
    Zhou, Hui
    [J]. 2022 IEEE 22ND INTERNATIONAL CONFERENCE ON SOFTWARE QUALITY, RELIABILITY AND SECURITY, QRS, 2022, : 1125 - 1134