Poster: Attempts on detecting Alzheimer's disease by fine-tuning pre-trained model with Gaze Data

Cited: 0
Authors
Nagasawa, Junichi [1 ,2 ]
Nakata, Yuichi [1 ,2 ]
Hiroe, Mamoru [1 ,3 ]
Zheng, Yujia [1 ]
Kawaguchi, Yutaka [1 ]
Maegawa, Yuji [1 ]
Hojo, Naoki [1 ]
Takiguchi, Tetsuya [1 ]
Nakayama, Minoru [4 ]
Uchimura, Maki [1 ]
Sonoda, Yuma [1 ]
Kowa, Hisatomo [1 ]
Nagamatsu, Takashi [1 ]
Affiliations
[1] Kobe Univ, Kobe, Hyogo, Japan
[2] Kwansei Gakuin Univ, Sanda, Japan
[3] Osaka Seikei Univ, Osaka, Japan
[4] Tokyo Inst Technol, Tokyo, Japan
Keywords
Alzheimer's disease; Antisaccade; Eye movement classifier; Fine-tuning
DOI
10.1145/3649902.3655650
CLC Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
This poster presents a study on detecting Alzheimer's disease (AD) from gaze data using deep learning. We adapt an existing pre-trained deep neural network, gazeNet, through transfer learning and fine-tuning. The results suggest that this method could be applied to mild cognitive impairment screening tests.
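The abstract only names the overall approach, so the following is a minimal illustrative sketch of the general transfer-learning recipe it describes: load a pre-trained gaze-sequence backbone, freeze its weights, and train a small AD/control classification head on the new gaze data. The backbone architecture, checkpoint path, tensor shapes, and labels below are assumptions for illustration only and are not taken from the authors' gazeNet code.

# Hypothetical sketch: fine-tuning a pre-trained gaze-event network for
# binary AD-vs-control classification. Architecture, file names, and data
# shapes are illustrative assumptions, not the authors' actual pipeline.
import torch
import torch.nn as nn

class GazeBackbone(nn.Module):
    """Placeholder 1-D CNN over (x, y) gaze sample sequences."""
    def __init__(self, in_channels: int = 2, hidden: int = 64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(in_channels, hidden, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )

    def forward(self, x):                    # x: (batch, 2, time)
        return self.features(x).squeeze(-1)  # -> (batch, hidden)

backbone = GazeBackbone()
# backbone.load_state_dict(torch.load("gazenet_pretrained.pt"))  # assumed checkpoint
for p in backbone.parameters():              # freeze the pre-trained layers
    p.requires_grad = False

model = nn.Sequential(backbone, nn.Linear(64, 2))   # head: AD vs. control

optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3
)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch of antisaccade recordings.
x = torch.randn(8, 2, 500)       # 8 trials, (x, y) channels, 500 samples each
y = torch.randint(0, 2, (8,))    # 0 = control, 1 = AD (labels are illustrative)
loss = criterion(model(x), y)
loss.backward()
optimizer.step()

In this sketch only the linear head receives gradients; unfreezing the last convolutional layers as well would be a common variant when more labeled gaze data are available.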
Pages: 3