A longitudinal multi-modal dataset for dementia monitoring and diagnosis

Cited by: 0
Authors
Gkoumas, Dimitris [1]
Wang, Bo [2]
Tsakalidis, Adam [1,3]
Wolters, Maria [3,4]
Purver, Matthew [1,3,5]
Zubiaga, Arkaitz [1]
Liakata, Maria [1,3]
Affiliations
[1] Queen Mary Univ London, Sch Elect Engn & Comp Sci, London, England
[2] Massachusetts Gen Hosp, Ctr Precis Psychiat, Boston, MA USA
[3] Alan Turing Inst, London, England
[4] Univ Edinburgh, Sch Informat, Edinburgh, Scotland
[5] Jozef Stefan Inst, Dept Knowledge Technol, Ljubljana, Slovenia
Funding
UK Engineering and Physical Sciences Research Council; Wellcome Trust
Keywords
Longitudinal multi-modal dementia corpus; Computational linguistics; Longitudinal dementia monitoring; ALZHEIMERS-DISEASE; SPEECH; PICTURE; REMINISCENCE; COHERENCE; DECLINE;
DOI
10.1007/s10579-023-09718-4
Chinese Library Classification
TP39 [Computer Applications]
Subject Classification Codes
081203; 0835
Abstract
Dementia affects cognitive functions of adults, including memory, language, and behaviour. Standard diagnostic biomarkers such as MRI are costly, whilst neuropsychological tests suffer from sensitivity issues in detecting dementia onset. The analysis of speech and language has emerged as a promising and non-intrusive technology to diagnose and monitor dementia. Currently, most work in this direction ignores the multi-modal nature of human communication and interactive aspects of everyday conversational interaction. Moreover, most studies ignore changes in cognitive status over time due to the lack of consistent longitudinal data. Here we introduce a novel fine-grained longitudinal multi-modal corpus collected in a natural setting from healthy controls and people with dementia over two phases, each spanning 28 sessions. The corpus consists of spoken conversations, a subset of which are transcribed, as well as typed and written thoughts and associated extra-linguistic information such as pen strokes and keystrokes. We present the data collection process and describe the corpus in detail. Furthermore, we establish baselines for capturing longitudinal changes in language across different modalities for two cohorts, healthy controls and people with dementia, outlining future research directions enabled by the corpus.
Pages: 883-902
Page count: 20