A longitudinal multi-modal dataset for dementia monitoring and diagnosis

Cited by: 0
Authors
Gkoumas, Dimitris [1]
Wang, Bo [2]
Tsakalidis, Adam [1,3]
Wolters, Maria [3,4]
Purver, Matthew [1,3,5]
Zubiaga, Arkaitz [1]
Liakata, Maria [1,3]
Affiliations
[1] Queen Mary Univ London, Sch Elect Engn & Comp Sci, London, England
[2] Massachusetts Gen Hosp, Ctr Precis Psychiat, Boston, MA USA
[3] Alan Turing Inst, London, England
[4] Univ Edinburgh, Sch Informat, Edinburgh, Scotland
[5] Jozef Stefan Inst, Dept Knowledge Technol, Ljubljana, Slovenia
Funding
UK Engineering and Physical Sciences Research Council (EPSRC); Wellcome Trust (UK);
Keywords
Longitudinal multi-modal dementia corpus; Computational linguistics; Longitudinal dementia monitoring; ALZHEIMERS-DISEASE; SPEECH; PICTURE; REMINISCENCE; COHERENCE; DECLINE;
DOI
10.1007/s10579-023-09718-4
Chinese Library Classification (CLC)
TP39 [Computer applications]
Discipline code
081203; 0835
Abstract
Dementia affects cognitive functions of adults, including memory, language, and behaviour. Standard diagnostic biomarkers such as MRI are costly, whilst neuropsychological tests suffer from sensitivity issues in detecting dementia onset. The analysis of speech and language has emerged as a promising and non-intrusive technology for diagnosing and monitoring dementia. However, most work in this direction ignores the multi-modal nature of human communication and the interactive aspects of everyday conversation. Moreover, most studies overlook changes in cognitive status over time, owing to the lack of consistent longitudinal data. Here we introduce a novel fine-grained longitudinal multi-modal corpus collected in a natural setting from healthy controls and people with dementia over two phases, each spanning 28 sessions. The corpus consists of spoken conversations, a subset of which are transcribed, as well as typed and written thoughts with associated extra-linguistic information such as pen strokes and keystrokes. We present the data collection process and describe the corpus in detail. Furthermore, we establish baselines for capturing longitudinal changes in language across different modalities for the two cohorts, healthy controls and people with dementia, and outline future research directions enabled by the corpus.
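The baselines for capturing longitudinal language change are not detailed in this record. Purely as an illustrative sketch, the Python snippet below tracks one simple lexical feature (type-token ratio) across a participant's sessions and fits a linear trend to it; the session format, the feature choice, and the function names are assumptions made for illustration and do not reflect the authors' actual baseline pipeline.

# Illustrative sketch only (assumed data format; not the paper's baseline method).
# Tracks a crude lexical-diversity measure (type-token ratio, TTR) across a
# participant's sessions and fits a linear trend to approximate longitudinal drift.
from statistics import linear_regression  # available in Python 3.10+

def type_token_ratio(text: str) -> float:
    """Ratio of unique tokens to total tokens in a transcript."""
    tokens = text.lower().split()
    return len(set(tokens)) / len(tokens) if tokens else 0.0

def longitudinal_slope(sessions: list[tuple[int, str]]) -> float:
    """Slope of TTR over session index; a negative slope suggests declining lexical diversity."""
    xs = [idx for idx, _ in sessions]
    ys = [type_token_ratio(text) for _, text in sessions]
    slope, _intercept = linear_regression(xs, ys)
    return slope

if __name__ == "__main__":
    # Toy transcripts standing in for sessions 1..3 of a single participant.
    sessions = [
        (1, "we walked to the park and fed the ducks by the pond"),
        (2, "we walked to the park and we fed the ducks there"),
        (3, "we went to the the park and fed the ducks the ducks"),
    ]
    print(f"TTR slope across sessions: {longitudinal_slope(sessions):+.3f}")

In practice, a baseline over this kind of corpus would use richer, modality-appropriate features per session (e.g. syntactic complexity or disfluencies for speech, keystroke or pen-stroke dynamics for typed and written input) rather than a single lexical ratio; the trend-fitting idea stays the same.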
Pages: 883-902
Page count: 20
Related Papers
50 records in total
  • [31] Longitudinal multi-modal neuroimaging in opsoclonus–myoclonus syndrome
    Sun-Young Oh
    Rainer Boegle
    Peter zu Eulenburg
    Matthias Ertl
    Ji-Soo Kim
    Marianne Dieterich
    [J]. Journal of Neurology, 2017, 264 : 512 - 519
  • [32] GRAPE: A multi-modal dataset of longitudinal follow-up visual field and fundus images for glaucoma management
    Huang, Xiaoling
    Kong, Xiangyin
    Shen, Ziyan
    Ouyang, Jing
    Li, Yunxiang
    Jin, Kai
    Ye, Juan
    [J]. SCIENTIFIC DATA, 2023, 10 (01)
  • [34] Gesture Recognition and Multi-modal Fusion on a New Hand Gesture Dataset
    Schak, Monika
    Gepperth, Alexander
    [J]. PATTERN RECOGNITION APPLICATIONS AND METHODS, ICPRAM 2021, ICPRAM 2022, 2023, 13822 : 76 - 97
  • [35] SDT: A SYNTHETIC MULTI-MODAL DATASET FOR PERSON DETECTION AND POSE CLASSIFICATION
    Pramerdorfer, C.
    Strohmayer, J.
    Kampel, M.
    [J]. 2020 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2020, : 1611 - 1615
  • [36] Dataset and Models for Item Recommendation Using Multi-Modal User Interactions
    Bruun, Simone Borg
    Balog, Krisztian
    Maistro, Maria
    [J]. PROCEEDINGS OF THE 47TH INTERNATIONAL ACM SIGIR CONFERENCE ON RESEARCH AND DEVELOPMENT IN INFORMATION RETRIEVAL, SIGIR 2024, 2024, : 709 - 718
  • [37] A Benchmark Dataset and Comparison Study for Multi-modal Human Action Analytics
    Liu, Jiaying
    Song, Sijie
    Liu, Chunhui
    Li, Yanghao
    Hu, Yueyu
    [J]. ACM TRANSACTIONS ON MULTIMEDIA COMPUTING COMMUNICATIONS AND APPLICATIONS, 2020, 16 (02)
  • [38] Multi-Modal Dataset Generation using Domain Randomization for Object Detection
    Marez, Diego
    Nans, Lena
    Borden, Samuel
    [J]. GEOSPATIAL INFORMATICS XI, 2021, 11733
  • [39] WinSet: The First Multi-Modal Window Dataset for Heterogeneous Window States
    Fan, Tzu-Yi
    Tsai, Tun-Chi
    Hsu, Cheng-Hsin
    Liu, Fanqi
    Venkatasubramanian, Nalini
    [J]. BUILDSYS'21: PROCEEDINGS OF THE 2021 ACM INTERNATIONAL CONFERENCE ON SYSTEMS FOR ENERGY-EFFICIENT BUILT ENVIRONMENTS, 2021, : 192 - 195
  • [40] Multi-Modal Fingerprint Presentation Attack Detection: Evaluation on a New Dataset
    Spinoulas L.
    Mirzaalian H.
    Hussein M.E.
    Abdalmageed W.
    [J]. Institute of Electrical and Electronics Engineers Inc. (03): 347 - 364