A Chinese multi-modal neuroimaging data release for increasing diversity of human brain mapping

Cited by: 0
Authors
Peng Gao
Hao-Ming Dong
Si-Man Liu
Xue-Ru Fan
Chao Jiang
Yin-Shan Wang
Daniel Margulies
Hai-Fang Li
Xi-Nian Zuo
Affiliations
[1] Taiyuan University of Technology, College of Information and Computer
[2] Beijing Normal University, State Key Laboratory of Cognitive Neuroscience and Learning
[3] National Basic Science Data Center
[4] Chinese Academy of Sciences, Institute of Psychology
[5] Capital Normal University, School of Psychology
[6] Centre National de la Recherche Scientifique, Frontlab, Brain and Spinal Cord Institute
[7] Beijing Normal University, Developmental Population Neuroscience Research Center, IDG/McGovern Institute for Brain Research
[8] Nanning Normal University, Key Laboratory of Brain and Education, School of Education Science
Abstract
The use of big data is becoming standard practice in the neuroimaging field through data-sharing initiatives. It is important for the community to recognize that such open-science efforts must protect personal information, especially facial information, when raw neuroimaging data are shared. An ideal face-anonymization tool should not disturb subsequent brain-tissue extraction or further morphological measurements. Using high-resolution magnetic resonance imaging (MRI) head images from 215 healthy Chinese participants, we discovered and validated a template effect on face anonymization. Facial anonymization improved when Chinese head templates, rather than Western templates, were applied to obscure the faces in Chinese brain images. This finding has critical implications for international brain-imaging data sharing. To facilitate further investigation of potential culture-related impacts and to increase the diversity of data sharing for human brain mapping, we released the 215 Chinese multi-modal MRI datasets to the public as a database for imaging Chinese young brains, named 'I See your Brains (ISYB)', via the Science Data Bank (https://doi.org/10.11922/sciencedb.00740).
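The template effect described above concerns how defacing is typically performed: a face mask derived from a population head template is registered to each individual scan and used to blank facial voxels while leaving brain tissue intact. The following is a minimal, hypothetical sketch of that mask-application step (not the authors' released pipeline), assuming the nibabel package is available and the face mask has already been registered to the subject image; all file names are placeholder assumptions.

import nibabel as nib

def deface_with_mask(head_path, facemask_path, out_path):
    """Zero out facial voxels flagged by a face mask already registered to the head image."""
    head = nib.load(head_path)
    mask = nib.load(facemask_path)

    data = head.get_fdata()
    face = mask.get_fdata() > 0   # True where the template-derived face mask covers the face

    data[face] = 0                # obscure facial features; brain voxels remain untouched
    nib.save(nib.Nifti1Image(data, head.affine, head.header), out_path)

# Hypothetical usage; in practice the face mask would be derived from a population
# head template (e.g., a Chinese versus a Western template) and registered to the
# individual's T1-weighted image beforehand.
# deface_with_mask("sub-01_T1w.nii.gz", "facemask_in_native_space.nii.gz",
#                  "sub-01_T1w_defaced.nii.gz")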
Related papers
50 records in total
  • [1] A Chinese multi-modal neuroimaging data release for increasing diversity of human brain mapping
    Gao, Peng
    Dong, Hao-Ming
    Liu, Si-Man
    Fan, Xue-Ru
    Jiang, Chao
    Wang, Yin-Shan
    Margulies, Daniel
    Li, Hai-Fang
    Zuo, Xi-Nian
    SCIENTIFIC DATA, 2022, 9 (01)
  • [2] A multi-modal neuroimaging data release for Meige Syndrome and Facial Paralysis Research
    Gao, Peng
    Luan, Jixin
    Yang, Aocai
    Xu, Manxi
    Lv, Kuan
    Hu, Pianpian
    Yu, Hongwei
    Yao, Zeshan
    Ma, Guolin
    SCIENTIFIC DATA, 2025, 12 (01)
  • [3] FUSION OF MULTI-MODAL NEUROIMAGING DATA AND ASSOCIATION WITH COGNITIVE DATA
    LoPresto, Mark D.
    Akhonda, M. A. B. S.
    Calhoun, Vince D.
    Adali, Tülay
    2023 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING WORKSHOPS, ICASSPW, 2023,
  • [4] A multi-subject, multi-modal human neuroimaging dataset
    Wakeman, Daniel G.
    Henson, Richard N.
    SCIENTIFIC DATA, 2015, 2
  • [5] A multi-subject, multi-modal human neuroimaging dataset
    Daniel G Wakeman
    Richard N Henson
    Scientific Data, 2
  • [6] Multi-modal Neuroimaging Phenotyping of Mnemonic Anosognosia in the Aging Brain
    Elisenda Bueichekú
    Ibai Diez
    Geoffroy Gagliardi
    Chan-Mi Kim
    Kayden Mimmack
    Jorge Sepulcre
    Patrizia Vannini
    Communications Medicine, 4
  • [7] Multi-modal Neuroimaging Phenotyping of Mnemonic Anosognosia in the Aging Brain
    Bueicheku, Elisenda
    Diez, Ibai
    Gagliardi, Geoffroy
    Kim, Chan-Mi
    Mimmack, Kayden
    Sepulcre, Jorge
    Vannini, Patrizia
    COMMUNICATIONS MEDICINE, 2024, 4 (01)
  • [8] Spatial mapping of multi-modal data in neuroscience
    Hawrylycz, Mike
    Sunkin, Susan
    Ng, Lydia
    METHODS, 2015, 73 : 1 - 3
  • [9] Multi-modal mapping
    Yates, Darran
    NATURE REVIEWS NEUROSCIENCE, 2016, 17 (09) : 536 - 536
  • [10] ACMTF for Fusion of Multi-Modal Neuroimaging Data and Identification of Biomarkers
    Acar, Evrim
    Levin-Schwartz, Yuri
    Calhoun, Vince D.
    Adali, Tulay
    2017 25TH EUROPEAN SIGNAL PROCESSING CONFERENCE (EUSIPCO), 2017, : 643 - 647