A large-scale fMRI dataset for human action recognition

Citations: 0
Authors
Ming Zhou
Zhengxin Gong
Yuxuan Dai
Yushan Wen
Youyi Liu
Zonglei Zhen
Affiliations
[1] Beijing Normal University,State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research
[2] Beijing Normal University,Beijing Key Laboratory of Applied Experimental Psychology, Faculty of Psychology
DOI: not available
Abstract
Human action recognition is a critical capability for our survival, allowing us to interact easily with the environment and others in everyday life. Although the neural basis of action recognition has been widely studied using a few action categories from simple contexts as stimuli, how the human brain recognizes diverse human actions in real-world environments still needs to be explored. Here, we present the Human Action Dataset (HAD), a large-scale functional magnetic resonance imaging (fMRI) dataset for human action recognition. HAD contains fMRI responses to 21,600 video clips from 30 participants. The video clips encompass 180 human action categories and offer a comprehensive coverage of complex activities in daily life. We demonstrate that the data are reliable within and across participants and, notably, capture rich representation information of the observed human actions. This extensive dataset, with its vast number of action categories and exemplars, has the potential to deepen our understanding of human action recognition in natural environments.
Related Papers (50 total; items 11–20 shown)
  • [11] A Large-scale Benchmark Dataset for Event Recognition in Surveillance Video
    Oh, Sangmin
    Hoogs, Anthony
    Perera, Amitha
    Cuntoor, Naresh
    Chen, Chia-Chih
    Lee, Jong Taek
    Mukherjee, Saurajit
    Aggarwal, J. K.
    Lee, Hyungtae
    Davis, Larry
    Swears, Eran
    Wang, Xiaoyang
    Ji, Qiang
    Reddy, Kishore
    Shah, Mubarak
    Vondrick, Carl
    Pirsiavash, Hamed
    Ramanan, Deva
    Yuen, Jenny
    Torralba, Antonio
    Song, Bi
    Fong, Anesco
    Roy-Chowdhury, Amit
    Desai, Mita
    2011 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2011,
  • [12] A large-scale dataset for Chinese historical document recognition and analysis
    Shi, Yongxin
    Peng, Dezhi
    Zhang, Yuyi
    Cao, Jiahuan
    Jin, Lianwen
    SCIENTIFIC DATA, 2025, 12 (01)
  • [13] LSSED: A LARGE-SCALE DATASET AND BENCHMARK FOR SPEECH EMOTION RECOGNITION
    Fan, Weiquan
    Xu, Xiangmin
    Xing, Xiaofen
    Chen, Weidong
    Huang, Dongyan
    2021 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP 2021), 2021, : 641 - 645
  • [14] Human action recognition with a large-scale brain-inspired photonic computer
    Antonik, Piotr
    Marsal, Nicolas
    Brunner, Daniel
    Rontani, Damien
    NATURE MACHINE INTELLIGENCE, 2019, 1 (11) : 530 - 537
  • [16] Human Action Recognition in Large-Scale Datasets Using Histogram of Spatiotemporal Gradients
    Reddy, Kishore K.
    Cuntoor, Naresh
    Perera, Amitha
    Hoogs, Anthony
    2012 IEEE NINTH INTERNATIONAL CONFERENCE ON ADVANCED VIDEO AND SIGNAL-BASED SURVEILLANCE (AVSS), 2012, : 106 - 111
  • [17] UnityShip: A Large-Scale Synthetic Dataset for Ship Recognition in Aerial Images
    He, Boyong
    Li, Xianjiang
    Huang, Bo
    Gu, Enhui
    Guo, Weijie
    Wu, Liaoni
    REMOTE SENSING, 2021, 13 (24)
  • [18] A large-scale dataset for end-to-end table recognition in the wild
    Yang, Fan
    Hu, Lei
    Liu, Xinwu
    Huang, Shuangping
    Gu, Zhenghui
    SCIENTIFIC DATA, 2023, 10 (01)
  • [19] Vietnam-Celeb: a large-scale dataset for Vietnamese speaker recognition
    Pham Viet Thanh
    Nguyen Xuan Thai Hoa
    Hoang Long Vu
    Nguyen Thi Thu Trang
    INTERSPEECH 2023, 2023, : 1918 - 1922