Cross-Modal Object Recognition Is Viewpoint-Independent

Cited: 50
Authors
Lacey, Simon [1 ]
Peters, Andrew [1 ]
Sathian, K. [1 ,2 ,3 ,4 ]
Affiliations
[1] Emory Univ, Dept Neurol, Atlanta, GA 30322 USA
[2] Emory Univ, Dept Rehabil Med, Atlanta, GA 30322 USA
[3] Emory Univ, Dept Psychol, Atlanta, GA 30322 USA
[4] Atlanta Vet Affairs Med Ctr, Rehabil Res & Dev Ctr Excellence, Decatur, GA USA
Source
PLOS ONE | 2007 / Vol. 2 / Issue 09
Funding
National Science Foundation (US);
Keywords
DOI
10.1371/journal.pone.0000890
Chinese Library Classification
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biological Sciences]; N [General Natural Sciences];
Subject Classification Codes
07 ; 0710 ; 09 ;
Abstract
Background. Previous research suggests that visual and haptic object recognition are viewpoint-dependent both within- and cross-modally. However, this conclusion may not be generally valid as it was reached using objects oriented along their extended y-axis, resulting in differential surface processing in vision and touch. In the present study, we removed this differential by presenting objects along the z-axis, thus making all object surfaces more equally available to vision and touch. Methodology/Principal Findings. Participants studied previously unfamiliar objects, in groups of four, using either vision or touch. Subsequently, they performed a four-alternative forced-choice object identification task with the studied objects presented in both unrotated and rotated (180 degrees about the x-, y-, and z-axes) orientations. Rotation impaired within-modal recognition accuracy in both vision and touch, but not cross-modal recognition accuracy. Within-modally, visual recognition accuracy was reduced by rotation about the x- and y-axes more than the z-axis, whilst haptic recognition was equally affected by rotation about all three axes. Cross-modal (but not within-modal) accuracy correlated with spatial (but not object) imagery scores. Conclusions/Significance. The viewpoint-independence of cross-modal object identification points to its mediation by a high-level abstract representation. The correlation between spatial imagery scores and cross-modal performance suggests that construction of this high-level representation is linked to the ability to perform spatial transformations. Within-modal viewpoint-dependence appears to have a different basis in vision than in touch, possibly due to surface occlusion being important in vision but not touch.
Pages: 6
Related Papers
50 records total
  • [31] A Viewpoint-Independent Statistical Method for Fall Detection
    Zhang, Zhong
    Liu, Weihua
    Metsis, Vangelis
    Athitsos, Vassilis
    2012 21ST INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR 2012), 2012, : 3626 - 3630
  • [32] THE RELATIONSHIP BETWEEN METAPHORICAL AND CROSS-MODAL ABILITIES - FAILURE TO DEMONSTRATE METAPHORICAL RECOGNITION IN CHIMPANZEES CAPABLE OF CROSS-MODAL RECOGNITION
    ETTLINGER, G
    NEUROPSYCHOLOGIA, 1981, 19 (04) : 583 - 586
  • [33] Bumble bees display cross-modal object recognition between visual and tactile senses
    Solvi, Cwyn
    Al-Khudhairy, Selene Gutierrez
    Chittka, Lars
    SCIENCE, 2020, 367 (6480) : 910 - +
  • [34] A VIEWPOINT INDEPENDENT MODELING APPROACH TO OBJECT RECOGNITION
    MAGEE, M
    NATHAN, M
    IEEE JOURNAL OF ROBOTICS AND AUTOMATION, 1987, 3 (04): : 351 - 356
  • [35] Viewpoint-independent 3D object segmentation for randomly stacked objects using optical object detection
    Chen, Liang-Chia
    Nguyen, Thanh-Hung
    Lin, Shyh-Tsong
    MEASUREMENT SCIENCE AND TECHNOLOGY, 2015, 26 (10)
  • [36] Implicit learning of viewpoint-independent spatial layouts
    Tsuchiai, Taiga
    Matsumiya, Kazumichi
    Kuriki, Ichiro
    Shioiri, Satoshi
    FRONTIERS IN PSYCHOLOGY, 2012, 3
  • [37] CROSS-MODAL KNOWLEDGE DISTILLATION FOR ACTION RECOGNITION
    Thoker, Fida Mohammad
    Gall, Juergen
    2019 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2019, : 6 - 10
  • [38] Cross-Modal Federated Human Activity Recognition
    Yang, Xiaoshan
    Xiong, Baochen
    Huang, Yi
    Xu, Changsheng
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2024, 46 (08) : 5345 - 5361
  • [39] Cross-modal recognition of familiar conspecifics in goats
    Pitcher, Benjamin J.
    Briefer, Elodie F.
    Baciadonna, Luigi
    McElligott, Alan G.
    ROYAL SOCIETY OPEN SCIENCE, 2017, 4 (02):
  • [40] Characteristics of eye movements in 3-D object learning: Comparison between within-modal and cross-modal object recognition
    Ueda, Yoshiyuki
    Saiki, Jun
    PERCEPTION, 2012, 41 (11) : 1289 - 1298