Cross-Modal Object Recognition Is Viewpoint-Independent

Cited by: 50
Authors
Lacey, Simon [1]
Peters, Andrew [1]
Sathian, K. [1,2,3,4]
Affiliations
[1] Emory Univ, Dept Neurol, Atlanta, GA 30322 USA
[2] Emory Univ, Dept Rehabil Med, Atlanta, GA 30322 USA
[3] Emory Univ, Dept Psychol, Atlanta, GA 30322 USA
[4] Atlanta Vet Affairs Med Ctr, Rehabil Res & Dev Ctr Excellence, Decatur, GA USA
Source
PLOS ONE | 2007, Vol. 2, Issue 09
Funding
US National Science Foundation
DOI
10.1371/journal.pone.0000890
Chinese Library Classification
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biological Sciences]; N [General Natural Sciences]
Discipline Codes
07; 0710; 09
Abstract
Background. Previous research suggests that visual and haptic object recognition are viewpoint-dependent both within- and cross-modally. However, this conclusion may not be generally valid as it was reached using objects oriented along their extended y-axis, resulting in differential surface processing in vision and touch. In the present study, we removed this differential by presenting objects along the z-axis, thus making all object surfaces more equally available to vision and touch. Methodology/Principal Findings. Participants studied previously unfamiliar objects, in groups of four, using either vision or touch. Subsequently, they performed a four-alternative forced-choice object identification task with the studied objects presented in both unrotated and rotated (180 degrees about the x-, y-, and z-axes) orientations. Rotation impaired within-modal recognition accuracy in both vision and touch, but not cross-modal recognition accuracy. Within-modally, visual recognition accuracy was reduced by rotation about the x- and y-axes more than the z-axis, whilst haptic recognition was equally affected by rotation about all three axes. Cross-modal (but not within-modal) accuracy correlated with spatial (but not object) imagery scores. Conclusions/Significance. The viewpoint-independence of cross-modal object identification points to its mediation by a high-level abstract representation. The correlation between spatial imagery scores and cross-modal performance suggests that construction of this high-level representation is linked to the ability to perform spatial transformations. Within-modal viewpoint-dependence appears to have a different basis in vision than in touch, possibly due to surface occlusion being important in vision but not touch.
Pages: 6