Predicting actions from subtle preparatory movements

Cited by: 17
Authors
Vaziri-Pashkam, Maryam [1 ]
Cormiea, Sarah [1 ]
Nakayama, Ken [1 ]
Affiliations
[1] Harvard Univ, Dept Psychol, Vis Sci Lab, 33 Kirkland St, Cambridge, MA 02138 USA
Keywords
Action prediction; Action reading; Motor interaction; Competitive interaction; Biological motion; BIOLOGICAL MOTION; POINT-LIGHT; VISUAL-PERCEPTION; MOTOR ANTICIPATION; ACTION SIMULATION; RECOGNITION; EYE; INFORMATION; EXPERTISE; INTENTION;
DOI
10.1016/j.cognition.2017.06.014
Chinese Library Classification (CLC)
B84 [Psychology]
Subject classification codes
04; 0402
Abstract
To study how people anticipate others' actions, we designed a competitive reaching task. Subjects faced each other separated by a Plexiglas screen and their finger movements in 3D space were recorded with sensors. The first subject (Attacker) was instructed to touch one of two horizontally arranged targets on the screen. The other subject (Blocker) touched the same target as quickly as possible. Average finger reaction times (fRTs) were fast, much faster than reactions to a dot moving on the screen in the same manner as the Attacker's finger. This suggests the presence of subtle preparatory cues in other parts of the Attacker's body. We also recorded videos of Attackers' movements and had Blockers play against unedited videos as well as videos in which all preparatory cues had been removed by editing out frames before the Attacker's finger movement started. Blockers' fRTs in response to the edited videos were significantly slower (~90 ms). Also, reversing the preparatory movements in the videos tricked the Blockers into choosing the incorrect target at the beginning of their movement. Next, we occluded various body parts of the Attacker and showed that fRTs slow down only when most of the Attacker's body is occluded. These results indicate that informative cues are widely distributed over the body and that Blockers can use any piece from a set of redundant cues for action prediction. Reaction times in each condition remained constant over the duration of the testing sessions, indicating a lack of learning during the experiment. These results suggest that during a dynamic two-person interaction, human subjects possess a remarkable, built-in action-reading capacity that allows them to predict others' goals and respond efficiently in this competitive setting. Published by Elsevier B.V.
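
The central dependent measure described in the abstract is the Blocker's finger reaction time (fRT): the delay between the onset of the Attacker's finger movement and the onset of the Blocker's response. As a minimal sketch (not the authors' analysis code), the snippet below shows one way such an onset-to-onset latency could be computed from 3D finger-position recordings; the velocity-threshold onset criterion, the sampling rate, and the threshold value are illustrative assumptions rather than details reported in the paper.

import numpy as np

def movement_onset(positions, fs=240.0, speed_threshold=0.05):
    # Index of the first sample whose speed exceeds the threshold.
    # positions: (N, 3) array of finger coordinates in meters.
    # fs: sampling rate in Hz (assumed); speed_threshold: onset criterion in m/s (assumed).
    velocity = np.gradient(positions, 1.0 / fs, axis=0)   # finite-difference velocity
    speed = np.linalg.norm(velocity, axis=1)               # per-sample speed
    above = np.flatnonzero(speed > speed_threshold)
    return int(above[0]) if above.size else None

def finger_rt_ms(attacker_pos, blocker_pos, fs=240.0):
    # Blocker fRT = Blocker onset time minus Attacker onset time, in milliseconds.
    a = movement_onset(attacker_pos, fs)
    b = movement_onset(blocker_pos, fs)
    if a is None or b is None:
        return None
    return (b - a) / fs * 1000.0

Comparing mean fRTs computed this way across conditions (live Attacker, unedited video, video with preparatory frames removed, occluded body parts) would yield the kind of condition contrasts summarized above.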
Pages: 65-75
Page count: 11
Related papers
50 records in total
  • [21] Youth, actions and movements. Approaches from southern Mexico
    Vommaro, Pablo
    ESPACIO ABIERTO, 2021, 30 (03) : 252 - 255
  • [22] From eye movements to actions: how batsmen hit the ball
    Land, MF
    McLeod, P
    NATURE NEUROSCIENCE, 2000, 3 (12) : 1340 - 1345
  • [23] From movements to actions: Two mechanisms for learning action sequences
    Endress, Ansgar D.
    Wood, Justin N.
    COGNITIVE PSYCHOLOGY, 2011, 63 (03) : 141 - 171
  • [24] SEEING BODILY MOVEMENTS AS ACTIONS
    ALDRICH, VC
    AMERICAN PHILOSOPHICAL QUARTERLY, 1967, 4 (03) : 222 - 230
  • [25] Predicting file system actions from prior events
    Kroeger, TM
    Long, DDE
    PROCEEDINGS OF THE USENIX 1996 ANNUAL TECHNICAL CONFERENCE, 1996, : 319 - 328
  • [26] Predicting Human Errors from Gaze and Cursor Movements
    Saboundji, Rachid Riad
    Rill, Robert Adrian
    2020 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2020,
  • [27] Predicting the Valence of a Scene from Observers' Eye Movements
    R-Tavakoli, Hamed
    Atyabi, Adham
    Rantanen, Antti
    Laukka, Seppo J.
    Nefti-Meziani, Samia
    Heikkila, Janne
    PLOS ONE, 2015, 10 (09):
  • [28] Recognising subtle emotional expressions: The role of facial movements
    Bould, Emma
    Morris, Neil
    Wink, Brian
    COGNITION & EMOTION, 2008, 22 (08) : 1569 - 1587
  • [29] Characterizing Subtle Facial Movements via Riemannian Manifold
    Hong, Xiaopeng
    Peng, Wei
    Harandi, Mehrtash
    Zhou, Ziheng
    Pietikainen, Matti
    Zhao, Guoying
    ACM TRANSACTIONS ON MULTIMEDIA COMPUTING COMMUNICATIONS AND APPLICATIONS, 2019, 15 (03)
  • [30] The AR apprenticeship: Replication and omnidirectional viewing of subtle movements
    Sielhorst, T
    Traub, J
    Navab, N
    ISMAR 2004: THIRD IEEE AND ACM INTERNATIONAL SYMPOSIUM ON MIXED AND AUGMENTED REALITY, 2004, : 290 - 291