Gesture helps learners learn, but not merely by guiding their visual attention

Cited by: 59
Authors
Wakefield, Elizabeth [1 ,2 ]
Novack, Miriam A. [1 ,3 ]
Congdon, Eliza L. [1 ,4 ]
Franconeri, Steven [3 ]
Goldin-Meadow, Susan [1 ]
Affiliations
[1] Univ Chicago, Dept Psychol, 5848 S Univ Ave, Chicago, IL 60637 USA
[2] Loyola Univ, Dept Psychol, 6525 N Sheridan Rd, Chicago, IL 60626 USA
[3] Northwestern Univ, Dept Psychol, Evanston, IL USA
[4] Bucknell Univ, Dept Psychol, Lewisburg, PA 17837 USA
Funding
National Science Foundation (USA);
Keywords
TEACHERS GESTURES; SPEECH; CHILDREN; HANDS; IDEAS;
DOI
10.1111/desc.12664
Chinese Library Classification
B844 [Developmental psychology (human psychology)];
Discipline code
040202;
Abstract
Teaching a new concept through gestures (hand movements that accompany speech) facilitates learning above and beyond instruction through speech alone (e.g., Singer & Goldin-Meadow, ). However, the mechanisms underlying this phenomenon are still under investigation. Here, we use eye tracking to explore one often-proposed mechanism: gesture's ability to direct visual attention. Behaviorally, we replicate previous findings: children perform significantly better on a posttest after learning through Speech+Gesture instruction than through Speech Alone instruction. Using eye tracking measures, we show that children who watch a math lesson with gesture do allocate their visual attention differently from children who watch a math lesson without gesture: they look more to the problem being explained, less to the instructor, and are more likely to synchronize their visual attention with information presented in the instructor's speech (i.e., follow along with speech) than children who watch the no-gesture lesson. The striking finding is that, even though these looking patterns positively predict learning outcomes, the patterns do not mediate the effects of training condition (Speech Alone vs. Speech+Gesture) on posttest success. We find instead a complex relation between gesture and visual attention in which gesture moderates the impact of visual looking patterns on learning: following along with speech predicts learning for children in the Speech+Gesture condition, but not for children in the Speech Alone condition. Gesture's beneficial effects on learning thus come not merely from its ability to guide visual attention, but also from its ability to synchronize with speech and affect what learners glean from that speech.
Pages: 12