Speakers prioritise affordance-based object semantics in scene descriptions

Citations: 2
Authors
Barker, M. [1 ,2 ]
Rehrig, G. [1 ]
Ferreira, F. [1 ]
Affiliations
[1] Univ Calif Davis, Dept Psychol, Davis, CA USA
[2] Univ Calif Davis, Dept Psychol, Davis, CA 95616 USA
Funding
US National Institutes of Health; US National Science Foundation
Keywords
Language production; linearisation; discourse production; scene semantics; eye movements; SPREADING-ACTIVATION THEORY; CONCEPTUAL ACCESSIBILITY; SENTENCE PRODUCTION; LANGUAGE PRODUCTION; LEXICAL ACCESS; EYE-MOVEMENTS; LINEARIZATION; RETRIEVAL; FREQUENCY; DISSOCIATIONS;
DOI
10.1080/23273798.2023.2190136
Chinese Library Classification (CLC)
R36 [Pathology]; R76 [Otorhinolaryngology]
Subject classification codes
100104; 100213
Abstract
This work investigates the linearisation strategies speakers use when describing real-world scenes, in order to better understand production plans for multi-utterance sequences. In this study, 30 participants described real-world scenes aloud. To investigate which semantic features of scenes predict order of mention, we quantified three features (meaning, graspability, and interactability) using two techniques (whole-object ratings and feature map values). We found that object-level semantic features, namely those that are affordance-based, predicted order of mention in a scene description task. Our findings provide the first evidence for an object-related semantic feature that guides linguistic ordering decisions and offer theoretical support for the role of object semantics in scene viewing and description.
Pages: 1045-1067
Page count: 23