Learning the Effects of Physical Actions in a Multi-modal Environment

Cited: 0
Authors:
Dagan, Gautier [1]
Keller, Frank [1]
Lascarides, Alex [1]
Affiliations:
[1] Univ Edinburgh, Sch Informat, Edinburgh, Midlothian, Scotland
Keywords:
DOI: (not available)
CLC Classification: TP18 [Artificial Intelligence Theory]
Discipline Codes: 081104; 0812; 0835; 1405
Abstract
Large Language Models (LLMs) handle physical commonsense information inadequately. As a result of being trained in a disembodied setting, LLMs often fail to predict an action's outcome in a given environment. However, predicting the effects of an action before it is executed is crucial in planning, where coherent sequences of actions are often needed to achieve a goal. Therefore, we introduce the multi-modal task of predicting the outcomes of actions solely from realistic sensory inputs (images and text). Next, we extend an LLM to model latent representations of objects to better predict action outcomes in an environment. We show that multi-modal models can capture physical commonsense when augmented with visual information. Finally, we evaluate our model's performance on novel actions and objects and find that combining modalities helps models generalize and learn physical commonsense reasoning better.
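The abstract describes fusing visual and textual inputs to predict the outcome of an action before execution. As a minimal illustrative sketch only (this is not the authors' architecture; all dimensions, names, and the late-fusion design are hypothetical), one could concatenate image and text features of the pre-action state and map them to a predicted post-action state representation:

```python
import numpy as np

# Hypothetical feature dimensions, for illustration only.
IMG_DIM, TXT_DIM, STATE_DIM = 4, 3, 5

rng = np.random.default_rng(0)

# Toy late-fusion head: a single linear map from the concatenated
# image + text features to a latent post-action state vector.
W = rng.normal(size=(STATE_DIM, IMG_DIM + TXT_DIM))
b = np.zeros(STATE_DIM)

def predict_outcome(img_feats, txt_feats):
    """Predict a latent representation of the environment after the action."""
    fused = np.concatenate([img_feats, txt_feats])
    return W @ fused + b

img = rng.normal(size=IMG_DIM)   # e.g. an image encoder's output for the scene
txt = rng.normal(size=TXT_DIM)   # e.g. an embedding of "push the red block"
next_state = predict_outcome(img, txt)
print(next_state.shape)  # (5,)
```

In a real system the linear map would be replaced by a trained model (the paper extends an LLM with latent object representations), but the sketch shows the task's input/output shape: two modalities in, one predicted next-state representation out.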
Pages: 133-148 (16 pages)
Related Papers (50 total)
  • [1] Learning in an Inclusive Multi-Modal Environment
    Graham, Deryn
    Benest, Ian
    Nicholl, Peter
    JOURNAL OF CASES ON INFORMATION TECHNOLOGY, 2010, 12 (03) : 28 - 44
  • [2] The integration of information in a digital, multi-modal learning environment
    Schueler, Anne
    LEARNING AND INSTRUCTION, 2019, 59 : 76 - 87
  • [3] Multi-modal anchor adaptation learning for multi-modal summarization
    Chen, Zhongfeng
    Lu, Zhenyu
    Rong, Huan
    Zhao, Chuanjun
    Xu, Fan
    NEUROCOMPUTING, 2024, 570
  • [4] Creating a multi-modal self-help learning environment
    Denny, M.
    Cunningham, J.
    Ridge, M.
    JOURNAL OF APPLIED RESEARCH IN INTELLECTUAL DISABILITIES, 2010, 23 (05) : 493 - 493
  • [5] Unsupervised Multi-modal Learning
    Iqbal, Mohammed Shameer
    ADVANCES IN ARTIFICIAL INTELLIGENCE (AI 2015), 2015, 9091 : 343 - 346
  • [6] Learning Multi-modal Similarity
    McFee, Brian
    Lanckriet, Gert
    JOURNAL OF MACHINE LEARNING RESEARCH, 2011, 12 : 491 - 523
  • [7] MULTI-MODAL LEARNING - A LEARNING-ENVIRONMENT FOR THE 21ST-CENTURY
    DOBSON, HD
    BULLETIN OF SCIENCE TECHNOLOGY & SOCIETY, 1988, 8 (06) : 595 - 600
  • [8] An Advanced Learning Environment Aided by Recognition of Multi-modal Social Signals
    Chen, Jingying
    Chen, Dan
    Wang, Lizhe
    Lemon, Oliver
    ADVANCES IN WEB-BASED LEARNING-ICWL 2010, 2010, 6483 : 41 - +
  • [9] Physical Querying with Multi-Modal Sensing
    Baek, Iljoo
    Stine, Taylor
    Dash, Denver
    Xiao, Fanyi
    Sheikh, Yaser
    Movshovitz-Attias, Yair
    Chen, Mei
    Hebert, Martial
    Kanade, Takeo
    2014 IEEE WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV), 2014, : 183 - 190
  • [10] Multi-modal and multi-granular learning
    Zhang, Bo
    Zhang, Ling
    ADVANCES IN KNOWLEDGE DISCOVERY AND DATA MINING, PROCEEDINGS, 2007, 4426 : 9 - +