Multimodal reasoning for automatic model construction

Cited: 0
|
Authors
Stolle, R [1 ]
Bradley, E [1 ]
Affiliation
[1] Univ Colorado, Dept Comp Sci, Boulder, CO 80309 USA
Keywords
DOI
None available
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
This paper describes a program called PRET that automates system identification, the process of finding a dynamical model of a black-box system. PRET performs both structural identification and parameter estimation by integrating several reasoning modes: qualitative reasoning, qualitative simulation, numerical simulation, geometric reasoning, constraint reasoning, resolution, reasoning with abstraction levels, declarative meta-level control, and a simple form of truth maintenance. Unlike other modeling programs that map structural or functional descriptions to model fragments, PRET combines hypotheses about the mathematics involved into candidate models that are intelligently tested against observations about the target system. We give two examples of system identification tasks that this automated modeling tool has successfully performed. The first, a simple linear system, was chosen because it facilitates a brief and clear presentation of PRET's features and reasoning techniques. In the second example, a difficult real-world modeling task, we show how PRET models a radio-controlled car used in the University of British Columbia's soccer-playing robot project.
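The generate-and-test strategy the abstract describes can be illustrated with a minimal sketch: propose candidate model structures, fit each structure's free parameter to the observations by least squares, and keep the candidate with the smallest residual. This is an illustrative toy, not PRET's actual implementation; the function names and the one-parameter linear-regression setup are assumptions made for brevity.

```python
# Toy sketch of structural identification plus parameter estimation:
# each candidate is a hypothesized basis function f(t), the model is
# y = a * f(t), and the best candidate minimizes the squared residual.

def fit_and_score(basis, ts, ys):
    """Closed-form least-squares fit of y ~ a * basis(t).

    Returns (a, residual sum of squares)."""
    fs = [basis(t) for t in ts]
    a = sum(f * y for f, y in zip(fs, ys)) / sum(f * f for f in fs)
    rss = sum((y - a * f) ** 2 for f, y in zip(fs, ys))
    return a, rss

def identify(candidates, ts, ys):
    """Fit every candidate structure and return the best (name, a, rss)."""
    scored = [(name, *fit_and_score(basis, ts, ys))
              for name, basis in candidates]
    return min(scored, key=lambda s: s[2])

if __name__ == "__main__":
    ts = [0.0, 1.0, 2.0, 3.0, 4.0]
    ys = [3.0 * t for t in ts]  # synthetic "observations" of a linear system
    candidates = [("linear", lambda t: t),
                  ("quadratic", lambda t: t * t)]
    name, a, rss = identify(candidates, ts, ys)
    print(name, a)  # the linear hypothesis fits exactly, with a = 3.0
```

PRET additionally prunes candidates with qualitative and geometric reasoning before resorting to numerical fitting; the sketch above captures only the final hypothesize-fit-test loop.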
Pages: 181-188
Page count: 8