Generation of Visual Representations for Multi-Modal Mathematical Knowledge

Cited: 0
Authors
Wu, Lianlong [1 ]
Choi, Seewon [1 ]
Raggi, Daniel [1 ]
Stockdill, Aaron [2 ]
Garcia, Grecia Garcia [2 ]
Colarusso, Fiorenzo [2 ]
Cheng, Peter C. H. [2 ]
Jamnik, Mateja [1 ]
Affiliations
[1] Univ Cambridge, Cambridge, England
[2] Univ Sussex, Brighton, E Sussex, England
Funding
Engineering and Physical Sciences Research Council (EPSRC), UK
Keywords
(none listed)
DOI
Not available
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
In this paper we introduce MaRE, a tool that generates representations of a given mathematical problem in multiple modalities while ensuring the correctness and interpretability of the transformations between these representations. Its theoretical foundation is Representational Systems Theory (RST), a mathematical framework for studying the structure and transformations of representations. In MaRE's web front-end, a set of probability equations in Bayesian notation can be rigorously transformed into Area Diagrams, Contingency Tables, and Probability Trees with a single click, utilising a back-end engine based on RST. At the same time, MaRE produces a table of the cognitive costs, grounded in the cognitive Representational Interpretive Structure Theory (RIST), that each representation places on a particular user profile. MaRE is general and domain-independent, applicable to any other representation encoded in RST. It may enhance mathematical education and research by facilitating multi-modal knowledge representation and discovery.
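The abstract is prose only; as a minimal, hypothetical sketch of the style of transformation it describes (not MaRE's actual RST engine or API, neither of which is shown in this record), the following Python snippet derives a two-by-two Contingency Table, with marginals, from a probability problem stated in Bayesian notation. Every function and parameter name is an illustrative assumption; an Area Diagram or Probability Tree could be read off the same joint distribution.

```python
# Hypothetical sketch only: NOT MaRE's API. It illustrates the kind of
# rigorous re-representation the abstract describes, deriving a 2x2
# contingency table from a problem stated in Bayesian notation.

def contingency_table(p_a: float, p_b_given_a: float, p_b_given_not_a: float) -> dict:
    """Turn P(A), P(B|A), P(B|not A) into the four joint probabilities."""
    p_not_a = 1.0 - p_a
    return {
        ("A", "B"): p_a * p_b_given_a,
        ("A", "not B"): p_a * (1.0 - p_b_given_a),
        ("not A", "B"): p_not_a * p_b_given_not_a,
        ("not A", "not B"): p_not_a * (1.0 - p_b_given_not_a),
    }

def render(table: dict) -> None:
    """Print the contingency table with row and column marginals."""
    rows, cols = ("A", "not A"), ("B", "not B")
    print(f"{'':>8}" + "".join(f"{c:>8}" for c in cols) + f"{'total':>8}")
    for r in rows:
        cells = [table[(r, c)] for c in cols]
        print(f"{r:>8}" + "".join(f"{v:8.3f}" for v in cells) + f"{sum(cells):8.3f}")
    col_totals = [sum(table[(r, c)] for r in rows) for c in cols]
    print(f"{'total':>8}" + "".join(f"{v:8.3f}" for v in col_totals) + f"{1.0:8.3f}")

if __name__ == "__main__":
    # Example in Bayesian notation: P(A) = 0.3, P(B|A) = 0.9, P(B|not A) = 0.2.
    render(contingency_table(0.3, 0.9, 0.2))
```

The correctness guarantee sketched here is simply that all four cells are derived from the same three given probabilities, so the table and the original equations describe one joint distribution; MaRE's RST-based engine formalises this notion of structure-preserving transformation.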
Pages: 23850-23852
Page count: 3
Related Papers
50 records in total; the first 10 are listed below.
  • [1] Blown, Eric; Bryce, Tom G. K. Conceptual Coherence Revealed in Multi-Modal Representations of Astronomy Knowledge. INTERNATIONAL JOURNAL OF SCIENCE EDUCATION, 2010, 32(01): 31-67.
  • [2] Wang, Jiahao; Liu, Fang; Jiao, Licheng; Wang, Hao; Li, Shuo; Li, Lingling; Chen, Puhua; Liu, Xu. Multi-modal visual tracking based on textual generation. INFORMATION FUSION, 2024, 112.
  • [3] Wang, Zehan; Zhao, Yang; Cheng, Xize; Huang, Haifeng; Liu, Jiageng; Tang, Li; Li, Linjun; Wang, Yongqi; Yin, Aoxiong; Zhang, Ziang; Zhao, Zhou. Connecting Multi-modal Contrastive Representations. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023.
  • [4] Li, Yuandi; Ji, Hui; Yu, Fei; Cheng, Lechao; Che, Nan. Temporal multi-modal knowledge graph generation for link prediction. NEURAL NETWORKS, 2025, 185.
  • [5] Yang, Shuxin; Wu, Xian; Ge, Shen; Zheng, Zhuozhao; Zhou, S. Kevin; Xiao, Li. Radiology report generation with a learned knowledge base and multi-modal alignment. MEDICAL IMAGE ANALYSIS, 2023, 86.
  • [6] Zhu, Jiawen; Lai, Simiao; Chen, Xin; Wang, Dong; Lu, Huchuan. Visual Prompt Multi-Modal Tracking. 2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2023: 9516-9526.
  • [7] Novak, Marko. Visual as Multi-Modal Argumentation in Law. BRATISLAVA LAW REVIEW, 2021, 5(01): 91-110.
  • [8] Amano, Kaoru; Takemura, Hiromasa. Multi-modal measurement of the visual cortex. I-PERCEPTION, 2014, 5(04): 408-408.
  • [9] Liu, Ye; Li, Hui; Garcia-Duran, Alberto; Niepert, Mathias; Onoro-Rubio, Daniel; Rosenblum, David S. MMKG: Multi-modal Knowledge Graphs. SEMANTIC WEB, ESWC 2019, 2019, 11503: 459-474.
  • [10] Cheng, Bo; Zhu, Jia; Guo, Meimei. MultiJAF: Multi-modal joint entity alignment framework for multi-modal knowledge graph. NEUROCOMPUTING, 2022, 500: 581-591.