SymFormer: End-to-End Symbolic Regression Using Transformer-Based Architecture

Cited by: 4
Authors
Vastl, Martin [1 ,2 ]
Kulhanek, Jonas [1 ,3 ]
Kubalik, Jiri [1 ]
Derner, Erik [1 ]
Babuska, Robert [1 ,4 ]
Affiliations
[1] Czech Tech Univ, Czech Inst Informat Robot & Cybernet, Prague 16000, Czech Republic
[2] Charles Univ Prague, Fac Math & Phys, Prague 12116, Czech Republic
[3] Czech Tech Univ, Fac Elect Engn, Prague 16000, Czech Republic
[4] Delft Univ Technol, Dept Cognit Robot, NL-2628 CD Delft, Netherlands
Source
IEEE ACCESS | 2024 / Vol. 12
Keywords
Transformers; Mathematical models; Vectors; Symbols; Decoding; Optimization; Predictive models; Neural networks; Genetic programming; Computational complexity; Benchmark testing; Regression analysis; Symbolic regression; neural networks; transformers;
DOI
10.1109/ACCESS.2024.3374649
CLC Number
TP [Automation Technology, Computer Technology];
Subject Classification Number
0812 ;
Abstract
Many real-world systems can be naturally described by mathematical formulas. The task of automatically constructing formulas to fit observed data is called symbolic regression. Evolutionary methods such as genetic programming have been commonly used to solve symbolic regression tasks, but they have significant drawbacks, such as high computational complexity. Recently, neural networks have been applied to symbolic regression, among which the transformer-based methods seem to be the most promising. After training a transformer on a large number of formulas, the actual inference, i.e., finding a formula for new, unseen data, is very fast (on the order of seconds). This is considerably faster than state-of-the-art evolutionary methods. The main drawback of transformers is that they generate formulas without numerical constants, which have to be optimized separately, yielding suboptimal results. We propose a transformer-based approach called SymFormer, which predicts the formula by outputting the symbols and the constants simultaneously. This helps to generate formulas that fit the data more accurately. In addition, the constants provided by SymFormer serve as a good starting point for subsequent tuning via gradient descent to further improve the model accuracy. We show on several benchmarks that SymFormer outperforms state-of-the-art methods while having faster inference.
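The final refinement step described in the abstract, tuning the predicted constants by gradient descent on the fitting error, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the formula skeleton f(x) = c0 * sin(c1 * x), the initial constants, the learning rate, and the step count are all illustrative assumptions.

```python
import numpy as np

# Hypothetical skeleton predicted by a SymFormer-like model:
# f(x) = c0 * sin(c1 * x), with initial constants output alongside the symbols.
def f(x, c):
    return c[0] * np.sin(c[1] * x)

def grad(x, y, c):
    # Analytic gradient of the MSE loss mean((f(x, c) - y)^2) w.r.t. the constants.
    r = f(x, c) - y                                        # residuals
    dc0 = 2.0 * np.mean(r * np.sin(c[1] * x))
    dc1 = 2.0 * np.mean(r * c[0] * x * np.cos(c[1] * x))
    return np.array([dc0, dc1])

# Synthetic observations generated from the ground truth 1.5 * sin(2.0 * x).
rng = np.random.default_rng(0)
x = rng.uniform(-2.0, 2.0, 200)
y = 1.5 * np.sin(2.0 * x)

# Near-correct constants, standing in for the network's joint prediction;
# plain gradient descent then refines them toward (1.5, 2.0).
c = np.array([1.4, 1.9])
for _ in range(2000):
    c -= 0.05 * grad(x, y, c)
```

Because the network's constants already land near the optimum, a simple first-order method suffices here; starting from arbitrary constants instead, the same loss landscape has many poor local minima, which is why a good initial guess matters.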
Pages: 37840 - 37849
Page count: 10
Related Papers
50 items in total
  • [41] Multi-Encoder Learning and Stream Fusion for Transformer-Based End-to-End Automatic Speech Recognition
    Lohrenz, Timo
    Li, Zhengyang
    Fingscheidt, Tim
    INTERSPEECH 2021, 2021, : 2846 - 2850
  • [42] Improving Streaming End-to-End ASR on Transformer-based Causal Models with Encoder States Revision Strategies
    Li, Zehan
    Miao, Haoran
    Deng, Keqi
    Cheng, Gaofeng
    Tian, Sanli
    Li, Ta
    Yan, Yonghong
    INTERSPEECH 2022, 2022, : 1671 - 1675
  • [43] TLLFusion: An End-to-End Transformer-Based Method for Low-Light Infrared and Visible Image Fusion
    Lv, Guohua
    Fu, Xinyue
    Zhai, Yi
    Zhao, Guixin
    Gao, Yongbiao
    PATTERN RECOGNITION AND COMPUTER VISION, PT III, PRCV 2024, 2025, 15033 : 364 - 378
  • [44] Improving Transformer-based End-to-End Speech Recognition with Connectionist Temporal Classification and Language Model Integration
    Karita, Shigeki
    Soplin, Nelson Enrique Yalta
    Watanabe, Shinji
    Delcroix, Marc
    Ogawa, Atsunori
    Nakatani, Tomohiro
    INTERSPEECH 2019, 2019, : 1408 - 1412
  • [45] End-to-End lightweight Transformer-Based neural network for grasp detection towards fruit robotic handling
    Guo, Congmin
    Zhu, Chenhao
    Liu, Yuchen
    Huang, Renjun
    Cao, Boyuan
    Zhu, Qingzhen
    Zhang, Ranxin
    Zhang, Baohua
    COMPUTERS AND ELECTRONICS IN AGRICULTURE, 2024, 221
  • [46] CLTR: An End-to-End, Transformer-Based System for Cell Level Table Retrieval and Table Question Answering
    Pan, Feifei
    Canim, Mustafa
    Glass, Michael
    Gliozzo, Alfio
    Fox, Peter
    ACL-IJCNLP 2021: THE JOINT CONFERENCE OF THE 59TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS AND THE 11TH INTERNATIONAL JOINT CONFERENCE ON NATURAL LANGUAGE PROCESSING: PROCEEDINGS OF THE SYSTEM DEMONSTRATIONS, 2021, : 202 - 209
  • [47] Fast offline transformer-based end-to-end automatic speech recognition for real-world applications
    Oh, Yoo Rhee
    Park, Kiyoung
    Park, Jeon Gue
    ETRI JOURNAL, 2022, 44 (03) : 476 - 490
  • [48] CarcassFormer: an end-to-end transformer-based framework for simultaneous localization, segmentation and classification of poultry carcass defect
    Tran, Minh
    Truong, Sang
    Fernandes, Arthur F. A.
    Kidd, Michael T.
    Le, Ngan
    POULTRY SCIENCE, 2024, 103 (08)
  • [49] Identification of Geochemical Anomalies Using an End-to-End Transformer
    Yu, Shuyan
    Deng, Hao
    Liu, Zhankun
    Chen, Jin
    Xiao, Keyan
    Mao, Xiancheng
    NATURAL RESOURCES RESEARCH, 2024, 33 (03) : 973 - 994
  • [50] End-to-End Transformer Based Model for Image Captioning
    Wang, Yiyu
    Xu, Jungang
    Sun, Yingfei
    THIRTY-SIXTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FOURTH CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE / THE TWELVETH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2022, : 2585 - 2594