Generalisation and Domain Adaptation in GP with Gradient Descent for Symbolic Regression

Cited by: 0
Authors
Chen, Qi [1 ]
Xue, Bing [1 ]
Zhang, Mengjie [1 ]
Affiliations
[1] Victoria Univ Wellington, Sch Engn & Comp Sci, Wellington, New Zealand
Keywords
DOI
Not available
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Genetic programming (GP) has been widely and successfully applied to symbolic regression problems. Gradient descent has also been used within GP as a local search complementing the genetic beam search, further improving symbolic regression performance. However, most existing GP approaches with gradient descent (GPGD) for symbolic regression have been tested only on "conventional" symbolic regression problems, such as benchmark function approximation and practical engineering problems with a single (training) data set, and their effectiveness on unseen data from the same domain and from different domains has not been fully investigated. This paper designs a series of experimental objectives to investigate the effectiveness and efficiency of GPGD under various settings on a set of symbolic regression problems, evaluated on unseen data in the same domain and adapted to other domains. The results suggest that the existing GPGD method, which applies gradient descent to all evolved program trees three times at every generation, can perform very well on the training set itself, but generalises poorly to unseen data in the same domain and cannot be adapted to unseen data in an extended domain. Applying gradient descent only to the best program in the final generation of GP also improves performance over standard GP and generalises well on unseen data for some tasks in the same domain, but performs poorly on unseen data in an extended domain. Applying gradient descent to the top 20% of programs in the population generalises reasonably well on unseen data not only in the same domain but also in an extended domain.
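As an illustration of the mechanism discussed above (not code from the paper), the following minimal Python sketch applies a few gradient-descent steps to the numeric constants of candidate programs and, mirroring the best-performing variant reported here, refines only the top 20% of the population by training fitness. The quadratic stand-in "program", the finite-difference gradients, and all parameter values are hypothetical choices made for the sketch.

```python
import numpy as np

# Hypothetical stand-in for an evolved GP tree: f(x) = c0*x^2 + c1*x, where
# theta = (c0, c1) plays the role of the tree's numeric constant leaves.
def program(theta, x):
    c0, c1 = theta
    return c0 * x**2 + c1 * x

def mse(theta, x, y):
    # Training fitness: mean squared error on the training set.
    return np.mean((program(theta, x) - y) ** 2)

def gd_refine(theta, x, y, steps=3, lr=0.01, eps=1e-6):
    # A few gradient-descent steps on the constants; gradients are estimated
    # by central finite differences so the sketch works for any tree shape.
    theta = np.asarray(theta, dtype=float)
    for _ in range(steps):
        grad = np.zeros_like(theta)
        for i in range(theta.size):
            d = np.zeros_like(theta)
            d[i] = eps
            grad[i] = (mse(theta + d, x, y) - mse(theta - d, x, y)) / (2 * eps)
        theta -= lr * grad
    return theta

# Toy training data drawn from the target function 2x^2 - 3x plus noise.
rng = np.random.default_rng(0)
x = rng.uniform(-2.0, 2.0, 100)
y = 2 * x**2 - 3 * x + rng.normal(0.0, 0.1, 100)

# A toy "population" of constant vectors; in real GPGD each vector would
# belong to a distinct evolved program tree.
population = [rng.normal(0.0, 1.0, 2) for _ in range(10)]

# Refine only the top 20% of programs by training fitness; the variant that
# refines every tree three times per generation would instead call
# gd_refine on all members at each generation.
population.sort(key=lambda th: mse(th, x, y))
k = max(1, len(population) // 5)
population[:k] = [gd_refine(th, x, y) for th in population[:k]]

print("best training MSE after refinement:",
      min(mse(th, x, y) for th in population))
```

In the actual GPGD setting the constants live inside full evolved trees within the evolutionary loop; the sketch only mirrors the selection policy for which programs receive gradient updates.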
Pages: 1137 - 1144
Page count: 8
Related Papers
50 records in total
  • [1] Parametrizing GP Trees for Better Symbolic Regression Performance through Gradient Descent
    Verdu, Federico Julian Camerota
    Pietropolli, Gloria
    Manzoni, Luca
    Castelli, Mauro
    [J]. PROCEEDINGS OF THE 2023 GENETIC AND EVOLUTIONARY COMPUTATION CONFERENCE COMPANION, GECCO 2023 COMPANION, 2023, : 619 - 622
  • [2] Bloat and Generalisation in Symbolic Regression
    Dick, Grant
    [J]. SIMULATED EVOLUTION AND LEARNING (SEAL 2014), 2014, 8886 : 491 - 502
  • [3] Granular regression with a gradient descent method
    Chen, Yumin
    Miao, Duoqian
    [J]. INFORMATION SCIENCES, 2020, 537 : 246 - 260
  • [4] Guest Editorial: Visual Domain Adaptation and Generalisation
    Wang, Qi
    Yu, Jun
    Liu, Tongliang
    Liu, Weifeng
    [J]. IET COMPUTER VISION, 2019, 13 (02) : 87 - 89
  • [5] Choosing function sets with better generalisation performance for symbolic regression models
Nicolau, Miguel
    Agapitos, Alexandros
    [J]. GENETIC PROGRAMMING AND EVOLVABLE MACHINES, 2021, 22 (01) : 73 - 100
  • [6] Improving Generalisation of Genetic Programming for Symbolic Regression with Structural Risk Minimisation
    Chen, Qi
    Xue, Bing
    Shang, Lin
    Zhang, Mengjie
    [J]. GECCO'16: PROCEEDINGS OF THE 2016 GENETIC AND EVOLUTIONARY COMPUTATION CONFERENCE, 2016, : 709 - 716
  • [7] Domain Adaptation in Regression
    Cortes, Corinna
    Mohri, Mehryar
    [J]. ALGORITHMIC LEARNING THEORY, 2011, 6925 : 308 - 323
  • [8] Stochastic Gradient Descent Meets Distribution Regression
    Muecke, Nicole
    [J]. 24TH INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS (AISTATS), 2021, 130
  • [9] Local gain adaptation in stochastic gradient descent
Schraudolph, Nicol N.
    [J]. NINTH INTERNATIONAL CONFERENCE ON ARTIFICIAL NEURAL NETWORKS (ICANN99), VOLS 1 AND 2, 1999, (470) : 569 - 574