Metaheuristics and machine learning: an approach with reinforcement learning assisting neural architecture search

Cited by: 0
Authors:
Venske, Sandra Mara Scos [1 ,2 ]
de Almeida, Carolina Paula [2 ]
Delgado, Myriam Regattieri [1 ]
Affiliations:
[1] UTFPR, Grad Program Elect & Comp Engn, Ave Sete Setembro 3165, BR-80230901 Curitiba, Parana, Brazil
[2] Univ Estadual Centro Oeste, Dept Comp Sci, Alameda Elio Antonio Dalla Vecchia 838, BR-85040167 Guarapuava, Parana, Brazil
Keywords:
Optimization; Protein structure prediction; Multilayer perceptron; Thompson sampling; PREDICTION; ALGORITHM;
DOI
10.1007/s10732-024-09526-1
Chinese Library Classification (CLC):
TP18 [Artificial Intelligence Theory];
Discipline classification codes:
081104; 0812; 0835; 1405;
Abstract
Metaheuristics (MHs) are techniques widely used for solving complex optimization problems. In recent years, interest in combining MHs and machine learning (ML) has grown. This integration can occur mainly in two ways: ML-in-MH and MH-in-ML. In the present work, we combine the techniques in both ways (ML-in-MH-in-ML), providing an approach in which ML is used to improve the performance of an evolutionary algorithm (EA) whose solutions encode the parameters of an ML model, an artificial neural network (ANN). Our approach, called TS$_{in}$EA$_{in}$ANN, employs a reinforcement learning neighborhood (RLN) mutation based on Thompson sampling (TS). TS is a parameterless reinforcement learning method, used here to boost EA performance. In the experiments, every candidate ANN solves a regression problem known as protein structure prediction deviation. We consider two protein datasets, one with 16,382 samples and the other with 45,730 samples.
The results show that TS$_{in}$EA$_{in}$ANN performs significantly better than a canonical genetic algorithm (GA$_{in}$ANN) and the evolutionary algorithm without reinforcement learning (EA$_{in}$ANN). Analyses of parameter frequencies are also performed to compare the approaches. Finally, comparisons with the literature show that, except for one particular case in the largest dataset, TS$_{in}$EA$_{in}$ANN outperforms approaches considered state of the art for the addressed datasets.
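The abstract describes Thompson sampling as a parameterless reinforcement learning method for guiding an EA's mutation. A common way to realize this idea is a Beta-Bernoulli bandit over a set of mutation operators. The following is a minimal, generic sketch of that mechanism, not the paper's exact RLN mutation; the class name, reward rule (1 if the offspring improved, 0 otherwise), and operator indexing are assumptions for illustration:

```python
import random

class ThompsonOperatorSelector:
    """Beta-Bernoulli Thompson sampling over a set of mutation operators.

    Each operator keeps a Beta(wins+1, losses+1) posterior over its
    probability of producing an improving offspring; at each step the
    operator whose posterior sample is largest gets applied.
    """

    def __init__(self, n_operators):
        self.wins = [0] * n_operators    # offspring that improved fitness
        self.losses = [0] * n_operators  # offspring that did not

    def select(self):
        # Draw one sample from each operator's Beta posterior and
        # return the index of the operator with the largest draw.
        samples = [random.betavariate(w + 1, l + 1)
                   for w, l in zip(self.wins, self.losses)]
        return max(range(len(samples)), key=samples.__getitem__)

    def update(self, op, improved):
        # Binary reward: did the mutated solution improve on its parent?
        if improved:
            self.wins[op] += 1
        else:
            self.losses[op] += 1
```

Because the posterior concentrates on operators that keep producing improvements while still occasionally sampling the others, the selector balances exploitation and exploration with no tunable parameters, which matches the "parameterless" property the abstract highlights.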
Pages: 199-224 (26 pages)