Machine learning facilitated business intelligence (Part II): Neural networks optimization techniques and applications

Cited by: 21
Authors
Khan, Waqar Ahmed [1 ]
Chung, S. H. [1 ]
Awan, Muhammad Usman [2 ]
Wen, Xin [1 ]
Affiliations
[1] Hong Kong Polytech Univ, Dept Ind & Syst Engn, Kowloon, Hong Kong, Peoples R China
[2] Univ Punjab, Inst Qual & Technol Management, Lahore, Pakistan
Keywords
Optimization techniques; Data analytics; Machine learning; Feedforward neural network; Industrial management; UNIVERSAL APPROXIMATION; FEEDFORWARD NETWORKS; WHALE OPTIMIZATION; SWARM OPTIMIZATION; ALGORITHM; REGRESSION; CLASSIFICATION; VECTOR; HYBRID; RISK;
DOI
10.1108/IMDS-06-2019-0351
Chinese Library Classification (CLC) number
TP39 [Computer applications]
Subject classification codes
081203; 0835
Abstract
Purpose - The purpose of this paper is three-fold: to review the categories, focusing mainly on the optimization algorithms (techniques), that are needed to improve the generalization performance and learning speed of the feedforward neural network (FNN); to discover the change in research trends by analyzing all six categories (i.e. gradient learning algorithms for network training, gradient-free learning algorithms, optimization algorithms for learning rate, bias and variance (underfitting and overfitting) minimization algorithms, constructive topology neural networks, metaheuristic search algorithms) collectively; and to recommend new research directions for researchers and to help users understand the algorithms' real-world applications in solving complex management, engineering and health sciences problems.
Design/methodology/approach - The FNN has gained much attention from researchers over the last few decades as a tool for making more informed decisions. The literature survey focuses on the learning algorithms and the optimization techniques proposed in the last three decades. This paper (Part II) is an extension of Part I. For the sake of simplicity, the paper entitled "Machine learning facilitated business intelligence (Part I): Neural networks learning algorithms and applications" is referred to as Part I. To keep the study consistent with Part I, the approach and survey methodology in this paper are kept similar to those in Part I.
Findings - Combining the work performed in Part I, the authors studied a total of 80 articles identified through searches on popular keywords. The FNN learning algorithms and optimization techniques found in the selected literature are classified into six categories based on their problem identification, mathematical model, technical reasoning and proposed solution. Part I reviewed the two categories focusing on learning algorithms (i.e. gradient learning algorithms for network training and gradient-free learning algorithms), together with their real-world applications in management, engineering and health sciences. The current paper, Part II, therefore studies in detail the remaining four categories, which explore optimization techniques (i.e. optimization algorithms for learning rate, bias and variance (underfitting and overfitting) minimization algorithms, constructive topology neural networks, and metaheuristic search algorithms). The explanation of each algorithm is enriched by discussing its technical merits, limitations and applications within its respective category. Finally, the authors recommend new future research directions that can contribute to strengthening the literature.
Research limitations/implications - The FNN's contributions are increasing rapidly because of its ability to support reliably informed decisions. As with the learning algorithms reviewed in Part I, the focus here is to enrich the comprehensive study by reviewing the remaining categories, which concern optimization techniques. However, future efforts may be needed to incorporate other algorithms into the six identified categories, or to suggest new categories, so that the shift in research trends can be monitored continuously.
Practical implications - The authors studied the shift in research trends over three decades by collectively analyzing the learning algorithms and optimization techniques together with their applications. This may help researchers to identify future research gaps for improving generalization performance and learning speed, and users to understand the application areas of the FNN. For instance, over the last three decades research contributions in the FNN have shifted from complex gradient-based algorithms to gradient-free algorithms, from fixing the number of hidden units by trial and error in a fixed topology to cascade topologies, from initial guesses of hyperparameters to their analytical calculation, and toward algorithms that converge at a global minimum rather than a local minimum.
Originality/value - Existing literature surveys compare the algorithms, identify their application areas and focus on specific techniques, and so they may not be able to identify algorithm categories, shifts in research trends over time, the application areas analyzed most frequently, common research gaps and collective future directions. Parts I and II attempt to overcome these limitations by classifying the articles into six categories covering a wide range of algorithms proposed to improve the FNN's generalization performance and convergence rate. The classification of algorithms into six categories helps to analyze the shift in research trends, which makes the classification scheme significant and innovative.
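To give a concrete feel for the categories surveyed above, the short sketch below is not taken from the paper; all data, names and parameter choices are illustrative assumptions. It combines three of the reviewed ideas: a constructive topology that adds hidden units one at a time, gradient-free random selection of each new unit's input weights, and an analytical least-squares calculation of the output weights, with a validation set deciding which network size generalizes best.

```python
# Minimal, self-contained sketch (NOT from the surveyed paper): a constructive
# single-hidden-layer feedforward network. Each step adds one hidden unit with
# randomly drawn (gradient-free) input weights, recomputes the output weights
# analytically by least squares, and a validation set decides which network
# size generalizes best. All data, names and settings are illustrative.
import numpy as np

rng = np.random.default_rng(seed=0)

# Synthetic regression task standing in for a real application.
X = rng.uniform(-1.0, 1.0, size=(400, 2))
y = np.sin(np.pi * X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.05 * rng.normal(size=400)
X_train, X_val = X[:300], X[300:]
y_train, y_val = y[:300], y[300:]

def hidden_layer(X, W, b):
    """Sigmoid activations of the hidden layer."""
    return 1.0 / (1.0 + np.exp(-(X @ W + b)))

W = np.empty((X.shape[1], 0))   # input-to-hidden weights, one column per unit
b = np.empty((1, 0))            # hidden-unit biases
best_val_mse, best_model = np.inf, None

for n_units in range(1, 31):                      # grow up to 30 hidden units
    # Gradient-free step: draw the new unit's parameters at random.
    W = np.hstack([W, rng.normal(size=(X.shape[1], 1))])
    b = np.hstack([b, rng.normal(size=(1, 1))])

    # Analytical step: output weights solved in closed form by least squares.
    H = hidden_layer(X_train, W, b)
    beta, *_ = np.linalg.lstsq(H, y_train, rcond=None)

    # Bias-variance control: keep the topology with the lowest validation error.
    val_mse = np.mean((hidden_layer(X_val, W, b) @ beta - y_val) ** 2)
    if val_mse < best_val_mse:
        best_val_mse = val_mse
        best_model = (W.copy(), b.copy(), beta.copy())

W_best, b_best, beta_best = best_model
print(f"kept {W_best.shape[1]} hidden units, validation MSE = {best_val_mse:.4f}")
```

The least-squares step plays the role of the analytical parameter calculation mentioned above, and the validation check is a simple stand-in for the bias and variance (underfitting and overfitting) minimization category; in the metaheuristic category surveyed by the paper, the random draw would instead be replaced by a guided search such as particle swarm or whale optimization.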
Pages: 128-163
Page count: 36