Integrating Gaussian Process with Reinforcement Learning for Adaptive Service Composition

Cited by: 7
|
Authors
Wang, Hongbing [1 ,2 ]
Wu, Qin [1 ,2 ]
Chen, Xin [1 ,2 ]
Yu, Qi [3 ]
Affiliations
[1] Southeast Univ, Sch Comp Sci & Engn, Nanjing, Jiangsu, Peoples R China
[2] Southeast Univ, Key Lab Comp Network & Informat Integrat, Nanjing, Jiangsu, Peoples R China
[3] Rochester Inst Tech, Coll Comp & Informat Sci, Rochester, NY USA
Source
Keywords
DOI
10.1007/978-3-662-48616-0_13
CLC Classification Number
TP3 [Computing Technology, Computer Technology];
Discipline Classification Code
0812;
Abstract
Service composition offers a powerful software paradigm to build complex, value-added applications by exploiting a service-oriented architecture. However, frequent changes in the internal and external environment demand that a composition solution be adaptive. Meanwhile, increasingly complex user requirements and the rapid growth of the composition space give rise to a scalability issue. To address these key challenges, we propose a new service composition scheme that integrates a Gaussian process with reinforcement learning for adaptive service composition. Built on an off-policy Q-learning algorithm, it uses kernel function approximation to predict the distribution of the objective function value, providing strong expressive power and generalization ability. The experimental results demonstrate that our method clearly outperforms the standard Q-learning solution for service composition.
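The idea summarized in the abstract — approximating the Q-function with a kernel-based Gaussian process instead of a lookup table, inside an off-policy Q-learning/fitted-Q loop — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the toy two-stage "composition" MDP, the RBF kernel, and all hyperparameters (length scale, noise, discount) are assumptions made for the example.

```python
import math

def rbf(x, y, ls=1.0):
    # Squared-exponential kernel on (state, action) feature vectors
    return math.exp(-sum((a - b) ** 2 for a, b in zip(x, y)) / (2 * ls ** 2))

def solve(A, b):
    # Gauss-Jordan elimination with partial pivoting (small systems only)
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c and M[r][c]:
                f = M[r][c] / M[c][c]
                for k in range(c, n + 1):
                    M[r][k] -= f * M[c][k]
    return [M[i][n] / M[i][i] for i in range(n)]

class GPQ:
    """GP regression over (state, action) pairs approximating Q-values."""
    def __init__(self, noise=1e-2):
        self.X, self.y, self.noise, self.alpha = [], [], noise, []

    def fit(self):
        n = len(self.X)
        K = [[rbf(self.X[i], self.X[j]) + (self.noise if i == j else 0.0)
              for j in range(n)] for i in range(n)]
        self.alpha = solve(K, self.y)      # alpha = (K + noise*I)^-1 y

    def predict(self, x):
        # GP posterior mean: k(x, X) @ alpha
        return sum(a * rbf(x, xi) for a, xi in zip(self.alpha, self.X))

# Toy two-stage composition MDP (hypothetical): at each stage pick one of
# two candidate services; service 1 has the better QoS reward.
def step(s, a):
    r = 1.0 if a == 1 else 0.2
    return s + 1, r, s + 1 >= 2            # next state, reward, terminal?

gamma = 0.9
gp = GPQ()
for sweep in range(5):                     # fitted-Q-style off-policy sweeps
    X, y = [], []
    for s in (0, 1):
        for a in (0, 1):
            s2, r, done = step(s, a)
            if done or not gp.alpha:
                target = r
            else:                          # bootstrapped Q-learning target
                target = r + gamma * max(
                    gp.predict((float(s2), float(b))) for b in (0, 1))
            X.append((float(s), float(a)))
            y.append(target)
    gp.X, gp.y = X, y
    gp.fit()
```

After a few sweeps the GP posterior mean should rank the better service higher at stage 0, i.e. `gp.predict((0.0, 1.0)) > gp.predict((0.0, 0.0))`; because the kernel generalizes across nearby (state, action) pairs, the same machinery scales to composition spaces far too large for a tabular Q-function.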
Pages: 203-217
Page count: 15
Related Papers
50 items in total
  • [21] Nonlinear Inverse Reinforcement Learning with Mutual Information and Gaussian Process
    Li, De C.
    He, Yu Q.
    Fu, Feng
    [J]. 2014 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND BIOMIMETICS IEEE-ROBIO 2014, 2014, : 1445 - 1450
  • [22] Gaussian Process Reinforcement Learning for Fast Opportunistic Spectrum Access
    Yan, Zun
    Cheng, Peng
    Chen, Zhuo
    Li, Yonghui
    Vucetic, Branka
    [J]. IEEE TRANSACTIONS ON SIGNAL PROCESSING, 2020, 68 : 2613 - 2628
  • [23] Reinforcement learning for continuous spaces based on Gaussian process classifier
    Wang, Xue-Song
    Zhang, Yi-Yang
    Cheng, Yu-Hu
    [J]. Tien Tzu Hsueh Pao/Acta Electronica Sinica, 2009, 37 (06): : 1153 - 1158
  • [24] Gaussian Process Reinforcement Learning for Fast Opportunistic Spectrum Access
    Yan, Zun
    Cheng, Peng
    Chen, Zhuo
    Li, Yonghui
    Vucetic, Branka
    [J]. 2019 IEEE GLOBAL COMMUNICATIONS CONFERENCE (GLOBECOM), 2019.
  • [25] Transfer Learning for Regression through Adaptive Gaussian Process
    Xu, Changhua
    Yang, Kai
    Chen, Xue
    Luo, Xiangfeng
    Yu, Hang
    [J]. 2022 IEEE 34TH INTERNATIONAL CONFERENCE ON TOOLS WITH ARTIFICIAL INTELLIGENCE, ICTAI, 2022, : 42 - 47
  • [26] Multi-Objective Service Composition Using Reinforcement Learning
    Moustafa, Ahmed
    Zhang, Minjie
    [J]. SERVICE-ORIENTED COMPUTING, ICSOC 2013, 2013, 8274 : 298 - 312
  • [27] Preference-aware Web Service Composition by Reinforcement Learning
    Wang, Hongbing
    Tang, Pingping
    [J]. 20TH IEEE INTERNATIONAL CONFERENCE ON TOOLS WITH ARTIFICIAL INTELLIGENCE, VOL 2, PROCEEDINGS, 2008, : 379 - 386
  • [28] Adaptive Metro Service Schedule and Train Composition With a Proximal Policy Optimization Approach Based on Deep Reinforcement Learning
    Ying, Cheng-Shuo
    Chow, Andy H. F.
    Wang, Yi-Hui
    Chin, Kwai-Sang
    [J]. IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 2022, 23 (07) : 6895 - 6906
  • [29] Multi-agent deep reinforcement learning for adaptive coordinated metro service operations with flexible train composition
    Ying, Cheng-Shuo
    Chow, Andy H. F.
    Nguyen, Hoa T. M.
    Chin, Kwai-Sang
    [J]. TRANSPORTATION RESEARCH PART B-METHODOLOGICAL, 2022, 161 : 36 - 59
  • [30] Reinforcement learning with Gaussian process regression using variational free energy
    Kameda, Kiseki
    Tanaka, Fuyuhiko
    [J]. JOURNAL OF INTELLIGENT SYSTEMS, 2023, 32 (01)