Energy-saving Service Offloading for the Internet of Medical Things Using Deep Reinforcement Learning

Cited: 8
Authors
Jiang, Jielin [1 ,2 ]
Guo, Jiajie [3 ]
Khan, Maqbool [4 ,5 ]
Cui, Yan [6 ]
Lin, Wenmin [7 ]
Affiliations
[1] Nanjing Univ Informat Sci & Technol, Sch Comp & Software, Jiangsu Collaborat Innovat Ctr Atmospher Environm, Nanjing, Peoples R China
[2] Nanjing Univ, State Key Lab Novel Software Technol, Nanjing, Peoples R China
[3] Nanjing Univ Informat Sci & Technol, Sch Comp & Software, Nanjing, Peoples R China
[4] Software Competence Ctr Hagenberg GmbH, Softwarepk, Austria
[5] SPCAI Pak Austria Fachhochsch, Inst Appl Sci & Technol, Haripur, Pakistan
[6] Nanjing Normal Univ Special Educ, Coll Math & Informat Sci, Nanjing, Peoples R China
[7] Hangzhou Normal Univ, Inst VR & Intelligent Syst, Alibaba Business Sch, Hangzhou, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Service offloading; asynchronous advantage actor-critic; internet of medical things; deep reinforcement learning; ARTIFICIAL-INTELLIGENCE; RESOURCE-ALLOCATION; EDGE; CLOUD;
DOI
10.1145/3560265
CLC Number
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
As a critical branch of the Internet of Things (IoT) in the medical industry, the Internet of Medical Things (IoMT) significantly improves the quality of healthcare through real-time monitoring at low medical cost. Benefiting from edge and cloud computing, IoMT gains additional computing and storage resources near the terminal to meet the low-delay requirements of computation-intensive services. However, offloading services from health monitoring units (HMUs) to edge servers incurs additional energy consumption. Fortunately, artificial intelligence (AI), which has developed rapidly in recent years, has proven effective in resource allocation applications. Taking both energy consumption and delay into account, we propose an energy-aware service offloading algorithm, named ECAC, for an end-edge-cloud collaborative IoMT system based on Asynchronous Advantage Actor-Critic (A3C). Technically, ECAC exploits the structural similarity between the naturally distributed IoMT system and A3C, whose parameters are updated asynchronously. Moreover, owing to its delay-sensitivity mechanism and time-energy correction, ECAC adapts dynamically to diverse service types and system requirements. Finally, the effectiveness of ECAC for IoMT is demonstrated on real data.
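The abstract describes the core of ECAC: actor-critic workers that map each service's state to an offloading decision, with a shared reward penalizing both delay and energy, and asynchronous pushes of worker gradients into a global network. The sketch below is only an illustration of that idea under assumed details, not the authors' implementation; the toy environment, state features, three-way action set (local/edge/cloud), and the 0.6/0.4 delay-energy weights are hypothetical.

```python
# Minimal A3C-style sketch for energy/delay-aware offloading (illustrative only).
# Environment dynamics, state features, and reward weights are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyOffloadEnv:
    """Hypothetical HMU environment: state = (task size, channel gain, queue length)."""
    def __init__(self):
        self.state = torch.rand(3)
    def step(self, action):
        # action: 0 = local execution, 1 = offload to edge, 2 = offload to cloud
        delay = float(torch.rand(1)) * (1 + action)     # stand-in delay model
        energy = float(torch.rand(1)) * (3 - action)    # stand-in energy model
        reward = -(0.6 * delay + 0.4 * energy)          # assumed time-energy weighting
        self.state = torch.rand(3)
        return self.state, reward

class ActorCritic(nn.Module):
    def __init__(self, n_state=3, n_action=3, hidden=64):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(n_state, hidden), nn.ReLU())
        self.policy = nn.Linear(hidden, n_action)  # actor head
        self.value = nn.Linear(hidden, 1)          # critic head
    def forward(self, s):
        h = self.body(s)
        return F.softmax(self.policy(h), dim=-1), self.value(h)

def worker_update(global_net, local_net, optimizer, env, steps=20, gamma=0.9):
    """One worker's rollout and gradient push; in full A3C several such workers
    (e.g., one per HMU or edge region) run this loop asynchronously."""
    local_net.load_state_dict(global_net.state_dict())  # sync local copy
    s = env.state
    loss = 0.0
    for _ in range(steps):
        probs, value = local_net(s)
        dist = torch.distributions.Categorical(probs)
        a = dist.sample()
        s_next, r = env.step(a.item())
        _, value_next = local_net(s_next)
        advantage = r + gamma * value_next.detach() - value
        # critic loss (squared advantage) + actor loss (policy gradient)
        loss = loss + advantage.pow(2) - dist.log_prob(a) * advantage.detach()
        s = s_next
    optimizer.zero_grad()
    local_net.zero_grad()
    loss.backward()
    # copy local gradients into the global network, then apply them
    for gp, lp in zip(global_net.parameters(), local_net.parameters()):
        gp.grad = lp.grad
    optimizer.step()

if __name__ == "__main__":
    global_net = ActorCritic()
    local_net = ActorCritic()
    opt = torch.optim.Adam(global_net.parameters(), lr=1e-3)
    env = ToyOffloadEnv()
    for episode in range(5):
        worker_update(global_net, local_net, opt, env)
```

In this simplified form a single worker pushes gradients synchronously; the asynchronous character of A3C, which the abstract maps onto the distributed IoMT system, comes from running many such workers in parallel against the shared global network.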
Pages: 20
Related Papers
50 items in total
  • [31] Energy-saving operation in urban rail transit: A deep reinforcement learning approach with speed optimization
    Wang, Dahan
    Wu, Jianjun
    Wei, Yun
    Chang, Ximing
    Yin, Haodong
    TRAVEL BEHAVIOUR AND SOCIETY, 2024, 36
  • [32] A hybrid deep reinforcement learning ensemble optimization model for heat load energy-saving prediction
    Sun, Jiawang
    Gong, Mingju
    Zhao, Yin
    Han, Cuitian
    Jing, Lei
    Yang, Peng
    JOURNAL OF BUILDING ENGINEERING, 2022, 58
  • [33] Deep Reinforcement Learning for Intelligent Internet of Vehicles: An Energy-Efficient Computational Offloading Scheme
    Ning, Zhaolong
    Dong, Peiran
    Wang, Xiaojie
    Guo, Liang
    Rodrigues, Joel
    Kong, Xiangjie
    Huang, Jun
    Kwok, Ricky Y. K.
    IEEE TRANSACTIONS ON COGNITIVE COMMUNICATIONS AND NETWORKING, 2019, 5 (04) : 1060 - 1072
  • [34] Deep Reinforcement Learning-Based Computation Offloading and Optimal Resource Allocation in Industrial Internet of Things with NOMA
    Gao, Haofeng
    Guo, Xing
    2022 11TH INTERNATIONAL CONFERENCE ON COMMUNICATIONS, CIRCUITS AND SYSTEMS (ICCCAS 2022), 2022, : 198 - 203
  • [35] Dynamic Edge Computation Offloading for Internet of Vehicles With Deep Reinforcement Learning
    Yao, Liang
    Xu, Xiaolong
    Bilal, Muhammad
    Wang, Huihui
    IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 2023, 24 (11) : 12991 - 12999
  • [36] A Reinforcement Learning-Based Service Model for the Internet of Things
    Cabrera, Christian
    Clarke, Siobhan
    SERVICE-ORIENTED COMPUTING (ICSOC 2021), 2021, 13121 : 790 - 799
  • [37] Left Ventricle Contouring in Cardiac Images in the Internet of Medical Things via Deep Reinforcement Learning
    Yin, Sixing
    Wang, Kaiyue
    Han, Yameng
    Pan, Jundong
    Wang, Yining
    Li, Shufang
    Yu, F. Richard
    IEEE INTERNET OF THINGS JOURNAL, 2023, 10 (20) : 17705 - 17717
  • [38] Safe, Efficient, Comfort, and Energy-saving Automated Driving through Roundabout Based on Deep Reinforcement Learning
    Yuan, Henan
    Li, Penghui
    van Arem, Bart
    Kang, Liujiang
    Farah, Haneen
    Dong, Yongqi
    2023 IEEE 26TH INTERNATIONAL CONFERENCE ON INTELLIGENT TRANSPORTATION SYSTEMS, ITSC, 2023, : 6074 - 6079
  • [39] Prism blockchain enabled Internet of Things with deep reinforcement learning
    Gadiraju, Divija Swetha
    Aggarwal, Vaneet
    BLOCKCHAIN-RESEARCH AND APPLICATIONS, 2024, 5 (3)
  • [40] Value-based multi-agent deep reinforcement learning for collaborative computation offloading in internet of things networks
    Li, Han
    Meng, Shunmei
    Shang, Jing
    Huang, Anqi
    Cai, Zhicheng
    WIRELESS NETWORKS, 2023, 30 (8) : 6915 - 6928