Deep Reinforcement Learning-Based Multireconfigurable Intelligent Surface for MEC Offloading

Cited: 0
Authors
Qu, Long [1 ]
Huang, An [1 ]
Pan, Junqi [2 ]
Dai, Cheng [2 ]
Garg, Sahil [3 ]
Hassan, Mohammad Mehedi [4 ]
Affiliations
[1] Ningbo Univ, Fac Elect Engn & Comp Sci, Ningbo 315211, Peoples R China
[2] Sichuan Univ, Sch Comp Sci, Chengdu 610042, Peoples R China
[3] Ecole Technol Super, Dept Elect Engn, Montreal, PQ H3C 1K3, Canada
[4] King Saud Univ, Coll Comp & Informat Sci, Dept Informat Syst, Riyadh 11543, Saudi Arabia
Funding
Zhejiang Provincial Natural Science Foundation; National Natural Science Foundation of China;
Keywords
EDGE; EFFICIENT; DESIGN;
DOI
10.1155/2024/2960447
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Computational offloading in mobile edge computing (MEC) systems provides an efficient solution for resource-intensive applications on devices. However, the frequent communication between devices and edge servers increases the traffic within the network, thereby hindering significant improvements in latency. Furthermore, the benefits of MEC cannot be fully realized when the communication link utilized for offloading tasks experiences severe attenuation. Fortunately, reconfigurable intelligent surfaces (RISs) can mitigate propagation-induced impairments by adjusting the phase shifts imposed on the incident signals using their passive reflecting elements. This paper investigates the performance gains achieved by deploying multiple RISs in MEC systems under energy-constrained conditions to minimize the overall system latency. Considering the high coupling among variables such as the selection of multiple RISs, optimization of their phase shifts, transmit power, and MEC offloading volume, the problem is formulated as a nonconvex problem. We propose two approaches to address this problem. First, we employ an alternating optimization approach based on semidefinite relaxation (AO-SDR) to decompose the original problem into two subproblems, enabling the alternating optimization of multi-RIS communication and MEC offloading volume. Second, due to its capability to model and learn the optimal phase adjustment strategies adaptively in dynamic and uncertain environments, deep reinforcement learning (DRL) offers a promising approach to enhance the performance of phase optimization strategies. We leverage DRL to address the joint design of MEC-offloading volume and multi-RIS communication. Extensive simulations and numerical analysis results demonstrate that compared to conventional MEC systems without RIS assistance, the multi-RIS-assisted schemes based on the AO-SDR and DRL methods achieve a reduction in latency by 23.5% and 29.6%, respectively.
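The alternating optimization described in the abstract can be illustrated with a minimal sketch. This is not the paper's AO-SDR algorithm: it assumes a toy single-user, single-RIS model with made-up channel, bandwidth, and CPU values, and it replaces the semidefinite-relaxation phase step with the closed-form phase alignment that happens to be optimal for this simplified setting. It only shows the structure of alternating between the RIS phase shifts and the offloading volume.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy single-user, single-RIS setup (all values are assumptions, not from the paper)
N = 16                                               # RIS reflecting elements
h_d = (rng.normal() + 1j * rng.normal()) * 0.1       # weak direct device-server link
h_r = rng.normal(size=N) + 1j * rng.normal(size=N)   # cascaded device-RIS-server gains
B, P, sigma2 = 1e6, 0.1, 1e-9                        # bandwidth (Hz), tx power (W), noise (W)
L_bits, f_local, f_edge = 1e6, 1e8, 1e9              # task size (bits), CPU speeds (cycles/s)
cycles_per_bit = 100

def rate(theta):
    """Achievable rate for phase-shift vector theta (Shannon formula)."""
    g = h_d + np.sum(h_r * np.exp(1j * theta))       # effective channel
    return B * np.log2(1 + P * abs(g) ** 2 / sigma2)

def latency(rho, theta):
    """Overall latency when a fraction rho of bits is offloaded, rest runs locally."""
    t_local = (1 - rho) * L_bits * cycles_per_bit / f_local
    t_off = rho * L_bits / rate(theta) + rho * L_bits * cycles_per_bit / f_edge
    return max(t_local, t_off)                       # local and offloaded parts in parallel

# Alternating optimization over (theta, rho)
theta, rho = np.zeros(N), 0.5
for _ in range(10):
    # (1) Phase step: align every reflected path with the direct link
    theta = np.angle(h_d) - np.angle(h_r)
    # (2) Offloading step: 1-D grid search over the offloaded fraction
    grid = np.linspace(0.0, 1.0, 101)
    rho = grid[np.argmin([latency(r, theta) for r in grid])]

print(f"latency = {latency(rho, theta):.4f} s, offloaded fraction = {rho:.2f}")
```

In the real problem, the phase step couples multiple RISs, RIS selection, and transmit power, which is why the paper resorts to SDR and, alternatively, to a DRL agent that learns the phase adjustments directly.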
Pages: 16