Optimizing Microservice Deployment in Edge Computing with Large Language Models: Integrating Retrieval Augmented Generation and Chain of Thought Techniques

Cited by: 0
|
Authors
Feng, Kan [1 ]
Luo, Lijun [1 ]
Xia, Yongjun [2 ]
Luo, Bin [2 ]
He, Xingfeng [1 ]
Li, Kaihong [3 ]
Zha, Zhiyong [4 ]
Xu, Bo [1 ,5 ]
Peng, Kai [1 ]
Affiliations
[1] Huazhong Univ Sci & Technol, Sch Elect Informat & Commun, Hubei Key Lab Smart Internet Technol, Wuhan 430074, Peoples R China
[2] Hubei Huazhong Elect Power Technol Dev Co Ltd, Wuhan 430079, Peoples R China
[3] Wuhan Univ, Elect Informat Sch, Wuhan 430072, Peoples R China
[4] State Grid Informat Telecommun Co Ltd, Wuhan 430048, Peoples R China
[5] Hubei ChuTianYun Co Ltd, Wuhan 430076, Peoples R China
Source
SYMMETRY-BASEL | 2024, Vol. 16, No. 11
Keywords
large language models; retrieval augmented generation; microservice deployment; mobile edge computing;
DOI
10.3390/sym16111470
Chinese Library Classification (CLC)
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biosciences]; N [General Natural Sciences];
Discipline Codes
07; 0710; 09
Abstract
Large Language Models (LLMs) have demonstrated impressive capabilities in automatically generating code from natural language instructions. We observed that in the microservice models of edge computing, the problem of deployment latency optimization can be transformed into an NP-hard mathematical optimization problem. In the real world, however, deployment strategies at the edge often require immediate updates, while human-engineered code tends to lag behind. To bridge this gap, we integrated LLMs into the decision-making process for microservice deployment. We first constructed a private Retrieval Augmented Generation (RAG) database containing prior knowledge. We then employed carefully designed step-by-step inductive instructions and used the chain-of-thought (CoT) technique to enable the LLM to learn, reason, reflect, and regenerate. We decomposed the microservice deployment latency optimization problem into a collection of granular sub-problems (described in natural language) and progressively provided instructions to the fine-tuned LLM to generate corresponding code blocks. The generated code blocks underwent integration and consistency assessment. For comparison, we also prompted the LLM to generate code without the RAG database. We executed the resulting code and the comparison algorithms under identical operational environments and simulation parameters and analyzed the results rigorously. Compared with traditional algorithms, our fine-tuned model reduced latency by 22.8% when handling surges in request flows, 37.8% when managing complex microservice types, and 39.5% when processing increased network nodes. Moreover, our approach showed marked latency improvements over LLMs that do not use RAG, as well as over reinforcement learning algorithms reported in other literature.
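The deployment latency problem described above can be illustrated with a toy model (the formulation, delay values, and function names below are hypothetical stand-ins for illustration, not the paper's actual model): each microservice in a request chain is placed on an edge node, and total latency is the sum of per-node processing delays and inter-node network delays. Exhaustive placement search is exponential in the number of services, which is why the general problem is NP-hard and heuristics or generated solvers are needed.

```python
# Toy model of microservice placement on edge nodes (hypothetical
# formulation for illustration; the paper's actual model differs).
from itertools import product

def total_latency(placement, proc_delay, net_delay, chain):
    """Latency of one request traversing a microservice chain."""
    lat = 0.0
    for i, svc in enumerate(chain):
        node = placement[svc]
        lat += proc_delay[node]              # processing delay on the node
        if i > 0:                            # network hop from previous node
            prev = placement[chain[i - 1]]
            lat += net_delay[prev][node]
    return lat

def brute_force_place(services, nodes, proc_delay, net_delay, chain):
    """Exhaustive search: exponential in len(services), hence intractable
    at scale -- a minimal stand-in for the NP-hard optimization."""
    best, best_lat = None, float("inf")
    for combo in product(nodes, repeat=len(services)):
        placement = dict(zip(services, combo))
        lat = total_latency(placement, proc_delay, net_delay, chain)
        if lat < best_lat:
            best, best_lat = placement, lat
    return best, best_lat
```

With two nodes and a three-service chain, the search co-locates all services on the faster node when network delays dominate, which matches intuition for latency-sensitive chains.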
The use of LLMs also highlights the concept of symmetry, as the symmetrical structure of input-output relationships in microservice deployment models aligns with the LLM's inherent ability to process and generate balanced and optimized code. Symmetry in this context allows for more efficient resource allocation and reduces redundant operations, further enhancing the model's effectiveness. We believe that LLMs hold substantial potential in optimizing microservice deployment models.
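At a high level, the retrieval-then-staged-prompting workflow the abstract describes might look like the following sketch. Everything here is an assumption for illustration: the keyword-overlap retriever is a naive placeholder for a real RAG vector store, and `llm_generate` stands in for whatever LLM API the authors used.

```python
# Sketch of a RAG + chain-of-thought code-generation loop
# (hypothetical; not the authors' implementation).

def retrieve(query, knowledge_base, k=2):
    """Naive keyword-overlap retrieval over a list of text snippets,
    standing in for a vector-database lookup."""
    scored = sorted(
        knowledge_base,
        key=lambda doc: -len(set(query.lower().split()) & set(doc.lower().split())),
    )
    return scored[:k]

def solve_with_rag_cot(sub_problems, knowledge_base, llm_generate):
    """Feed each natural-language sub-problem to the LLM together with
    retrieved context and the code generated so far, so each step can
    build on the previous ones (chain-of-thought style)."""
    code_blocks = []
    for step, problem in enumerate(sub_problems, start=1):
        context = "\n".join(retrieve(problem, knowledge_base))
        prompt = (
            f"Step {step}: {problem}\n"
            f"Relevant prior knowledge:\n{context}\n"
            f"Code generated so far:\n{''.join(code_blocks)}\n"
            "Think step by step, then emit only the next code block."
        )
        code_blocks.append(llm_generate(prompt))
    return "".join(code_blocks)
```

The key design choice mirrored here is that the LLM never sees the whole NP-hard problem at once: each prompt carries one sub-problem, the retrieved prior knowledge, and the accumulated code, which is what allows the generated blocks to be integrated and checked for consistency afterward.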
Pages: 22
Related Papers
50 in total
  • [1] Integrating Graph Retrieval-Augmented Generation With Large Language Models for Supplier Discovery
    Li, Yunqing
    Ko, Hyunwoong
    Ameri, Farhad
    JOURNAL OF COMPUTING AND INFORMATION SCIENCE IN ENGINEERING, 2025, 25 (02)
  • [2] Empowering large language models for automated clinical assessment with generation-augmented retrieval and hierarchical chain-of-thought
    Gu, Zhanzhong
    Jia, Wenjing
    Piccardi, Massimo
    Yu, Ping
    ARTIFICIAL INTELLIGENCE IN MEDICINE, 2025, 162
  • [3] Integrating Retrieval-Augmented Generation with Large Language Models in Nephrology: Advancing Practical Applications
    Miao, Jing
    Thongprayoon, Charat
    Suppadungsuk, Supawadee
    Valencia, Oscar A. Garcia
    Cheungpasitporn, Wisit
    MEDICINA-LITHUANIA, 2024, 60 (03)
  • [4] Benchmarking Large Language Models in Retrieval-Augmented Generation
    Chen, Jiawei
    Lin, Hongyu
    Han, Xianpei
    Sun, Le
    THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 16, 2024, : 17754 - 17762
  • [5] Integrating Small Language Models with Retrieval-Augmented Generation in Computing Education: Key Takeaways, Setup, and Practical Insights
    Yu, Zezhu
    Liu, Suqing
    Denny, Paul
    Bergen, Andreas
    Liut, Michael
    PROCEEDINGS OF THE 56TH ACM TECHNICAL SYMPOSIUM ON COMPUTER SCIENCE EDUCATION, SIGCSE TS 2025, VOL 2, 2025, : 1302 - 1308
  • [7] Enhancing Retrieval-Augmented Large Language Models with Iterative Retrieval-Generation Synergy
    Shao, Zhihong
    Gong, Yeyun
    Shen, Yelong
    Huang, Minlie
    Duan, Nan
    Chen, Weizhu
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (EMNLP 2023), 2023, : 9248 - 9274
  • [8] Optimizing High-Level Synthesis Designs with Retrieval-Augmented Large Language Models
    Xu, Haocheng
    Hu, Haotian
    Huang, Sitao
    2024 IEEE LLM AIDED DESIGN WORKSHOP, LAD 2024, 2024
  • [9] Enhancing textual textbook question answering with large language models and retrieval augmented generation
    Alawwad, Hessa A.
    Alhothali, Areej
    Naseem, Usman
    Alkhathlan, Ali
    Jamal, Amani
    PATTERN RECOGNITION, 2025, 162
  • [10] TrojanRAG: Retrieval-Augmented Generation Can Be Backdoor Driver in Large Language Models
    Shanghai Jiao Tong University, China
    arXiv