Reducing cold start delay in serverless computing using lightweight virtual machines

Cited: 0
Authors
Karamzadeh, Amirmohammad [1 ]
Shameli-Sendi, Alireza [1 ]
Affiliation
[1] Shahid Beheshti Univ SBU, Fac Comp Sci & Engn, Tehran, Iran
Keywords
Serverless computing; Function as a service; Cold start delay; Lightweight virtual machine; Machine learning
DOI
10.1016/j.jnca.2024.104030
CLC Classification Number
TP3 [Computing Technology, Computer Technology]
Subject Classification Code
0812
Abstract
In recent years, serverless computing has gained considerable attention in academic, professional, and business circles. Unique features such as code development flexibility and the cost-efficient pay-as-you-go pricing model have led to predictions of widespread adoption of serverless services. Major players in the cloud computing sector, including industry giants like Amazon, Google, and Microsoft, have made significant advancements in the field of serverless services. However, cloud computing faces complex challenges, two prominent ones being the latency caused by cold start instances and the security vulnerabilities associated with container escapes. These challenges undermine the smooth execution of isolated functions, a concern amplified by sandboxing technologies such as Google gVisor and Kata Containers. While the integration of tools like lightweight virtual machines has alleviated concerns about container escape vulnerabilities, the primary issue remains the increased delay experienced during cold starts of serverless functions. The purpose of this research is to propose an architecture that reduces cold start delay overhead by utilizing lightweight virtual machines within a commercial architecture, thereby achieving a setup that closely resembles real-world scenarios. This research employs supervised learning to predict function invocations by leveraging the execution patterns of other functions in the same application. The goal is to proactively mitigate cold start scenarios by invoking the target function before the actual user request, effectively turning cold starts into warm starts. In this study, we compared our approach with two baseline strategies: a fixed window and a variable window. Commercial platforms such as Knative, OpenFaaS, and OpenWhisk typically employ a fixed 15-minute keep-alive window. In contrast to these platforms, our approach demonstrated a significant reduction in cold start incidents. Specifically, when calling a function 200 times with 5, 10, and 20 invocations within one hour, our approach reduced cold starts by 83.33%, 92.13%, and 90.90%, respectively. Compared to the variable window approach, which adjusts the window based on observed cold starts, our proposed approach prevented 82.92%, 91.66%, and 90.56% of cold starts for the same scenario. These results highlight the effectiveness of our approach in significantly reducing cold starts, thereby enhancing the performance and responsiveness of serverless functions. Our method outperformed both the fixed and variable window strategies, making it a valuable contribution to the field of serverless computing. Additionally, pre-invoking functions to convert cold starts into warm starts substantially reduces the execution time of functions within lightweight virtual machines.
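
To make the prediction-driven pre-warming idea concrete, the sketch below shows one way a supervised classifier trained on recent invocation history could decide when to pre-invoke a function, instead of keeping instances alive for a fixed 15-minute window as the baseline platforms do. This is a minimal illustration only: the per-minute feature layout, the logistic-regression model, and the prewarm() hook are assumptions for demonstration and do not reproduce the paper's actual architecture, dataset, or platform integration.

# Minimal sketch, assuming a per-minute invocation trace; names and the
# prewarm() hook are hypothetical, not the authors' implementation.
import numpy as np
from sklearn.linear_model import LogisticRegression

HISTORY_BINS = 12      # how many past per-minute counts the model sees
KEEP_ALIVE_MIN = 15    # fixed keep-alive window used by the baseline platforms

def build_dataset(counts):
    """Turn a per-minute invocation series into (features, label) pairs:
    the previous HISTORY_BINS counts predict whether the next minute has a call."""
    X, y = [], []
    for t in range(HISTORY_BINS, len(counts)):
        X.append(counts[t - HISTORY_BINS:t])
        y.append(1 if counts[t] > 0 else 0)
    return np.array(X), np.array(y)

def prewarm(function_name):
    # Placeholder for invoking the target function (e.g., a request to the FaaS
    # gateway) so its lightweight VM is already running before the real call.
    print(f"pre-warming {function_name}")

# Toy trace: short bursts of calls roughly every ten minutes.
trace = [1 if minute % 10 in (0, 1) else 0 for minute in range(300)]
X, y = build_dataset(trace)
model = LogisticRegression().fit(X, y)

# At runtime, feed the model the most recent history and pre-warm only when an
# invocation is predicted, rather than keeping the instance alive for
# KEEP_ALIVE_MIN minutes after every call.
recent = trace[-HISTORY_BINS:]
if model.predict(np.array([recent]))[0] == 1:
    prewarm("target-function")

In this toy setup the fixed-window baseline would hold the instance warm for the full keep-alive period after every call, whereas the predictor only triggers a warm-up when an invocation appears imminent; the paper's evaluation compares its learned approach against both the fixed and variable window strategies on real invocation patterns.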
Pages: 15