Optimizing Cloud Performance: A Microservice Scheduling Strategy for Enhanced Fault-Tolerance, Reduced Network Traffic, and Lower Latency

Research output: Contribution to journal › Article › peer-review

Abstract

The emergence of microservice architecture has brought significant advancements in software development, offering improved scalability and availability of applications. Cloud computing benefits from microservice architecture by mitigating the risk of single points of failure and helping ensure compliance with service-level agreements. However, microservice architecture presents two challenges: 1) managing inter-service network traffic, which can cause latency and network congestion; and 2) inefficient resource allocation for microservices. Current approaches have limitations in addressing these challenges. To overcome them, we propose a novel scheduling strategy that places microservice replicas on the most suitable physical machines using a modified particle swarm optimization algorithm. Additionally, we balance the load across the physical machines in the cluster using a simple round-robin algorithm. Furthermore, our scheduling strategy integrates with Kubernetes to tackle resource allocation and deployment challenges. The proposed strategy has been evaluated by simulating two scenarios using Alibaba and Google datasets. The experimental results demonstrate the effectiveness of our strategy in reducing traffic, balancing load, and utilizing CPU and memory efficiently.
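
As a rough illustration of the placement step described in the abstract, the Python sketch below shows how a discrete, particle-swarm-style search could assign microservice replicas to physical machines while penalizing cross-machine traffic and load imbalance. All names, the fitness terms, the weighting, and the update probabilities are assumptions made for illustration; they are not taken from the paper's modified algorithm.

# Minimal, illustrative sketch of PSO-style replica placement.
# Fitness terms, weights, and probabilities are assumptions, not the
# paper's modified particle swarm optimization algorithm.
import random

NUM_REPLICAS = 6      # microservice replicas to place (hypothetical)
NUM_MACHINES = 3      # physical machines in the cluster (hypothetical)
NUM_PARTICLES = 20
ITERATIONS = 50

# Hypothetical pairwise traffic (requests/s) between replicas.
TRAFFIC = [[random.randint(0, 10) for _ in range(NUM_REPLICAS)]
           for _ in range(NUM_REPLICAS)]

def fitness(assignment):
    """Lower is better: cross-machine traffic plus load imbalance."""
    cross_traffic = sum(
        TRAFFIC[i][j]
        for i in range(NUM_REPLICAS)
        for j in range(i + 1, NUM_REPLICAS)
        if assignment[i] != assignment[j]
    )
    loads = [assignment.count(m) for m in range(NUM_MACHINES)]
    imbalance = max(loads) - min(loads)
    return cross_traffic + 5 * imbalance   # weight chosen arbitrarily

def random_assignment():
    return [random.randrange(NUM_MACHINES) for _ in range(NUM_REPLICAS)]

# Discrete PSO variant: each particle drifts toward its personal best
# and the global best by probabilistically copying their machine choices.
particles = [random_assignment() for _ in range(NUM_PARTICLES)]
personal_best = [p[:] for p in particles]
global_best = min(personal_best, key=fitness)

for _ in range(ITERATIONS):
    for k, p in enumerate(particles):
        for i in range(NUM_REPLICAS):
            r = random.random()
            if r < 0.4:
                p[i] = personal_best[k][i]             # pull toward personal best
            elif r < 0.7:
                p[i] = global_best[i]                  # pull toward global best
            elif r < 0.8:
                p[i] = random.randrange(NUM_MACHINES)  # exploration
        if fitness(p) < fitness(personal_best[k]):
            personal_best[k] = p[:]
    global_best = min(personal_best + [global_best], key=fitness)

print("best placement:", global_best, "fitness:", fitness(global_best))

Running the sketch prints one low-cost placement of replicas onto machines; the strategy described in the abstract additionally balances load across the machines with a round-robin step and integrates the placement with Kubernetes.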
Original language: English
Pages (from-to): 35135-35153
Number of pages: 19
Journal: IEEE Access
Volume: 12
Early online date: 5 Mar 2024
DOIs
Publication status: Published - 2024
