DRLBTSA: Deep reinforcement learning based task-scheduling algorithm in cloud computing

dc.contributor.author: Mangalampalli, Sudheer
dc.contributor.author: Karri, Ganesh Reddy
dc.contributor.author: Kumar, Mohit
dc.contributor.author: Khalaf, Osama Ibrahim
dc.contributor.author: Tavera Romero, Carlos Andres
dc.contributor.author: Abdul Sahib, Ghaida Muttashar
dc.date.accessioned: 2025-07-11T16:18:41Z
dc.date.available: 2025-07-11T16:18:41Z
dc.date.issued: 2024
dc.description.abstract: Task scheduling in the cloud paradigm has attracted the attention of many researchers, as it is a challenging problem due to uncertainty, heterogeneity, and the dynamic nature of workloads, which vary in size, processing requirements, and the number of tasks to be scheduled. An ineffective scheduling technique may therefore increase energy consumption, SLA violations, and makespan. Many authors have proposed heuristic approaches to solve the task scheduling problem in the cloud paradigm, but these fall short of the goal and need improvement, especially when scheduling multimedia tasks, which exhibit greater heterogeneity in processing requirements. To handle this dynamic nature of tasks in the cloud paradigm, a scheduling mechanism is needed that automatically makes decisions based on the tasks arriving at the cloud console and the tasks already running on the underlying virtual resources. In this paper, we use a Deep Q-learning network model to address this scheduling problem by searching for the optimal resource for each task. Extensive simulations were performed using the CloudSim toolkit and carried out in two phases: initially, a randomly generated workload was used; afterwards, the HPC2N and NASA workloads were used to measure the performance of the proposed algorithm. DRLBTSA is compared against baseline algorithms such as FCFS, RR, and Earliest Deadline First. The simulation results show that the proposed scheduler DRLBTSA minimizes makespan over RR, FCFS, EDF, RATS-HM, and MOABCQ by 29.76%, 41.03%, 27.4%, 33.97%, and 33.57% respectively; reduces the SLA violation percentage over RR, FCFS, EDF, RATS-HM, and MOABCQ by 48.12%, 41.57%, 37.57%, 36.36%, and 30.59% respectively; and reduces energy consumption over RR, FCFS, EDF, RATS-HM, and MOABCQ by 36.58%, 43.2%, 38.22%, 38.52%, and 33.82% respectively, relative to these existing approaches.
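The abstract above describes a Deep Q-learning agent that picks a virtual resource for each arriving task. Since this record does not include the paper's implementation, the following is a minimal, hypothetical sketch of that idea, using plain tabular Q-learning in place of a deep network; the two-VM setup, the reward (negative growth in makespan), and all hyperparameters are illustrative assumptions, not the authors' method.

```python
import random
from collections import defaultdict

# Hypothetical sketch: tabular Q-learning stand-in for a Deep Q-learning
# task scheduler. The VM speeds, reward shaping, and hyperparameters are
# illustrative assumptions.
random.seed(0)

NUM_VMS = 2
VM_SPEED = [1.0, 2.0]            # assumed relative processing speeds
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1

Q = defaultdict(float)           # Q[(state, action)] -> estimated value

def state_of(loads):
    # Discretize each VM's pending load into coarse buckets (capped at 5).
    return tuple(min(int(load), 5) for load in loads)

def choose_vm(state, eps=EPS):
    # Epsilon-greedy action selection over the VMs.
    if random.random() < eps:
        return random.randrange(NUM_VMS)
    return max(range(NUM_VMS), key=lambda a: Q[(state, a)])

def train(episodes=200, tasks_per_episode=20):
    for _ in range(episodes):
        loads = [0.0] * NUM_VMS
        for _ in range(tasks_per_episode):
            task_len = random.uniform(1.0, 3.0)   # synthetic task length
            s = state_of(loads)
            a = choose_vm(s)
            before = max(loads)                   # current makespan
            loads[a] += task_len / VM_SPEED[a]
            reward = -(max(loads) - before)       # penalize makespan growth
            s2 = state_of(loads)
            best_next = max(Q[(s2, b)] for b in range(NUM_VMS))
            Q[(s, a)] += ALPHA * (reward + GAMMA * best_next - Q[(s, a)])

train()
# After training, the greedy policy tends to route new work to the
# less-loaded, faster VM when the first VM is already saturated.
```

With more state features (task sizes, deadlines, energy profiles) the Q-table becomes intractable, which is the usual motivation for replacing it with a deep network as in the paper.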
dc.identifier.citation: Mangalampalli, S., Karri, G.R., Kumar, M. et al. DRLBTSA: Deep reinforcement learning based task-scheduling algorithm in cloud computing. Multimed Tools Appl 83, 8359–8387 (2024). https://doi.org/10.1007/s11042-023-16008-2
dc.identifier.issn: 1380-7501
dc.identifier.uri: https://repositorio.usc.edu.co/handle/20.500.12421/7394
dc.language.iso: en
dc.publisher: Springer
dc.subject: Cloud Computing
dc.subject: Deep Q-Learning
dc.subject: Energy consumption
dc.subject: Machine Learning
dc.subject: Makespan
dc.subject: SLA violation
dc.subject: Task Scheduling
dc.title: DRLBTSA: Deep reinforcement learning based task-scheduling algorithm in cloud computing
dc.type: Article

Files

Original bundle
Name:
DRLBTSA.pdf
Size:
2.98 MB
Format:
Adobe Portable Document Format
License bundle
Name:
license.txt
Size:
1.71 KB
Format:
Item-specific license agreed upon to submission
Description: