
\documentclass[conference]{IEEEtran}


% *** CITATION PACKAGES ***
%
\usepackage{cite}
\usepackage{multirow}
\usepackage{array}
\usepackage{tabularx}
\usepackage{color}
%\usepackage{algorithmic}
\usepackage{algorithm}
\usepackage{algpseudocode}
% *** GRAPHICS RELATED PACKAGES ***
\ifCLASSINFOpdf
\usepackage[pdftex]{graphicx}
\else
\usepackage[dvips]{graphicx}
\fi

\begin{document}

\title{An Empirical Investigation on the Simulation of Priority and Shortest-Job-First Scheduling for Cloud-based Software Systems}

\author{\IEEEauthorblockN{Jia Ru}
\IEEEauthorblockA{\emph{Department of Computing}\\
\emph{The Hong Kong Polytechnic University}\\
\emph{Hong Kong SAR}\\
\emph{Email: csrjia@comp.polyu.edu.hk}}
\and
\IEEEauthorblockN{Jacky Keung}
\IEEEauthorblockA{\emph{Department of Computer Science}\\
\emph{City University of Hong Kong}\\
\emph{Hong Kong SAR}\\
\emph{Email: Jacky.Keung@cityu.edu.hk}}
}

\maketitle
\begin{abstract}
\textit{Background}: Given the dynamic resource allocation schemes offered by cloud computing, effective scheduling algorithms are essential to exploit these benefits.
\newline
\textit{Aim}: In this paper, we propose a scheduling algorithm that integrates task grouping, bandwidth-aware prioritization and SJF (Shortest-Job-First) to reduce waiting time and makespan, as well as to maximize resource utilization.
\newline
\textit{Method}: Scheduling is responsible for allocating tasks to the most suitable resources while considering dynamic parameters, restrictions and demands, such as network restrictions, resource processing capability and waiting time. The proposed scheduling algorithm integrates task grouping, bandwidth-aware prioritization and the SJF algorithm, aiming to reduce processing time, waiting time and overhead. In the experiments, tasks are generated using a Gaussian distribution, resources are created using a uniform random distribution, and the CloudSim framework is used to simulate the proposed algorithm under various conditions. Results are then compared with existing algorithms for evaluation.
\newline
\textit{Results}: In comparison with existing task grouping algorithms, results show that the proposed algorithm reduces waiting time and processing time significantly (by over 30\%).
\newline
\textit{Conclusion}: The proposed method effectively minimizes waiting time and processing time, reduces processing cost to achieve optimal resource utilization with minimum overhead, and mitigates the influence of bandwidth bottlenecks in communication.
\end{abstract}

\begin{IEEEkeywords}
Cloud Computing, Task Grouping, Scheduling, SJF, Software Metrics
\end{IEEEkeywords}
\IEEEpeerreviewmaketitle

\section{Introduction}
Cloud computing is a new software system technology that allows dynamic resource allocation on consolidated resources, using a combination of techniques from parallel computing, distributed computing and platform virtualization \cite{selvarani2010improved}. Cloud computing has been a primary focus of both the research community and industry in recent years because of its flexibility in software deployment and its elastic capability for resource consolidation.

%Cloud Computing, the long-held dream of computing as a utility, has the potential to transform a large part of the IT industry, making software even more attractive as a service and shaping the way IT hardware is designed and purchased \cite{buyya2009cloud}.

Software engineering for cloud platform systems is a new domain of research, requiring careful consideration of its characteristics with respect to traditional software development paradigms. In particular, effective scheduling of runtime tasks has become one of the key research focuses. The aim of cloud computing is to realize cooperative work and resource sharing, but the heterogeneity and dynamic nature of the different kinds of resources, together with the diversity of user demands, make resource management very complex. The scheduling problem is therefore an important research area in this regard. The utilization rate of the huge resources in a data centre depends on the scheduling mechanisms applied. Choosing which scheduling mechanism to adopt to improve resource utilization is a significant challenge due to a number of factors, which we explore in this study.


%Cloud computing can also be considered as a software delivery model where a software and associated data are centrally hosted on the cloud.

The fundamental mechanism of this new kind of software system is to schedule applications onto a resource pool consisting of widely distributed computers \cite{xu2011job}. Scheduling is a decision process: it assigns resources to the applications of different clients at a suitable time, or during a specific period of time. Scheduling optimization typically targets one or more factors, including cost, task completion time, task priority and profit. While guaranteeing the resource utilization rate, scheduling policies mainly focus on the allocation and management of resources and on satisfying the resource demands of users. Ultimately, scheduling policies should increase the number of completed applications, increase the profit of the service party, reduce the cost the service party bears when accepting applications, and guarantee the QoS (Quality of Service) demanded by clients.

In cloud computing systems, there exist applications composed of a great number of lightweight tasks. Dispatching these fine-grained tasks to a pool of resources with high processing capability is not economical and incurs extra waiting time and turnaround time compared with allocating coarse-grained tasks to the resources \cite{liu2009grouping}. The overall turnaround time includes each task's scheduling time, execution time and transmission time, so a large number of fine-grained tasks spend a great deal of time on scheduling and transmission.
Allocating a fine-grained task to a resource with high processing capability also wastes that processing capability and lowers the resource utilization rate. The total turnaround time of fine-grained tasks can therefore be reduced by grouping them into coarse-grained tasks during the scheduling process.

This paper focuses on evaluating and improving this type of deployment policy, and conducts an empirical experiment to examine various scheduling algorithms that are important to the development of software for cloud platforms. The CloudSim simulation platform is employed in the experiments. We have extended the CloudSim simulator for our experimentation so that useful parameters and data, such as scheduling algorithms, task characteristics and file characteristics, can be varied easily.

The rest of this paper is organized as follows: Section \ref{sec:background} reviews the background. Section \ref{sec:proposed} presents the proposed algorithm and its strategy. 
Section \ref{sec:experimentsetup} provides simulations and experiments on the proposed scheduling algorithm using CloudSim. Section \ref{sec:results} provides results and discussion on the experiments in comparison with existing algorithms. Section \ref{sec:conclusion} concludes the paper and proposes future research directions. 
%Section 4 provides simulations and experiments executed on the proposed scheduling algorithm using CloudSim toolkit and results of experiments are presented in comparison with some of the existing algorithms. Section 5 concludes the paper and proposes future research directions.
\section{Background}
\label{sec:background}
Cloud computing consists of a cluster of computing resources delivered over a network, accomplished by utilizing virtualization technologies to consolidate and allocate resources suitable for various software applications. It provides a platform for solutions requiring different configurations, emulating physical hardware combinations in a virtualized cloud environment managed by cloud platform software to deliver enhanced services. The strategies used in the cloud platform software are therefore important, since they directly influence the runtime performance of the software applications running on the platform. Effective scheduling policies that maximize the utilization of the virtualized resources are thus the primary focus of this study. This section provides a summary of related scheduling approaches applied in cloud computing.
%Cloud computing is composed of clusters, and virtual machines  embodying different resources are distributed in these clusters, which are located in different regions. Therefore, the unique characteristic of cloud computing is multiple nodes. Hence, scheduling policy is very important in cloud computing to process applications which can be considered as jobs. What we focus on in this study is choosing suitable resource in virtual machine to jobs to maximize the utilization of resources. Thus scheduling algorithms adopted are appropriate for cloud-based systems to process multiple-nodes jobs. 
%Cloud computing is the development of parallel computing, distributed computing, and grid computing, which unique characteristic is multiple nodes. It implies that many virtual machines are distributed in different regions as clusters in cloud computing as well as all the resources are allocated in these virtual machines. Therefore, scheduling policy is very important to process applications which can be considered as jobs. What we focus on in this study is choosing suitable resource in virtual machine to jobs to maximize the utilization of resources. Hence, an efficient scheduling algorithm and deployment is significant in cloud computing.
\subsection{Scheduling models in Cloud Computing}
In traditional distributed environments, scheduling optimization mainly focuses on system performance, such as system throughput and CPU utilization rate, and almost never considers QoS. In a cloud computing environment, we not only emphasize resource utilization rate and system performance, but also require a guaranteed QoS for users with different demands. Users can choose resources in the cloud themselves according to their own requirements.
\subsubsection{\textbf{Cloud computing scheduling model}}
The cloud computing scheduling model is mainly composed of clients, a broker, resources, resource suppliers and an information service. Fig.\ref{scheduling} shows the scheduling model structure \cite{buyya2000economy}. The tasks that users need to run can usually be divided into serial applications, parallel applications, parameter-sweep applications, cooperative applications and so on. The system allows users to set up resource demands and parameter preferences. Different clients use resources at different prices, which may vary from time to time. The broker is an intermediary between clients and resources; it finds resources, selects resources, accepts tasks, returns scheduling results, and exchanges information between clients and resources. The broker supports different scheduling policies, which allocate resources and schedule tasks according to the demands of clients. The broker consists of a Job Control Agent, a Schedule Advisor, a Cloud Explorer, a Trade Manager and a Deployment Agent.
\begin{figure*}[t]
\setlength{\abovecaptionskip}{0pt}
\setlength{\belowcaptionskip}{-10pt}
\centering
%\includegraphics[width= 11.5cm,height= 6cm]{schedulingmodel.pdf}
\includegraphics[width= 11cm]{schedulingmodel4.pdf}
%\includegraphics[width= 11cm]{schedulingmodel.pdf}
\caption{Chart of scheduling model structure}
\label{scheduling}
\end{figure*}
\begin{itemize}
  \item Job Control Agent: It is responsible for monitoring jobs in the software system, including schedule generation, job creation and job status, and for communicating with clients and the Schedule Advisor.
  \item Schedule Advisor: It determines resources, allocates available resources that satisfy client demands such as deadline and cost, and allocates jobs.
  \item Cloud Explorer: It is a tool that communicates with the cloud information service to find resources, identifies the list of authorized machines, and records resource status information.
  \item Trade Manager: It determines resource access costs and tries to negotiate with resources at low cost under the guidance of the Schedule Advisor.
  \item Deployment Agent: It uses scheduler instructions to activate the execution of tasks and sends execution status updates back to the Job Control Agent at regular intervals.
\end{itemize}
%Information service is mainly used to available resource information. If brokers want to find out appropriate resource, they must make enquiry in information service and gain the resource information which satisfies condition of clients. After this, brokers can only interact with resource providers. If resource providers have new resource to lease, they must register in information service firstly, and then brokers are able to find the resource.

During a transaction between clients and service providers, the service providers register their resource information first. After clients submit tasks to the broker, the broker searches for resources in the information service and deploys tasks to appropriate resources according to the corresponding scheduling algorithms. Before executing the tasks, the broker evaluates their completion time and cost. If the time exceeds the deadline or the cost is higher than the client's budget, the broker denies the tasks. If the execution of the tasks is accomplished, the broker returns the deployment results to the clients and gains the relevant profit; otherwise, it sends an error message back to the clients.
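The broker's admission check can be sketched as follows; this is an illustrative Python fragment, not the paper's implementation, and the function names and the linear pay-per-second cost model are assumptions.

```python
# Hypothetical sketch of the broker's admission check: estimate the
# completion time and cost of a task before execution, and deny it if
# either exceeds the client's deadline or budget.

def estimate_time(task_mi, resource_mips):
    """Estimated execution time = processing requirement / processing capability."""
    return task_mi / resource_mips

def admit(task_mi, deadline, budget, resource_mips, cost_per_second):
    """Accept the task only if it fits both the deadline and the budget
    (assumed linear pay-per-second cost model)."""
    t = estimate_time(task_mi, resource_mips)
    return t <= deadline and t * cost_per_second <= budget

# A 4000-MI task on a 1000-MIPS resource takes an estimated 4 seconds.
print(admit(4000, deadline=5.0, budget=10.0, resource_mips=1000, cost_per_second=2.0))  # True
print(admit(4000, deadline=3.0, budget=10.0, resource_mips=1000, cost_per_second=2.0))  # False
```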
\subsubsection{\textbf{Basic scheduling methods}}
Scheduling methods always consider two aspects: the characteristics of the tasks and the characteristics of the datacenter resources \cite{korkhov2009dynamic}\cite{gomathi2011adaptive}. Tasks are submitted to resources that are free and where the input data is available, or alternatively to specific resources based on certain criteria \cite{singh2011greedy}.
\subsubsection{\textbf{Resource allocation}}
Resources in cloud computing can be allocated in many different ways. The traditional, simple method of task scheduling in a cloud environment uses the client tasks as the overhead application base \cite{selvarani2010improved}. Resource allocation policies that aim to maximize resource utilization include FCFS (First-Come-First-Served), SJF (Shortest-Job-First), priority scheduling, RR (Round-Robin), random, greedy and genetic algorithms \cite{sivanandam2007introduction}. Task scheduling can likewise be FCFS, SJF, priority-based, RR, job grouping and so on \cite{choudharydynamic}. Scheduling algorithms choose a task to be performed and the corresponding resource on which it will be executed, according to resource characteristics such as bandwidth, processing capability, cost and load balancing, as well as client deployment requirements. 
%for example, deadline, profit, cost, priority and so forth, choosing a suitable resource to a task is very significant.

In this paper, we focus on both resource allocation and task scheduling, taking into account specific criteria and priorities of tasks and resources, such as resource bandwidth and processing capability, task granularity (fine-grained and coarse-grained) and deadline.
\subsection{Task grouping scheduling algorithm}
Task grouping means that tasks of similar type or characteristics can be grouped together and scheduled collectively \cite{wild2004understanding}. The dynamic grouping strategy mainly aims at resource utilization. Clients submit tasks to the scheduler, and the scheduler obtains the relevant characteristics of the resources. The scheduler then selects a specific resource according to some priority and multiplies the resource's MIPS (processing capability) by the granularity size; the result is the resource's total MI (processing requirement) within the granularity period. Next, the scheduler groups client tasks by accumulating each task's MI. When the accumulated total task MI exceeds the resource's total MI, grouping stops: the last task added to the group is removed and its MI is subtracted from the accumulated result. The final accumulated result becomes the MI of a new grouped task, which is created with a unique ID. Finally, the scheduler allocates the new grouped task to the corresponding resource. The grouping process continues until all tasks are placed in new groups and allocated to the cloud resources. The cloud resources then execute all the grouped tasks and send the executed grouped tasks back to the clients after processing completes.
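As a runnable illustration (a Python sketch under stated assumptions, not the paper's CloudSim code), the accumulate-and-close grouping step for a single selected resource can be written as:

```python
# Group fine-grained task MIs against one resource: accumulate task MI
# until the total would exceed the resource's total MI
# (MIPS * granularity size), then close the group, as described above.

def group_tasks(task_mis, resource_mips, granularity_size):
    total_res_mi = resource_mips * granularity_size
    groups, current = [], 0
    for mi in task_mis:
        if current + mi > total_res_mi and current > 0:
            # equivalent to "remove the last added task":
            # close the group before this task would overflow it
            groups.append(current)
            current = 0
        current += mi
    if current > 0:
        groups.append(current)  # final (possibly partial) group
    return groups

# A 100-MIPS resource with a 5-second granularity accepts 500 MI per group.
print(group_tasks([200, 150, 100, 300, 50], resource_mips=100, granularity_size=5))
# [450, 350]
```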

A basic factor that influences job grouping is the granularity size, the time period for which jobs are processed at a resource. It is used to determine the number of jobs executed on a specific resource during a particular time. The total number of tasks, the processing requirements of the tasks, the total number of resources and the processing capabilities of the resources all need to be considered. It is therefore very important to determine the value of the granularity size so as to minimize overhead, cost and waiting time and to maximize the utilization of the resources \cite{muthuvelu2005dynamic}.
\subsection{Prioritization with bandwidth awareness}
The basic concept of bandwidth-aware scheduling is used at the Stream Control Transmission Protocol (SCTP) layer \cite{ang2009bandwidth}. A general estimate of the bandwidth available on each round-trip path is obtained by transmitting pairs of SCTP heartbeats on each path; the corresponding heartbeat-acks are sent back to the receiver and evaluated using the Packet-Pair Bandwidth Estimation (PPBE) technique \cite{carter1996measuring}. Given low network bandwidth and high latency, a reasonable distribution of jobs to resources can reduce delays and overheads. Before grouping, the scheduler receives resource information such as processing capabilities and network bandwidth. The scheduler sorts the resources in descending order of network bandwidth to reduce communication latency between tasks and resources and to minimize waiting time to a certain extent. First, the scheduler selects the resource with the highest communication and transmission rate, groups independent fine-grained tasks based on the selected resource's processing capability, and after grouping sends the grouped coarse-grained task to the chosen resource. The resource with the second-highest communication and transmission rate is then chosen and handled in the same way. This process is repeated until all the tasks are grouped and executed.
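The bandwidth-aware prioritization above amounts to a descending sort over the resource list, sketched below with illustrative resource records (the field names and values are assumptions):

```python
# Sort resources so that the highest-bandwidth link is served first,
# as in the bandwidth-aware prioritization described above.

resources = [
    {"id": "r1", "bandwidth": 100, "mips": 400},
    {"id": "r2", "bandwidth": 500, "mips": 250},
    {"id": "r3", "bandwidth": 300, "mips": 600},
]

# Resource with the highest communication/transmission rate comes first.
by_bandwidth = sorted(resources, key=lambda r: r["bandwidth"], reverse=True)
print([r["id"] for r in by_bandwidth])  # ['r2', 'r3', 'r1']
```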

%\subsection{SJF (Shortest-Job-First) scheduling algorithm}
%SJF (Shortest-Job-First) scheduling algorithm can be preemptive or non-preemptive, where the job with the smallest estimated completion time is executed next. In another word, when CPU is available, it is assigned to the process that has smallest next CPU burst. SJF scheduling algorithm can achieve the minimum average waiting time for a series of jobs on a specific resource, and is suitable for batch jobs given processing time are known in advance \cite{silberschatz1998operating}. The problem of SJF is how to know the estimated processing time of jobs. In this paper, SJF scheduling algorithm associates the tasks processing requirement and resources processing capability and are combined with job grouping and bandwidth-aware. In this work, assuming task processing requirements and resource processing capabilities are given in advance. Therefore, estimated processing time can be calculated by given task processing requirements and resource processing capabilities. Thus, SJF is best integrated with the proposed algorithm.

\subsection{Waiting time}
Waiting time is the sum of the periods a job spends waiting in the ready queue \cite{silberschatz1998operating}. Minimizing waiting time is an optimization target in job scheduling, a well-known scheduling problem that is significant for providing Quality of Service (QoS) in industry \cite{li2007job}. For a batch of jobs executing on a resource, minimizing waiting time reduces completion time and helps meet task priority and deadline demands.
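A small worked example (an illustrative sketch, not from the paper) shows how job ordering affects waiting time when jobs run back-to-back on a single resource:

```python
# Each job's waiting time is the total runtime of the jobs ahead of it;
# serving short jobs first (SJF) lowers the average wait.

def waiting_times(burst_times):
    waits, elapsed = [], 0
    for burst in burst_times:
        waits.append(elapsed)   # time this job spends in the ready queue
        elapsed += burst
    return waits

fcfs = waiting_times([8, 4, 2])          # arrival order: waits [0, 8, 12]
sjf = waiting_times(sorted([8, 4, 2]))   # shortest first: waits [0, 2, 6]
print(sum(fcfs) / 3, sum(sjf) / 3)       # SJF average is lower
```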

%Since each scheduling policy has drawbacks in certain extent, we extract the beneficial aspects of these algorithms and propose a integrated algorithm to minimize the drawbacks of resultant algorithms. Compared with SJF algorithm, the proposed algorithm can decrease waiting time and compared with task grouping, it can execute tasks whose requirements are random and large and reduce processing cost as well as processing time. The proposed algorithm also is integrated prioritization of bandwidth, so it can reduce network latency, minimize makespan and improve the utilization of resources. reduces network delay,
\section{The Proposed Algorithm}
\label{sec:proposed}
The proposed algorithm integrates the three techniques described in Section \ref{sec:background}: task grouping, bandwidth-aware prioritization and SJF. The proposed algorithm maximizes the utilization of cloud resources and aims to reduce task waiting time. Fig.\ref{proposed} shows the main procedure of the proposed algorithm.
\begin{figure*}[t]
\setlength{\abovecaptionskip}{0pt}
\setlength{\belowcaptionskip}{-10pt}
\centering
\includegraphics[width= 11cm]{simulationdee.pdf}
\caption{Flow chart of proposed algorithm}
\label{proposed}
\end{figure*}
\subsection{Improved task scheduling algorithm}
The traditional grouping-based scheduling algorithm does not consider network bandwidth or task file size \cite{liu2009grouping}. It only takes into account fine-grained jobs whose processing requirements are small and almost equal \cite{selvarani2010improved}\cite{liu2009grouping}\cite{muthuvelu2005dynamic}\cite{ang2009bandwidth}. In practice, however, tasks are uncertain and random: their sizes are not necessarily lightweight, and their processing requirements are not known in advance. The grouping-based scheduling algorithm proposed in this paper is suitable both for very lightweight jobs and for tasks whose processing requirements are random and unpredictable.

\textbf{Improvement:} Traditional task grouping scheduling does not handle the situation where a task's processing requirement (MI) exceeds each resource's total MI. Under this circumstance, grouping cannot continue. In the proposed algorithm, once a task's MI exceeds each resource's total MI, the task forms a new grouped task by itself, keeping its own MI and receiving a unique ID. This grouped task is then allocated to the next resource in sequence.
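The improvement can be sketched as follows (a hypothetical Python fragment; the CloudSim identifiers and grouped-task IDs are omitted): a task whose MI exceeds the resource's total MI becomes a single-task group instead of halting the grouping.

```python
def group_with_oversize(task_mis, total_res_mi):
    """Group task MIs against one resource's total MI; an oversized task
    becomes a coarse-grained group of its own (the proposed improvement)."""
    groups, current = [], 0
    for mi in task_mis:
        if mi > total_res_mi:
            if current > 0:          # close any open group first
                groups.append(current)
                current = 0
            groups.append(mi)        # the oversized task is its own group
            continue
        if current + mi > total_res_mi and current > 0:
            groups.append(current)   # normal grouping: close before overflow
            current = 0
        current += mi
    if current > 0:
        groups.append(current)
    return groups

# 700 MI exceeds the 500-MI capacity, so it is grouped alone.
print(group_with_oversize([200, 700, 150, 100], total_res_mi=500))  # [200, 700, 250]
```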
\subsection{Prioritization with processing capability awareness}
Before grouping, the scheduler receives the processing capability of each cloud resource. The scheduler sorts the resources in descending order of processing capability (MIPS) to reduce the processing time of grouped tasks and to minimize waiting time to a certain extent. First, the scheduler selects the resource with the largest MIPS, groups independent fine-grained tasks based on the selected resource's processing capability, and after grouping submits the grouped coarse-grained task to the chosen resource. The resource with the second-largest MIPS is then chosen to group the remaining tasks in the same way. This process is repeated until all the tasks are grouped and executed.
%operate tasks using previously mentioned method and so on. This process is repeated until all the tasks are grouped and executed.
\subsection{Proposed scheduling algorithm (pseudo code)}
Table \ref{definitionpro} lists the definitions used in the proposed algorithm, which can be divided into four phases.
\begin{table*}[!htbp]
%\setlength{\abovecaptionskip}{0pt}
%\setlength{\belowcaptionskip}{-10pt}
\centering
\caption{Definitions used in the proposed algorithm}
\label{definitionpro}
\begin{tabular}{|l|l|}
\hline
\multicolumn{2}{|c|}{\textbf{Definitions of the proposed algorithm}} \\
\hline
MI: &Million instruction or processing requirements of a task  \\
MIPS: &Million instruction per second or processing capability of a resource\\
FileSize: &The input file size of the task before submitting to a cloud Resource (Unit: Bytes)\\
OutputFileSize: &The output size of the task after submitting and executing to a cloud Resource (Unit: Bytes)\\
$\mbox{cloudletList}_{i}$\_FileSize: &The $\mbox{i}^{th}$ task input file size before execution\\
$\mbox{cloudletList}_{i}$\_OutputFileSize: &The $\mbox{i}^{th}$ task file size after submitting and executing to a resource\\
cloudletList: & List of the tasks submitted to the broker\\
ResList: & List of resources available in datacenter\\
cloudletList\_size: &The total number of tasks\\
ResList\_size: & The total number of resources\\
granularity\_size: &Granularity size (unit: second) for task grouping\\
total\_jobMI: &The sum of tasks processing requirement in one group\\
$\mbox{total\_resMI}_{j}$: &Total processing capabilities during a time of the $\mbox{j}^{th}$  resource\\
$\mbox{ResList}_{j}$\_MIPS:  &The $\mbox{j}^{th}$  resource MIPS\\
max\_cloudletFileSize: &The maximum task input file size in one group\\
max\_cloudletOutputFileSize:& The maximum task file size after submitting and executing to a resource in one group\\
$\mbox{cloudletList}_{i}$\_MI: &The $\mbox{i}^{th}$ task processing requirement (MI)\\
groupedcloudletList: &List of tasks after grouping scheduling\\
groupedvmList: &List of resources for each task group\\
groupedcloudletList\_size:  &The total number of grouped tasks after grouping\\
\hline
\end{tabular}
\end{table*}
\subsubsection{\textbf{Phase 1}}
Initialize the input data and available resources. A Gaussian distribution function is used to create the tasks (cloudlets): the tasks' processing requirements (MI) follow a normal distribution, since in practice tasks are generated randomly and cannot be predicted. A random function is used to set the tasks' file sizes and output file sizes, which follow a uniform distribution. Similarly, a random function is used to create resources whose MIPS and bandwidth follow uniform distributions.
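Phase 1 can be sketched as below; the distribution parameters and value ranges are illustrative assumptions rather than the paper's actual experimental settings.

```python
import random

random.seed(0)  # reproducible simulation runs

def make_tasks(n, mi_mean=500.0, mi_std=150.0):
    """Tasks (cloudlets) whose MI follows a normal (Gaussian) distribution;
    file sizes follow a uniform distribution."""
    return [{
        "mi": max(1, int(random.gauss(mi_mean, mi_std))),
        "file_size": random.randint(100, 1000),         # bytes, uniform
        "output_file_size": random.randint(100, 1000),  # bytes, uniform
    } for _ in range(n)]

def make_resources(m):
    """Resources whose MIPS and bandwidth follow uniform distributions."""
    return [{"mips": random.randint(100, 1000),
             "bandwidth": random.randint(10, 100)} for _ in range(m)]

tasks, resources = make_tasks(50), make_resources(5)
print(len(tasks), len(resources))  # 50 5
```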
\subsubsection{\textbf{Phase 2}}
Prioritize tasks according to different demands and factors. For bandwidth awareness, the resource whose network has the highest communication and transmission rate is selected first to reduce transmission latency; the scheduler sorts resources by bandwidth in descending order. For processing capability awareness, the resource with the largest processing capability is chosen first to decrease execution time and minimize overhead; the scheduler sorts resources by processing capability in descending order. The following shows the prioritization method using the sort function.
{\renewcommand\baselinestretch{0.8}\selectfont
\begin{algorithm} [h]
\caption{Prioritization method}
\label{Prioritization}
\begin{algorithmic}[1]
\State \textbf{sort} (ResList, bandwidth in descending order)
\State \textbf{sort} (ResList, MIPS in descending order)
\end{algorithmic}
\end{algorithm}
\par}
\subsubsection{\textbf{Phase 3}}
In the improved task grouping algorithm, a coarse-grained task is considered and can be executed as an individual grouped task; the grouped task is then assigned to the next resource in sequence. The data file size after execution is collected to calculate the processing cost.
Algorithm \ref{improved} describes the improved task scheduling algorithm in pseudo code.
First, the algorithm determines whether a task's MI is less than or greater than the cloud resource's MIPS$*$granularity\_size. If a task's MI is larger than $\mbox{total\_resMI}_{j}$, the task is grouped by itself as one group. Otherwise, a group is formed from several fine-grained tasks, and the maximum task file size and output file size within each group are used as the grouped task's file size and output file size, respectively.
{\renewcommand\baselinestretch{0.8}\selectfont
\begin{algorithm} [t]
\caption{\textbf{The improved task scheduling algorithm}}
\label{improved}
\begin{algorithmic}[1]
\raggedright
\State groupid = 0
\For{i := 0 to cloudletList\_size-1}
\State m = i
\For{j := 0 to ResList\_size-1}
\State total\_jobMI = 0
\State max\_cloudletFileSize = 0
\State max\_cloudletOutputSize = 0
\State pre\_cloudletFileSize = 0
\State pre\_cloudletOutputSize = 0
\State $\mbox{total\_resMI}_{j} = \mbox{ResList}_{j}$\_MIPS$*$granularity\_size
\While{total\_jobMI$\le$$\mbox{total\_resMI}_{j}$ and i$\le$cloudletList\_size-1}
\State pre\_cloudletFileSize = max\_cloudletFileSize
\State pre\_cloudletOutputSize = max\_cloudletOutputSize
\State total\_jobMI = total\_jobMI$+$$\mbox{cloudletList}_{i}$\_MI
\If{max\_cloudletFileSize$<$$\mbox{cloudletList}_{i}$\_FileSize}
\State max\_cloudletFileSize = $\mbox{cloudletList}_{i}$\_FileSize
\EndIf
\If{max\_cloudletOutputSize$<$$\mbox{cloudletList}_{i}$\_OutputFileSize}
\State max\_cloudletOutputSize = $\mbox{cloudletList}_{i}$\_OutputFileSize
\EndIf
\State i$++$
\EndWhile
\State i$--$
\If{total\_jobMI$>$$\mbox{total\_resMI}_{j}$}
\State total\_jobMI = total\_jobMI-$\mbox{cloudletList}_{i}$\_MI
\State max\_cloudletFileSize = pre\_cloudletFileSize
\State max\_cloudletOutputSize = pre\_cloudletOutputSize
\State i$--$
\EndIf
\If{(m-1) == i}
\State i$++$
\State total\_jobMI = $\mbox{cloudletList}_{i}$\_MI
\EndIf
\State Create a new task whose MI equals total\_jobMI
\State Set a unique ID (groupid) to the created task
\State Insert this task into groupedcloudletList
\State Put the task on the $\mbox{groupedcloudletList}_{groupid}$
\State Insert the corresponding resource $\mbox{ResList}_{j}$ into groupedvmList
\State Put the corresponding resource $\mbox{ResList}_{j}$ on $\mbox{groupedvmList}_{groupid}$
\EndFor
\EndFor
\end{algorithmic}
\end{algorithm}
\par}
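For illustration, the grouping phase above can be sketched as stand-alone Python (a hypothetical sketch only; CloudSim itself is Java, and the function and variable names here are ours, not CloudSim APIs):

```python
def group_tasks(task_mis, res_mips, granularity_size):
    """Group fine-grained tasks so that each group's total MI fits within
    one resource's capacity over the granularity window (MIPS * granularity).
    A task larger than the window forms its own group."""
    groups, assigned = [], []
    i, j = 0, 0
    while i < len(task_mis):
        cap = res_mips[j % len(res_mips)] * granularity_size
        total, start = 0, i
        # greedily accumulate tasks while they still fit in this window
        while i < len(task_mis) and total + task_mis[i] <= cap:
            total += task_mis[i]
            i += 1
        if i == start:  # oversized task: it is grouped by itself
            total = task_mis[i]
            i += 1
        groups.append(total)
        assigned.append(j % len(res_mips))
        j += 1
    return groups, assigned
```

With one 10-MIPS resource and a 10-second granularity window (capacity 100 MI), tasks of 30, 40, and 20 MI fit into one 90-MI group, while a 150-MI task forms its own group.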
\subsubsection{\textbf{Phase 4}}
The SJF scheduling algorithm is adopted to reduce task waiting time. The grouped tasks are sorted by their processing requirements in ascending order; the grouped task with the shortest processing requirement is then executed first on its corresponding resource, followed by the task with the second-shortest requirement, and so on.
Algorithm \ref{SJF} describes the pseudo code of the SJF scheduling algorithm and the allocation of grouped tasks to resources.
{\renewcommand\baselinestretch{0.8}\selectfont
\begin{algorithm*}
\caption{\textbf{SJF scheduling algorithm and allocation of grouped tasks to resources}}
\label{SJF}
\begin{algorithmic}[1]
\State \textbf{sort} (groupedcloudletList, MI in ascending order)
\For{k := 0 to groupedcloudletList\_size-1}
\State assign the $\mbox{groupedcloudletList}_{k}$ to the $\mbox{groupedvmList}_{k}$ for executing tasks
\EndFor
\For {k := 0 to groupedcloudletList\_size-1}
\State receive accomplished grouped tasks-$\mbox{groupedcloudletList}_{k}$ from the $\mbox{groupedvmList}_{k}$
\EndFor
\end{algorithmic}
\end{algorithm*}
\par}
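Phase 4 amounts to a sort followed by a pairwise assignment. A minimal, hypothetical Python sketch (names are illustrative, not CloudSim API):

```python
def sjf_schedule(grouped_mi, vm_mips):
    """Sort grouped tasks by MI ascending (shortest-job-first) and pair
    them with VMs in order; return (mi, vm_index, est_runtime) triples."""
    order = sorted(range(len(grouped_mi)), key=lambda k: grouped_mi[k])
    schedule = []
    for pos, k in enumerate(order):
        vm = pos % len(vm_mips)          # round-robin over the VM list
        est_runtime = grouped_mi[k] / vm_mips[vm]
        schedule.append((grouped_mi[k], vm, est_runtime))
    return schedule
```

On a single 100-MIPS VM, groups of 300, 100, and 200 MI execute in the order 100, 200, 300, so shorter groups never wait behind longer ones.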
\section{Experiment Setup}
\label{sec:experimentsetup}
CloudSim is an extensible, new-generation simulation platform that enables seamless modeling, simulation, and experimentation of emerging cloud computing infrastructures and management services \cite{buyya2009cloud}\cite{calheiros2009cloudsim}\cite{calheiros2011cloudsim}. We use CloudSim to verify the correctness of the proposed algorithm and to simulate a heterogeneous resource and communication environment. The layered CloudSim architecture mainly contains four layers: SimJava, GridSim, CloudSim, and user code \cite{calheiros2009cloudsim}.

Cloud resources are characterized by parameters such as resource ID, processing elements, processing capability (MIPS), bandwidth, RAM, and cost, as shown in TABLE \ref{characteristic}.
\begin{table}[htbp]
%\setlength{\abovecaptionskip}{0pt}
%\setlength{\belowcaptionskip}{-10pt}
\centering
\caption{Characteristics of cloud resources for the simulation}
\label{characteristic}
%\renewcommand\arraystretch{0.85}
\begin{tabular}{|p{1.2cm}<{\centering}|p{2.2cm}<{\centering}|p{0.9cm}<{\centering}|p{0.9cm}<{\centering}|p{1.3cm}<{\centering}|}
\hline
\textbf{Resource ID}	&	\textbf{Processing capability (MIPS)}	&	\textbf{Bandwidth}	&	\textbf{Memory}	&	\textbf{Cost per bandwidth}	\\ 	\hline
R1	&	165	&	885	&	2GB	&	0.005	\\ 	\hline
R2	&	257	&	971	&	2GB	&	0.005	\\ 	\hline
R3	&	183	&	1100	&	2GB	&	0.005	\\ 	\hline
R4	&	250	&	1264	&	2GB	&	0.005	\\ 	\hline
R5	&	252	&	700	&	2GB	&	0.005	\\ 	\hline
R6	&	201	&	802	&	2GB	&	0.005	\\ 	\hline
R7	&	179	&	736	&	2GB	&	0.005	\\ 	\hline
R8	&	254	&	1097	&	2GB	&	0.005	\\ 	\hline
R9	&	239	&	771	&	2GB	&	0.005	\\ 	\hline
R10	&	212	&	1023	&	2GB	&	0.005	\\ 	\hline
\end{tabular}
\end{table}


In this experiment, 10 time-shared cloud resources are created, each allocated to one virtual machine with different characteristics as shown in TABLE \ref{characteristic}. The bandwidth and processing capability of resources are the main factors considered; the cost of resources also plays a significant part, as it is used to evaluate the total processing cost and verify the superiority of the proposed algorithm. All tasks are submitted to the datacenter broker. The processing capability (MIPS) of resources is defined as 200 MI with a random variation of -30\% to 30\%, and the bandwidth is defined as 1,000 with the same variation. In CloudSim, tasks are packaged as cloudlets, which contain the job requirement (MI), the sizes of the job input and output data (in bytes), and other execution-related parameters used when tasks are deployed to resources by the broker. These cloudlets simulate cloud-based application services such as content transfer and social networking. The complexity of each application is described by its computational requirement: each application has a pre-defined processing requirement inherited from cloudlets and an amount of data transfer related to its input and output file sizes \cite{calheiros2009cloudsim}. To make the simulated tasks resemble real jobs, we generate task processing requirements from a Gaussian distribution (ensuring they are larger than 1 MI) and assign task file sizes and output file sizes using a random function. Both the file size and the output file size of tasks are defined as 100 bytes with a random variation of -30\% to 30\%.
The number of tasks (cloudlets) varies from 1,000 to 7,000 with a step size of 1,000. The granularity size varies from 10 seconds to 30 seconds with a step size of 5 seconds. The choice of granularity size influences the experimental evaluation; it is therefore an important factor in the task grouping algorithm for achieving minimum task turnaround time and maximum cloud resource utilization.
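The workload generation described above can be sketched as follows; the Gaussian mean and standard deviation (200 and 60 MI) are assumptions for illustration, since the text fixes only the 1 MI lower bound and the -30\% to 30\% size variation:

```python
import random

def make_cloudlets(n, mean_mi=200, std_mi=60, seed=42):
    """Generate n simulated cloudlets: Gaussian processing requirement
    (clamped to >= 1 MI) and file/output sizes of 100 bytes with a
    uniform -30%..+30% variation."""
    rng = random.Random(seed)
    cloudlets = []
    for _ in range(n):
        mi = max(1.0, rng.gauss(mean_mi, std_mi))
        file_size = 100 * (1 + rng.uniform(-0.3, 0.3))
        out_size = 100 * (1 + rng.uniform(-0.3, 0.3))
        cloudlets.append({"mi": mi, "file": file_size, "out": out_size})
    return cloudlets
```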
\section{Results}
\label{sec:results}
\subsection{Experimental Results}
For the simulation, we focus on the average waiting time, the total processing time, and the processing cost.

(1) The waiting time and total finishing time are computed in seconds.

The execution time of a grouped task is given as follows:
\begin{equation}
\label{exectime}
T_{exec}^i = T_{comp}^i + T_{comm}^i
\end{equation}

where

$T_{exec}^i$ denotes the execution time of the $\mbox{i}^{th}$ grouped task;

$T_{comp}^i$ denotes the computation time of the $\mbox{i}^{th}$ grouped task;

$T_{comm}^i$ denotes the communication time of the $\mbox{i}^{th}$ grouped task.

The turnaround time formula is given as follows:

Turnaround time = resource waiting time + task processing requirement / resource processing capability

\begin{equation}
\label{proctime}
{T_{proc}} = {T_{group}} + {T_{sub}} + {T_{exec}} + {T_{rece}}
\end{equation}

where

$T_{proc}$ denotes the total processing time (turnaround time);

$T_{group}$ denotes the task grouping time;

$T_{sub}$ denotes the time to submit all groups to resources;

$T_{exec}$ denotes the task execution time;

$T_{rece}$ denotes the time by which all executed tasks have been received by users.
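As a sanity check, Eq.~(\ref{exectime}), the turnaround expression, and Eq.~(\ref{proctime}) can be computed directly (a hypothetical Python sketch; variable names are ours):

```python
def execution_time(t_comp, t_comm):
    # Eq. (1): T_exec = T_comp + T_comm for one grouped task
    return t_comp + t_comm

def turnaround(wait, mi, mips):
    # turnaround = resource waiting time + processing requirement / capability
    return wait + mi / mips

def total_processing_time(t_group, t_sub, exec_times, t_rece):
    # Eq. (2): T_proc = T_group + T_sub + T_exec + T_rece,
    # with T_exec taken as the sum over all grouped tasks
    return t_group + t_sub + sum(exec_times) + t_rece
```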

Simulations with different numbers of cloudlets are performed to analyze and compare the scheduling algorithms.
TABLE \ref{definitiondiff} lists the scheduling algorithms used in this experiment. Fig.\ref{doublefig} shows the average waiting time for different numbers of tasks under different granularity sizes, and TABLE \ref{improvement} presents the improvement ratio of waiting time.
\begin{table}[h]
%\setlength{\abovecaptionskip}{0pt}
%\setlength{\belowcaptionskip}{-10pt}
\centering
\caption{Definitions of the different scheduling algorithms}
\label{definitiondiff}
\begin{tabular}{|p{2.6cm}<{\centering}|p{4.8cm}|}
\hline
\multicolumn{2}{|c|}{\textbf{definitions in different scheduling algorithms}} \\
\hline

 \begin{center}
 Task grouping
\end{center}&
	task grouping scheduling algorithm without any additional constraints on resource processing capability or task processing requirement awareness	\\ 	\hline

 \begin{center}
 Grouping with SJF
\end{center}&
	task grouping scheduling algorithm integrated with the SJF (shortest-job-first) scheduling algorithm	\\ 	\hline

\begin{center}
Grouping with SJF and bandwidth
\end{center}
&	task grouping scheduling algorithm integrated with the SJF scheduling algorithm and with bandwidth awareness	\\ 	\hline

\begin{center}
group on res.
\end{center}
&	task grouping scheduling algorithm with resource processing capability awareness	\\ 	\hline

\begin{center}
group on len.
\end{center}
&	task grouping scheduling algorithm with task processing requirement awareness	\\	\hline

\begin{center}
group on res. and len.
\end{center}
&	task grouping scheduling algorithm with both resource processing capability and task processing requirement awareness	\\ 	\hline
\end{tabular}
\end{table}
\begin{figure*}
\setlength{\abovecaptionskip}{0pt}
\setlength{\belowcaptionskip}{-10pt}
\centering
\includegraphics[width= 11.5cm]{1020waitsmafi.pdf}
\caption{Average waiting time with different number of tasks}
\label{doublefig}
\end{figure*}
\begin{table}[!htbp]
%\setlength{\abovecaptionskip}{0pt}
%\setlength{\belowcaptionskip}{-10pt}
\centering
\caption{Improvement ratio of waiting time}
\label{improvement}
\begin{tabular}{|p{0.84cm}<{\centering}|p{0.9cm}<{\centering}|p{1.15cm}<{\centering}|p{1.0cm}<{\centering}|p{1.15cm}<{\centering}|p{1.15cm}<{\centering}|}
\hline
\multicolumn{6}{|p{8.15cm}<{\centering}|}{\textbf{Improvement ratio of waiting time compared with task grouping (granularity size = 10 seconds)}} \\
\hline
No. of cloudlets  (tasks)	&	Grouping with SJF	&	Grouping with SJF and bandwidth	&	Group on res.	&	Group on len.	&	Group on res. and len.	\\	\hline
7000	&	26.56\%	&	30.48\%	&	\emph{-5.60\%}	&	\emph{-16.78\%}	&	\emph{-16.12\%}	\\	\hline
6000	&	26.07\%	&	29.54\%	&	\emph{-8.64\%}	&	\emph{-17.60\%}	&	\emph{-18.94\%}	\\	\hline
5000	&	25.84\%	&	31.34\%	&	\emph{-8.41\%}	&	\emph{-20.38\%}	&	\emph{-19.72\%}	\\	\hline
4000	&	26.51\%	&	31.41\%	&	\emph{-7.45\%}	&	\emph{-20.73\%}	&	\emph{-19.38\%}	\\	\hline
3000	&	26.79\%	&	30.91\%	&	\emph{-8.30\%}	&	\emph{-19.61\%}       &	\emph{-19.12\%}	\\	\hline
2000	&	28.25\%	&	32.48\%	&	\emph{-4.45\%}	&	\emph{-17.64\%}	&	\emph{-17.41\%}	\\	\hline
1000	&	27.29\%	&	33.52\%	&	\emph{-6.76\%}	&	\emph{-20.86\%}	&	\emph{-22.41\%}	\\	\hline
\multicolumn{6}{|p{8.15cm}<{\centering}|}{\textbf{Improvement ratio of waiting time compared with task grouping (granularity size = 20 seconds)}} \\
\hline
No. of cloudlets  (tasks)	&	Grouping with SJF	&	Grouping with SJF and bandwidth	&	Group on res.	&	Group on len.	&	Group on res. and len.	\\	\hline
7000	&	13.29\%	&	14.62\%	&	\emph{-1.96\%}	&	\emph{-82.41\%}	&	\emph{-81.78\%}       \\        \hline
6000	&	13.2\%	&	15.32\%	&	\emph{-2.39\%}	&	\emph{-77.91\%}       &	\emph{-77.47\%}	\\	\hline
5000	&	12.85\%	&	11.95\%	&	\emph{-1.34\%}	&	\emph{-93.53\%}	&	\emph{-92.02\%}	\\	\hline
4000	&	12.19\%	&	12.49\%	&	\emph{-6.63\%}	&	\emph{-87.29\%}	&	\emph{-87.36\%}	\\	\hline
3000	&	12.65\%	&	15.85\%	&	5.25\%	&	\emph{-75.32\%}	&	\emph{-74.73\%}	\\	\hline
2000	&	13.72\%	&	14.41\%	&	0.78\%	&	\emph{-75.75\%}	&	\emph{-76.06\%}	\\	\hline
1000	&	10.07\%	&	10.25\%	&	\emph{-16.29\%}&	\emph{-109.13\%}&	\emph{-111.25\%}	\\	\hline
\end{tabular}
\end{table}

When the granularity size is 10 seconds, the waiting time of task grouping integrated with SJF and bandwidth awareness is the smallest regardless of the number of cloudlets, and the waiting time of task grouping with SJF is the second smallest. According to TABLE \ref{improvement}, compared with the pure task grouping algorithm, the waiting time of task grouping integrated with SJF and bandwidth awareness decreases by 31.38\% on average, while that of task grouping with SJF decreases by 26.75\%; the difference between the two is about 4.63\%. As the number of cloudlets decreases, the curves of these two algorithms nearly overlap. This indicates that when the number of tasks is small, the processing requirements of tasks and the processing capabilities of resources are the significant factors determining turnaround time, whereas when the number of tasks is very large, network bandwidth plays an important role in task scheduling because of network delays and communication overheads. The waiting time of traditional task grouping is the third smallest and much larger than those of the two algorithms above, which shows that task scheduling integrated with SJF can effectively reduce waiting time.
Waiting time also affects deadlines: the smaller the waiting time, the more likely tasks are to meet their deadline constraints. However, task grouping with resource processing capability awareness or task processing requirement awareness takes much more execution time. Compared with the pure task grouping algorithm, the waiting time of grouping with processing capability awareness increases by 7.08\%, and that of grouping with processing requirement awareness increases by about 19.08\%. When task processing requirements are taken into account, the grouping method consumes a large amount of time in grouping: since tasks are sorted by processing requirement in descending order, tasks with large processing requirements are difficult to group together with others, so each such task can only form an individual group by itself.

When the granularity size is 20 seconds, the waiting times of task grouping with SJF and bandwidth awareness and of task grouping with SJF are almost the same. Increasing the granularity size increases the processing capacity available within each period, and under this circumstance network latency hardly affects task processing.

Fig.\ref{processing} shows the total processing time for different numbers of tasks, and TABLE \ref{improvementpro} depicts the improvement ratio of processing time compared with task grouping. As expected, the turnaround time decreases as the number of tasks decreases. Compared with the pure task grouping scheduling algorithm, task grouping with SJF and bandwidth awareness achieves the smallest turnaround time, decreased by 7.25\% on average, while grouping with SJF decreases processing time by 3.53\%. As the number of tasks decreases, the improvement ratio of turnaround time generally increases smoothly, with a rise when the number of tasks equals 5,000 or 4,000. When the granularity size is 10 seconds, the communication rate (bandwidth) significantly influences task execution performance; the bandwidth bottleneck should therefore be taken into account when the granularity size is not very large.
\begin{table}
%\setlength{\abovecaptionskip}{0pt}
%\setlength{\belowcaptionskip}{-10pt}
\centering
\caption{Improvement ratio of processing time}
\label{improvementpro}
\begin{tabular}{|p{1.9cm}<{\centering}|p{1.8cm}<{\centering}|p{3.0cm}<{\centering}|}
\hline
\multicolumn{3}{|p{7.6cm}<{\centering}|}{\textbf{Improvement ratio of processing time compared with task grouping (granularity size = 10 seconds)}} \\
\hline
\textbf{No. of cloudlets (tasks)}	&	\textbf{Grouping with SJF}	&	\textbf{Grouping with SJF and bandwidth}	\\	\hline
7000	&	2.97\%	&	6.13\%	\\	\hline
6000	&	2.65\%	&	5.31\%	\\	\hline
5000	&	3.70\%	&	8.23\%	\\	\hline
4000	&	3.80\%	&	8.22\%	\\	\hline
3000	&	2.19\%	&	5.71\%	\\	\hline
2000	&	4.64\%	&	7.76\%	\\	\hline
1000	&	4.78\%	&	9.36\%	\\	\hline
\end{tabular}
\end{table}
\begin{figure}
\setlength{\abovecaptionskip}{0pt}
\setlength{\belowcaptionskip}{-10pt}
\centering
\includegraphics[width= 8.1cm]{processingtimemsmafi.pdf}
\caption{Total processing time with different number of tasks}
\label{processing}
\end{figure}

Fig.\ref{averagediff} presents the average waiting time for different granularity sizes. When the granularity size is less than 35 seconds, the average waiting times of the experimental scheduling algorithms differ. For both 5,000 and 7,000 tasks, the average waiting time of task grouping with task processing requirement awareness is the largest, while that of grouping integrated with SJF and bandwidth awareness is the smallest. Meanwhile, the curves of grouping with task processing requirement awareness and of grouping with both processing capability and task processing requirement awareness nearly overlap; this implies that when task processing requirements are the primary constraint, sorting resources by processing capability has almost no effect on scheduling and processing, especially when the granularity size is very large, where the two curves grow even closer. The average waiting times of these two algorithms are both smaller than that of the pure task grouping scheduling algorithm. We can thus see that even as the granularity size changes, grouping with SJF and grouping with SJF and bandwidth awareness still perform better, achieving the minimum turnaround time.
\begin{figure*}
\setlength{\abovecaptionskip}{0pt}
\setlength{\belowcaptionskip}{-10pt}
\centering
\includegraphics[width= 11.5cm]{70005000waitsmafi.pdf}
\caption{Average waiting time with different granularity size}
\label{averagediff}
\end{figure*}

(2) The processing cost is calculated from the actual CPU time consumed when tasks complete execution on the cloud resources and the cost of resources per second.

The processing cost formula is given as follows:

Processing cost = input data transfer cost + processing cost + output data transfer cost

\begin{equation}
\label{cost}
{C_{proc}} = \sum\limits_{i = 0}^n {T_{exec}^i} *{C_{reso}}
\end{equation}

where

$C_{proc}$ denotes the processing cost;

$\sum\limits_{i = 0}^n {T_{exec}^i}$ denotes the total execution time of the tasks (n: the number of tasks);

$C_{reso}$ denotes the cost of resources per second.
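Eq.~(\ref{cost}) is a straightforward weighted sum; a hypothetical Python sketch:

```python
def processing_cost(exec_times, cost_per_sec):
    """Eq. (3): C_proc = (sum over i of T_exec^i) * C_reso,
    i.e., total execution time multiplied by the per-second resource cost."""
    return sum(exec_times) * cost_per_sec
```

For example, three grouped tasks with execution times of 10, 20, and 30 seconds on a resource costing 0.005 per second yield a processing cost of 0.3.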

Fig.\ref{processcostgran} describes the processing cost for different granularity sizes, and Fig.\ref{processcosttask} presents the processing cost for different numbers of tasks. In Fig.\ref{processcosttask}, as the number of tasks changes, all the experimental scheduling algorithms show similar trends and differences. Between 1,000 and 7,000 tasks, the processing costs of task grouping, task grouping combined with SJF, and task grouping integrated with SJF and bandwidth awareness are nearly the same and smaller than those of the other three algorithms; only when the number of tasks exceeds 4,000 is the processing cost of task grouping integrated with SJF and bandwidth awareness marginally the smallest. Processing cost determines scheduling cost. We can clearly observe that when the processing capability constraint is prioritized, the processing cost increases compared with pure task grouping, and when the task processing requirement constraint is prioritized, the processing cost is the largest in this experiment: prioritizing these constraints makes the task grouping algorithm spend much more time on grouping and execution, which enlarges the processing cost. Because task processing requirements follow a normal distribution, grouping takes much more time when some tasks have relatively large requirements; adopting this model, however, makes the experiments closer to a practical cloud environment. In Fig.\ref{processcostgran}, when the granularity size is smaller than 30 seconds, the processing cost decreases significantly as the granularity size increases, while beyond 30 seconds the curves are almost flat and further enlarging the granularity size does not change the processing cost. Granularity size is therefore a factor affecting processing cost, and choosing an appropriate granularity size is crucial.
\begin{figure*} [t]
\setlength{\abovecaptionskip}{0pt}
\setlength{\belowcaptionskip}{-10pt}
\centering
\includegraphics[width= 11.5cm]{processingcostmsmafi.pdf}
\caption{Processing cost with different granularity size}
\label{processcostgran}
\end{figure*}
\begin{figure*} [!htbp]
\setlength{\abovecaptionskip}{0pt}
\setlength{\belowcaptionskip}{-10pt}
\centering
\includegraphics[width= 11.5cm]{prosscostmsmafilarge.pdf}
\caption{Processing cost with different number of tasks}
\label{processcosttask}
\end{figure*}

The experiments show that the proposed task grouping algorithm integrated with SJF and bandwidth awareness can effectively minimize waiting time and processing time and reduce processing cost, achieving maximum resource utilization and minimum overhead.
\subsection{Discussion}
In this work, we focus on simulated experiments, and therefore all experiments are conducted within a simulation framework. Each experiment is repeated 100 times, and the figures show the average results over those runs. Although the simulation settings approximate reality, the resources and tasks are still simulated rather than real data. In the future, to obtain more rigorous and realistic results, we will collect real data sets and adopt a real cloud-based environment. Further experiments will be carried out with other heuristic scheduling methods such as genetic algorithms \cite{sivanandam2007introduction} and ant algorithms \cite{chang2009ant}. It is also important to evaluate the proposed solution in a physical run-time environment to observe its true efficacy for cloud-based systems.
\section{Conclusion}
\label{sec:conclusion}

To enhance scheduling capability in cloud-computing-based software systems, simulations are used to evaluate different approaches under various run-time scenarios in a cloud environment. This study proposes a task grouping scheduling algorithm combined with shortest-job-first and bandwidth awareness in an attempt to reduce waiting time and the associated processing cost.


Experimental results show improved resource utilization and minimized turnaround time, as well as a reduced impact from the bandwidth bottleneck. Compared with existing task grouping algorithms, the waiting time of the proposed algorithm is reduced by 31.38\%; even in comparison with the task grouping with bandwidth awareness algorithm, the waiting time is reduced by 4.63\%. The proposed scheduling algorithm achieves a minimum waiting time, which also helps tasks meet deadline constraints. Moreover, the processing time of the proposed algorithm is reduced to satisfy clients' real-time demands. The empirical results illustrate that the proposed scheduling policies are a significant improvement and serve as an important baseline for benchmarking and for the future development of scheduling algorithms for cloud-based software applications and systems.



\bibliographystyle{IEEEtran}
\bibliography{IEEEabrv,myreference}
\end{document}




