
\documentclass[conference]{IEEEtran}

% *** CITATION PACKAGES ***
%
\usepackage{cite}
\usepackage{multirow}
\usepackage{array}
\usepackage{tabularx}
\usepackage{color}
%\usepackage{algorithmic}
\usepackage{algorithm}
\usepackage{algpseudocode}
% *** GRAPHICS RELATED PACKAGES ***
\ifCLASSINFOpdf
\usepackage[pdftex]{graphicx}
\else
\usepackage[dvips]{graphicx}
\fi

\begin{document}

\title{An Investigation on Scheduling Policies for Cloud-based Software Systems}

\author{\IEEEauthorblockN{Jia Ru}
\IEEEauthorblockA{\emph{Department of Computing}\\
\emph{The Hong Kong Polytechnic University}\\
\emph{Hong Kong SAR}\\
\emph{Email: csrjia@comp.polyu.edu.hk}}
}

\maketitle
\begin{abstract}
\textit{Background}: The rapid diffusion of cloud computing technology has been a focus of interest for enterprises due to its high scalability, availability and elasticity. Nevertheless, the limited scheduling mechanisms for running applications in the cloud remain a major challenge.
%Since the appearance of cloud computing, with the booming development of IT science and technology both in academic fields and industries, the applications of cloud computing are going on developing, and cloud computing is applying theory to practical applications.
\newline
\textit{Aim}: This project introduces an effective scheduling algorithm that attempts to maximize cloud resource utilization, improve the computation ratio, and reduce makespan, overhead and delay in a cloud-based software system.
%In this project, we will introduce an effective scheduling algorithm, which will maximize the cloud resource utilization, improve the computation ratio, and reduce the makespan, overhead and delay in the cloud-based software system. The secondary aim is at optimising MapReduce framework.
%The main objective in this project is: (1) to optimize MapReduce framework, such as proposing a novel scheduling algorithm that can be used effectively on MapReduce,
%(2) to improve scheduling strategies to maximize the cloud resource utilization, and improve the computation ratio as well as reduce the makespan, overhead and delay in the cloud-based software system.
% The aim is at optimising the MapReduce framework.
\newline
\textit{Method}: (1) Analyze different scheduling algorithms that can be adopted in cloud-based systems and simulate them in CloudSim. (2) Evaluate the performance of these algorithms and identify their advantages and disadvantages. (3) Propose an improved scheduling algorithm or policy and verify it in CloudSim, extending CloudSim where necessary. (4) Study the MapReduce framework and grasp its operating principles. (5) Analyze MapReduce scheduling algorithms. (6) Propose and optimize an efficient scheduling algorithm for MapReduce.
%(4) Testing the proposed scheduling policy in a real cloud environment such as Amazon EC2 or Google. (5) Studying MapReduece framework and grasping its operating principle. (6) Analysing MapReduce scheduling algorithms and proposing or rewriting an efficient scheduling algorithm to MapReduce.
%(5) Analyzing the different existing data center structure and evaluating their performance, advantage and disadvantages. (6) For reference existing and famous data centres structure, proposing a new data centre structure.
\newline
\textit{Conclusion}: The proposed scheduling policies should be effective in increasing the number of completed tasks, reducing costs, and advancing the scheduling environment. On the premise of processing accuracy, the improved MapReduce scheduling policy can raise resource utilization, reduce the workload of nodes and optimize the management of resources.
%Proposed scheduling policies should effectively improve the number of completed tasks, increase profit of service party, reduce cost which is undertaken by service party when accepting tasks, and promote the development of scheduling environment. In the premise of processing accuracy, improved MapReduce scheduling policy can improve the utilization rate of resources, reduce workloads of nodes and optimize management of resources.
\end{abstract}
\begin{IEEEkeywords}
Cloud Computing, CloudSim, MapReduce, Scheduling, Software Metrics
\end{IEEEkeywords}
\IEEEpeerreviewmaketitle

\section{Introduction}
%Since the appearance of cloud computing, with the booming development of IT science and technology both in academic fields and industries, the applications of cloud computing are going on developing, and cloud computing is applying theory to practical applications.
With the further development of and research on cloud computing, some interesting research challenges and problems remain to be overcome, such as the expansibility of data centre network structures, energy conservation, replica policies and scheduling mechanisms \cite{fox2009above}\cite{foster2008cloud}\cite{germain2009convergence}\cite{leiba2009having}. The primary objective of this project is to optimize scheduling policies.
% to overcome this challenge.
%Software engineering for cloud-based software systems is a new domain of research, requiring careful considerations on its characteristics with respect to traditional software development paradigms. The fundamental mechanism of this new kind software system is to schedule the applications to the resources pool which is consisted of hugely distributed computers \cite{xu2011job}.
The aim of cloud computing is to realize cooperative work and resource sharing. However, the resources in cloud-based systems are heterogeneous, dynamic and diversified due to differing client requirements, which makes resource management very complex.
%Therefore, scheduling problem is an important research area in this regard.
%The primary  objective of this project is optimizing scheduling mechanisms  to improve the performance and  utilization of resources. diversity of user requirements
Adopting an appropriate scheduling policy and optimizing scheduling mechanisms to improve performance and resource utilization is therefore a significant research area.
%challenge due to a number of factors.
%This project considers two aspects of scheduling: performance and quality of service (QoS). On scheduling based on performance, completion time and optical makespan is as the final objectives. Typical algorithms contains Min-min, Max-min, Genetic, Ant colony, Greedy and simulated annealing. On scheduling based on QoS, we need to consider users' priority, scheduling deadline, profit of service party and resource risk.
%Therefore, the primary objective of this project is optimizing scheduling polices in order to overcome this challenge.
In traditional distributed environments, scheduling optimization mainly focuses on system performance, such as system throughput and CPU utilization, and rarely considers QoS (Quality of Service). In this project, I emphasize not only resource utilization and system performance, but also a guaranteed QoS for users with different demands.
%Users can choose the resource in the cloud by themselves and according to their own requirements.
%Scheduling is a decision process, and its content is deploying resources to applications of different clients at a suitable time, or during a specific period of time.
%The target of optimizing scheduling policies considers one or two factors that include cost, task completion time, task priority, profit and so on.
On the premise of a guaranteed resource utilization rate, scheduling policies mainly focus on the allocation management of resources and on satisfying all the resource demands of users. This project will propose scheduling policies that effectively improve performance guarantees, such as the task completion rate, while reducing overall costs.
%reduce cost which is undertaken by service party when accepting applications
%and guarantees QoS demand of clients.
Consequently, considering specific criteria (cost, completion time, etc.) and the priorities of tasks and resources, we will propose an effective scheduling algorithm that maximizes cloud resource utilization, improves the computation ratio, and reduces makespan and delay in a cloud-based software system.
%We also both focus on resource allocation and task scheduling, as well as take into account some specific criteria or priorities of tasks and resources to optimize scheduling algorithms, such as resource bandwidth and processing capability, and deadline.

Nowadays, MapReduce, a distributed processing model and high-efficiency task scheduling model designed and developed by Google \cite{dean2008mapreduce}, is a useful tool for processing data-intensive jobs. However, the default MapReduce scheduling algorithms have some drawbacks. For example, a MapReduce cluster has only one JobTracker, which allocates jobs to TaskTrackers and takes charge of scheduling for the entire system. Once the JobTracker fails or crashes, the compute nodes cannot complete their jobs. Moreover, if a large number of jobs are submitted by hundreds of clients and many TaskTrackers are distributed across the cluster, the JobTracker will face a heavy workload, and the efficiency of the entire system will be limited by the execution capability of that single JobTracker. If the network bandwidth of the JobTracker is lower than normal, system performance will also decrease. To solve the problem of single-JobTracker node failure, this project will propose a new scheduling policy, which can be implemented as a pluggable component for MapReduce. The proposed scheduling algorithm involves more than one JobTracker distributed in the cluster to manage resources and schedule jobs to TaskTrackers.

% and GFS (Google File System) as a distributed file system, Hadoop scheduling algorithms are simple and therefore the entire system performance can be influenced.
%Cloud computing is a new software system technology, which allows dynamic resource allocation on consolidated resources using a combination of techniques from parallel computing, distributed computing, as well as platform virtualization technologies \cite{selvarani2010improved}\cite{fox2009above}. The primary objective of this project is to optimize scheduling strategies to improve the performance and utilization in the cloud-based systems.

%Software engineering for cloud-based software systems is a new domain of research, requiring careful considerations on its characteristics with respect to traditional software development paradigms. More importantly the area of effective scheduling run time tasks becomes one of the research focuses. The aim of cloud computing is to realize cooperation work and resource sharing, but different kinds of resources reflect isomerism, dynamic nature and a diversity of user demands. This makes resource management very complex. Therefore, scheduling problem is an important research area in this regard. The utilization rate of huge resources in data centre is related to scheduling mechanisms applied. Adopting what scheduling mechanism and what kind of data centre structure to improve utilization rate of resources is a significant challenge due to a number of factors. The fundamental mechanism of this new kind software system is to schedule the applications to the resources pool which is consisted of hugely distributed computers \cite{xu2011job}.

%Nowadays, MapReduce \cite{dean2008mapreduce} as a distributed processing model and GFS (Google File System) as a distributed file system are designed and developed by Google, which are  very useful tools to process data-intensive jobs. Hadoop is an open source model of MapReduce and GFS, which draws more attention both in industries and academic fields. Hadoop clusters own great horizontal scalability, and simultaneously normal PCs can be adopted on computing nodes in clusters. Therefore, it can significantly reduce the cost of hardware on Hadoop clusters. Moreover, Hadoop has better fault tolerance and usability. Hadoop is also a platform to process large data sets and analyze big data much accurately. However, Hadoop default job scheduling algorithms are not very efficient, and hence Hadoop scheduling algorithms as a pluggable component should be improved to enhance performance of the entire system and framework. It is the proposed work in this project.
\section{Literature review}
\subsection{Concepts of Cloud Computing}
Cloud computing is based on the concept of infrastructure convergence \cite{vaquero2008break} and sharing services \cite{buyya2009cloud}, in an attempt to provide unique types of services through provisioning of dynamically scalable and virtualized resources.
It is a combination of techniques from parallel computing, distributed computing, as well as platform virtualization technologies.
%Cloud computing is both a combination and commercial implementations of parallel computing, distributed computing, and grid computing, as well as a integrated evolution of virtualization, utility computing, IaaS (Infrastructure as a Service), PaaS (Platform as a Service), SaaS (Software as a Service) and so on.
%Cloud computing has been a primary focus in both the research community and the industry over recent years because of its flexibility in software deployments, and of its elasticity capability on resource consolidation.
%Cloud computing is a combination of parallel computing, distributed computing, and grid computing, which is also commercial implementations of all these concepts as well as a combined evolution of virtualization, utility computing, IaaS(Infrastructure as a Service), PaaS(Platform as a Service), SaaS(Software as a Service) and so on \cite{buyya2009modeling}.
%Cloud computing is composed of clusters so its unique characteristic is multiple nodes.
%SaaS provides a fully functional software system ready-to-use for its end-users such as Google Doc, SalesForge. PaaS provides the development platforms for developers, for example using Google App Engine API. IaaS is one of the most popular deployment models and offers the flexibility for developers providing on-demand virtual machines, users are able to deploy their software as in local servers, and a typical example is the cloud offering by Amazon EC2 which is essentially an IaaS cloud service.
\subsection{Scheduling model in Cloud Computing}
Scheduling is a decision process: it assigns resources to the applications of different clients at a suitable time, or over a specific period of time.
%In traditional distributed environment, the aim of optimizing scheduling is mainly focusing on system performance, such as system throughput, CPU utilization rate and almost never considering QoS. In cloud computing environment, we are not only emphasizing resource utilization rate and system performance, but also requiring a guaranteed QoS of users based on different demands. Users can choose the resource in the cloud by themselves and according to their own requirements.
\subsubsection{\textbf{Cloud computing scheduling model}}
The cloud computing scheduling model mainly consists of Clients, a Broker, Resources, Resource suppliers and an Information Service. Fig.\ref{scheduling} shows the structure of the scheduling model \cite{buyya2000economy}. The tasks can be divided into serial applications, parallel applications, parameter-sweep applications, cooperative applications and so on. Different clients use resources at different prices.
%System allows users to set up resource demand and parameter preference.
%Different clients use resources at different prices, which may vary from time to time.
The Broker is an intermediary between clients and resources. It is used to discover resources, select resources, accept tasks, return scheduling results, and exchange information. The Broker supports different scheduling policies, which allocate resources and schedule tasks according to the demands of clients. It is composed of a Job Control Agent, a Schedule Advisor, a Cloud Explorer, a Trade Manager and a Deployment Agent.
\begin{figure} [h]
\setlength{\abovecaptionskip}{0pt}
\setlength{\belowcaptionskip}{-10pt}
\centering
\includegraphics[width= 9cm]{schedulingmodel4.pdf}
\caption{Chart of scheduling model structure}
\label{scheduling}
\end{figure}
\begin{itemize}
  \item Job Control Agent: It is responsible for monitoring jobs in the software system, including schedule generation, job creation and job status, and for communicating with clients and the Schedule Advisor.
  \item Schedule Advisor: It is used to determine resources, allocate available resources which satisfy the demands of clients such as deadline and cost, and to assign jobs.
  \item Cloud Explorer: It communicates with cloud information service to find resources, identifies the list of authorized machines and records resources status information.
  \item Trade Manager: It determines resources access cost and tries to communicate with resources at a low cost under the guidance of schedule advisor.
  \item Deployment Agent: It uses scheduler instructions to activate the execution of tasks and reports the status of task execution to the Job Control Agent at regular intervals.
\end{itemize}
%Information service is mainly used to available resource information. If brokers want to find out appropriate resource, they must make enquiry in information service and gain the resource information which satisfies condition of clients. After this, brokers can only interact with resource providers. If resource providers have new resource to lease, they must register in information service firstly, and then brokers are able to find the resource.
%During the transaction between clients and service providers, service providers register resource information at first. After clients submit tasks to brokers, brokers research resources in information service and deploy tasks to appropriate resources in accordance with corresponding scheduling algorithms. Before execution of tasks, brokers evaluate completion time and cost of tasks. If the time exceeds deadline or the cost is higher than budget of clients, the brokers will deny tasks. If the execution of tasks is accomplished, brokers will return the deployment results to clients and gain relevant profits, otherwise, send error message back to clients.
\subsubsection{\textbf{Basic scheduling methods}}
%Scheduling problem is a NP-Complete problem \cite{fernandez1989allocating}.
Scheduling methods always consider two aspects: the characteristics of tasks and the characteristics of datacenter resources \cite{korkhov2009dynamic}. There are two ways to submit tasks to resources. One is to submit tasks to the resources where the input data is available; the other is to submit them to specific resources selected according to certain criteria \cite{singh2011greedy}.
\subsubsection{\textbf{Resource allocation}}
The resources in cloud computing can be allocated in different ways. The traditional method of task scheduling in a cloud environment uses the direct tasks of clients as the overhead application base \cite{selvarani2010improved}. To maximize resource utilization, the algorithms for resource allocation include FCFS (First Come First Served), SJF (Shortest Job First), priority scheduling, RR (Round Robin) scheduling, random, greedy, Genetic Algorithms \cite{sivanandam2007introduction}, Ant Algorithms and other heuristic scheduling methods \cite{choudharydynamic}.
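As a brief illustration of how the ordering policy alone affects waiting time, the following Python sketch contrasts FCFS and SJF on a single resource (the task names and lengths are hypothetical):

```python
# Illustrative sketch: ordering tasks under FCFS and SJF.
# Task lengths are hypothetical (e.g., millions of instructions).

def fcfs(tasks):
    """First Come First Served: keep submission order."""
    return list(tasks)

def sjf(tasks):
    """Shortest Job First: order by ascending task length."""
    return sorted(tasks, key=lambda t: t[1])

def avg_waiting_time(schedule):
    """Mean time each task waits before starting on one resource."""
    waits, clock = [], 0
    for _, length in schedule:
        waits.append(clock)
        clock += length
    return sum(waits) / len(waits)

tasks = [("t1", 8), ("t2", 2), ("t3", 4)]  # (name, length)
print(avg_waiting_time(fcfs(tasks)))  # 6.0
print(avg_waiting_time(sjf(tasks)))   # 2.666...
```

On this toy input, simply reordering the same three tasks cuts the average waiting time by more than half, which is the intuition behind the SJF family of policies.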
%The allocation of resources that need to consider maximum utilization rate of resources are FCFS (First-Come-First-Service), SJF (Shortest-Job-First) scheduling, priority scheduling, RR (Round-Robin) scheduling, random, greedy, Genetic Algorithm \cite{sivanandam2007introduction}, Ant Algorithm and other heuristic scheduling methods. The scheduling of tasks can also be FCFS, SJF, priority-based, RR, job grouping and so on \cite{choudharydynamic}.
%\textcolor{red}{Scheduling algorithms choose a task to be performed and corresponding resource in which the task will be executed. According to different characters of resource such as bandwidth, processing capabilities, cost,load balancing, and so on, as well as base on clients requirements of deployment, for example, deadline, profit, cost, priority and so forth, choosing a suitable resource to a task is very significant.}
\subsection{Scheduling mechanism}
\subsubsection{\textbf{Scheduling policy based on replica}}
Replication is a hot topic in cloud computing; it mitigates single-point failure of storage objects, poor fault tolerance, poor access performance and so on. The K-means algorithm \cite{beckmann1990r} is a typical dynamic clustering algorithm that refines its assignments iteratively; its principle is based on a sum-of-squares criterion.
\subsubsection{\textbf{Online scheduling problem}}
According to scheduling time, task scheduling can be divided into two models: batch scheduling and online scheduling. In online scheduling, as soon as a task arrives, it is scheduled to a free resource and performed immediately; if all the resources are busy, the task must wait. In existing cloud computing environments, online scheduling policies mainly focus on the distribution management of resources and on satisfying the various demands of clients.
However, they do not pay enough attention to supporting services.
According to which part of the problem is given online, online scheduling can be classified into four models: scheduling jobs one by one, unknown running times, jobs arriving over time, and interval scheduling \cite{sgall1998line}.
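The first of these models, scheduling jobs one by one, can be illustrated with a greedy list-scheduling sketch in Python: each arriving job is immediately placed on the currently least-loaded resource, without knowledge of future jobs (the job sizes and resource count are illustrative):

```python
# Illustrative sketch of the "scheduling jobs one by one" online model:
# each arriving job is placed on the least-loaded resource at that moment.

def online_list_schedule(job_sizes, n_resources):
    loads = [0] * n_resources      # current load of each resource
    assignment = []                # which resource each job was sent to
    for size in job_sizes:
        i = loads.index(min(loads))  # least-loaded resource right now
        loads[i] += size
        assignment.append(i)
    return loads, assignment

loads, assignment = online_list_schedule([3, 5, 2], 2)
print(loads)       # [5, 5]
print(max(loads))  # makespan of the online schedule: 5
```

Because each decision is made before later jobs are seen, the resulting makespan can be worse than an offline optimum, which is exactly the trade-off the online models above formalize.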
%For on-line scheduling the most important classification of the on-line problems according to which part of the problem is given on-line \cite{sgall1998line}: (1) Scheduling jobs one by one, (2) Unknown running time, (3) Jobs arrive over time (4) Interval scheduling. different part of problems given online
\subsubsection{\textbf{Batch scheduling problem}}
In batch scheduling, when a task arrives, it does not receive service immediately but is collected into a task set. At some specific time, or after an event, all the tasks in the set are scheduled to resources. Min-min is a typical batch scheduling algorithm: it repeatedly assigns the task with the minimum expected completion time to the resource that achieves that minimum. It can improve system throughput and enable task sets to achieve a shorter overall completion time.
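A minimal Python sketch of the Min-min idea, assuming a matrix of expected completion times (all values are illustrative):

```python
# Hedged sketch of Min-min batch scheduling.
# ect[i][j] = expected execution time of task i on resource j (illustrative).

def min_min(ect):
    """Each round, pick the task whose best achievable completion time
    is smallest, assign it there, then update resource ready times."""
    n_tasks, n_res = len(ect), len(ect[0])
    ready = [0.0] * n_res              # when each resource becomes free
    unassigned = set(range(n_tasks))
    schedule = {}                      # task -> resource
    while unassigned:
        # for each task, its best (completion time, resource) right now
        best = {t: min((ready[r] + ect[t][r], r) for r in range(n_res))
                for t in unassigned}
        task = min(sorted(unassigned), key=lambda t: best[t][0])
        finish, res = best[task]
        schedule[task] = res
        ready[res] = finish
        unassigned.remove(task)
    return schedule, max(ready)        # assignment and makespan

schedule, makespan = min_min([[3, 5], [1, 2], [4, 1]])
print(schedule, makespan)
```

On this toy instance the short tasks are placed first on their fastest resources, and the makespan is determined by the remaining long task, matching the batch behaviour described above.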
\subsubsection{\textbf{MapReduce}}
%MapReduce is not only a programming model but also a high-efficiency scheduling model for processing, excavating and analyzing large data sets, and the name of an implementation of the model by Google \cite{dean2008mapreduce}.
%MapReduce as a very popular cloud computing programming model is used on clusters of computers in distributed computing widely. MapReduce system adopted in clusters of commodity machines in distributed computing
The MapReduce system partitions the input data and schedules the execution of programs across clusters of commodity machines. Users specify a map function that processes a key/value pair to generate a set of intermediate key/value pairs, and a reduce function that merges all intermediate values associated with the same intermediate key.
The functional model with user-specified map and reduce operations allows us to parallelize large computations easily and to use re-execution as the primary mechanism for fault tolerance \cite{foster2008cloud}\cite{dean2008mapreduce}. %\subsection{Open Source Framework-Hadoop}
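This functional model can be illustrated with a minimal word-count sketch in Python, in which map emits intermediate key/value pairs, a shuffle step groups them by key, and reduce merges the values for each key (the example documents are illustrative):

```python
# Minimal word-count sketch of the MapReduce model described above.
from collections import defaultdict

def map_fn(document):
    """Map: emit an intermediate (word, 1) pair for every word."""
    for word in document.split():
        yield (word, 1)

def reduce_fn(word, counts):
    """Reduce: merge all values associated with the same key."""
    return (word, sum(counts))

def map_reduce(documents):
    groups = defaultdict(list)            # shuffle: group values by key
    for doc in documents:
        for key, value in map_fn(doc):
            groups[key].append(value)
    return dict(reduce_fn(k, v) for k, v in groups.items())

print(map_reduce(["cloud scheduling", "cloud computing"]))
# {'cloud': 2, 'scheduling': 1, 'computing': 1}
```

Because each map call and each reduce call is independent, a runtime can partition the documents and the intermediate keys across machines and simply re-execute failed calls, which is the fault-tolerance mechanism cited above.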
%Hadoop is open source of Google MapReduce \cite{white2012hadoop}, and as a basic distributed framework it is used to develop distributed or parallel program, even if bottom layer of distributed infrastructure is not known in detail. Hadoop clusters which are compose of normal PCs as computing nodes own great horizontal scalability, high expandability, low cost, high efficiency, well reliability, high-performance computing capability and storage.
%%as follow:
%%\begin{itemize}
%%  \item high expandability: reliably store and process big data which size is petabytes-level
%%  \item low cost: through clusters constituted by normal PCs to distribute and process data
%%  \item high efficiency: through distributing data to different nodes, Hadoop can effectively process data in parallel
%%  \item reliability: automatically maintain and repair data replications; automatically redeploy tasks to nodes after tasks fail to be executed
%%\end{itemize}
%\subsubsection{\textbf{Chart of Hadoop MapReduce cluster architecture}}
%One hadoop cluster which processes a MapReduce task usually contains 4 components. Fig.\ref{Hadoop} shows Hadoop MapReduce architecture.
%\begin{figure}
%\setlength{\abovecaptionskip}{0pt}
%\setlength{\belowcaptionskip}{-10pt}
%\centering
%\includegraphics[width= 9cm]{hadoopmapreduce.pdf}
%\caption{Chart of Hadoop MapReduce architecture}
%\label{Hadoop}
%\end{figure}
%\begin{itemize}
%  \item  Client: It is an interactive interface between users and clusters.
%  \item  JobTracker: It is responsible for scheduling the entire job to execute. Each cluster can only have one JobTracker.
%  \item TaskTracker: It is a real task executer which can implement map tasks and reduce tasks. Each cluster can have a great deal of TaskTrackers.
%  \item  Hadoop Distributed File System (HDFS): It is used to store input/output data.
%\end{itemize}
%\subsubsection{\textbf{Hadoop default scheduling algorithm}}
\begin{itemize}
   \item FIFO (First In First Out) Scheduling algorithm:
   \end{itemize}
  % In the early stage of design of MapReduce,
The JobTracker uses the FIFO scheduling algorithm to schedule clients' MapReduce tasks: all MapReduce tasks are put into a queue and then selected for execution in order of their submission time. Although the FIFO scheduling algorithm is simple, it ignores the differing demands of different tasks, which lowers execution performance and resource utilization.
%  \item Map Reduce tolerant scheduling algorithm in Hadoop cluster: Hadoop uses First In First Out (FIFO) scheduling algorithm to schedule MapReduce tasks of clients, and then puts all the MapReduce tasks into a queue. Finally, according to tasks submission time order, tasks can be chose to be preformed.
  \begin{itemize}
  \item Fair Scheduling algorithm:
  \end{itemize}
  It was first proposed by Facebook. It supports the classification of tasks and allocates different types of resources to the various types of tasks in order to improve performance. Moreover, it can dynamically adjust the number of parallel tasks to make full use of resources.
 The fair scheduler organizes jobs into pools, divides resources fairly between these pools and also allows assigning guaranteed minimum shares to pools. By default, there is a separate pool for each user, so that each user gets an equal share of the cluster. It is also possible to set a job's pool based on a configurable attribute, such as user name or unix group. Within each pool, jobs can be scheduled using either fair sharing or FIFO scheduling \cite{white2012hadoop}. However, this scheduling policy does not consider the actual workload of the task nodes, which can result in workload imbalance: at runtime, the actual workload of a task node is determined by the resource consumption of the running tasks rather than by their number.
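The pool-based division of slots can be sketched as follows. This is an illustrative simplification of fair sharing, not the actual Hadoop implementation; the pool names, demands and slot counts are hypothetical:

```python
# Hedged sketch of the fair-sharing idea: divide a fixed number of task
# slots equally among pools, redistributing slots a pool cannot use.
# Pool names and demands are illustrative.

def fair_shares(total_slots, demands):
    shares = {pool: 0 for pool in demands}
    active = {p for p, d in demands.items() if d > 0}
    remaining = total_slots
    while remaining > 0 and active:
        per_pool = max(remaining // len(active), 1)
        for pool in sorted(active):
            give = min(per_pool, demands[pool] - shares[pool], remaining)
            shares[pool] += give
            remaining -= give
        # pools whose demand is met drop out; their slots are redistributed
        active = {p for p in active if shares[p] < demands[p]}
    return shares

print(fair_shares(12, {"alice": 10, "bob": 3, "batch": 6}))
# {'alice': 5, 'bob': 3, 'batch': 4}
```

Here `bob` is capped at his own demand, and the surplus is shared between the other pools, which is the equal-share-with-redistribution behaviour the fair scheduler aims for.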
   \begin{itemize}
    \item Capacity Scheduling algorithm:
    \end{itemize}
    It was proposed by Yahoo!. It supports the parallel operation of multiple tasks and dynamically allocates resources to increase the efficiency of task execution \cite{white2012hadoop}. The Capacity Scheduler is designed to allow sharing a large cluster while giving each queue a minimum capacity guarantee. When a task is submitted, it is put into a queue; each queue receives some TaskTracker resources, based on its configuration, to process Map and Reduce operations. Available resources can be dynamically allocated to queues with heavy workloads. Within a queue, tasks are executed according to their priorities: a high-priority task is executed first, although the Capacity Scheduler does not support priority preemption. Nevertheless, this scheduling algorithm cannot automatically configure its queues, nor can it select queues by itself.

%and its aim is to enable Hadoop cluster MapReduce programming model to perform different types of tasks in parallel.
%Users can fairly use resources on clusters. Fair Scheduling puts all the submitted tasks together and forms a user pool in which each client can share resources fairly.

%However, the development of Hadoop scheduling algorithms is not very well. Hadoop scheduling algorithms are simple and therefore the entire system performance can be influenced. The weakness of Hadoop scheduling is only existing one JobTracker to schedule jobs. Once a great number of jobs are submitted by hundreds of clients and a lot of TaskTrackers are distributed in clusters, JobTacker will face heavy execution pressure. Once JobTracker fails or crashes, the whole cluster cannot work. Therefore, in this project, we want to propose a new scheduling policy which can be operated and cooperated by some JobTrackers, not only one. Therefore relevant distributed JobTracker scheduling and resource management algorithm will be our future research. We also both focus on resource allocation and task scheduling, as well as take into account some specific criteria or priorities of tasks and resources to optimize scheduling algorithms, such as resource bandwidth and processing capability, and deadline.

\section{Methodology}
The project mainly consists of the following two phases:
\subsection{Phase 1: Analyzing different scheduling policies and improving some scheduling algorithms}
In the initial phase, I will study different scheduling policies such as Greedy, Round Robin, Genetic Algorithms, Ant Algorithms, priority scheduling and other heuristic algorithms. Next, I will evaluate the performance of these algorithms and determine their advantages, disadvantages and applicability.

To simulate all these algorithms, I will adopt the CloudSim framework. CloudSim is an extensible simulation platform that enables seamless modeling, simulation, and experimentation of emerging cloud computing infrastructures and management services \cite{buyya2009cloud}\cite{calheiros2009cloudsim}. CloudSim will be used to verify the correctness of the proposed algorithm and to simulate a heterogeneous resource environment and the communication environment. The layered CloudSim architecture mainly contains four layers: SimJava, GridSim, CloudSim and User code \cite{calheiros2009cloudsim}.
By conducting empirical experiments to examine the various scheduling algorithms that are important to the development of software for cloud platforms, we will propose an improved scheduling algorithm and test and verify its performance in CloudSim. We also need to extend the CloudSim simulator for our experimentation so that parameters and data can be varied easily.

\emph{\textbf{Research progress and relevant work:}}
\begin{itemize}
\item Evaluate and analyze existing algorithms
\end{itemize}
We have evaluated several existing scheduling algorithms, including job grouping-based scheduling, bandwidth-aware job grouping-based
scheduling, the greedy scheduling algorithm, and the sequential (FIFO) algorithm. Fig.~\ref{processingdiff} shows the total processing time of the different algorithms. As the number of cloudlets varies, the sequential (FIFO) algorithm consistently has the largest processing time and the greedy algorithm the smallest.
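As an illustration, the grouping-based policies we evaluated share the structure sketched below. This is a simplified outline based on the standard job grouping approach; the granularity factor and the exact ordering of tasks and resources are assumptions here, and they differ between the plain and bandwidth-aware variants.
\begin{algorithm}[h]
\caption{Job grouping-based scheduling (simplified sketch)}
\begin{algorithmic}[1]
\Require task list $T$, resource list $R$, granularity factor $g$
\For{each resource $r_j \in R$}
    \State $c_j \gets \mathit{MIPS}(r_j) \times g$ \Comment{capacity of one group for $r_j$}
    \State move tasks from $T$ into group $G_j$ while $\sum_{t \in G_j} \mathit{length}(t) \le c_j$
    \State submit $G_j$ to $r_j$ as a single job
\EndFor
\State collect the results of all groups and record the total processing time
\end{algorithmic}
\end{algorithm}
Grouping amortizes scheduling and communication overhead over many small tasks, which is why the grouped variants outperform FIFO in Fig.~\ref{processingdiff}.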
\begin{figure}[h]
\setlength{\abovecaptionskip}{0pt}
\setlength{\belowcaptionskip}{-10pt}
\centering
\includegraphics[width= 7.5 cm]{compareexist.pdf}
\caption{Total processing time of different scheduling algorithms}
\label{processingdiff}
\end{figure}

%According to TABLE \ref{improvementexist}, compared with FIFO algorithm, greedy algorithm processing time is decreased by 22.31\%. Processing time of job grouping and grouping with bandwidth aware is cut down by 12.79\%.
\begin{itemize}
\item The proposed algorithm
\end{itemize}

We have proposed an algorithm that integrates three strategies: task grouping, bandwidth-aware prioritization, and Shortest-Job-First (SJF).
Compared with the traditional grouping-based scheduling algorithm, the proposed algorithm suits both very lightweight jobs and tasks with random, unpredictable processing requirements. It also considers the communication and transmission rates of resources to reduce transmission latency, and the integrated SJF stage reduces task waiting time.
%The proposed algorithm maximizes the utilization of cloud resources, and aims to reduce task waiting time.
Fig.~\ref{processing} shows the total processing time for different numbers of tasks, and
Fig.~\ref{waiting} presents the corresponding average waiting time.
In comparison with existing task grouping algorithms, the results show that the proposed algorithm decreases waiting time significantly, by over 30\%, and decreases processing time by 7.25\%.
The proposed method thus minimizes waiting time and processing time, reduces processing cost to achieve better resource utilization with lower overhead, and mitigates the influence of bandwidth bottlenecks in communication.
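The integration of the three strategies can be outlined as follows. This is a simplified sketch: the grouping granularity factor and tie-breaking rules are implementation details not fixed here.
\begin{algorithm}[h]
\caption{Proposed integrated scheduling algorithm (sketch)}
\begin{algorithmic}[1]
\Require task list $T$, resource list $R$, granularity factor $g$
\State sort $T$ in ascending order of task length \Comment{SJF stage}
\State sort $R$ in descending order of bandwidth \Comment{bandwidth-aware prioritization}
\For{each resource $r_j \in R$}
    \State $c_j \gets \mathit{MIPS}(r_j) \times g$
    \State move the shortest remaining tasks into group $G_j$ while $\sum_{t \in G_j} \mathit{length}(t) \le c_j$
    \State submit $G_j$ to $r_j$
\EndFor
\end{algorithmic}
\end{algorithm}
Sorting tasks by length before grouping is what drives the waiting-time reduction, while assigning the first (largest) groups to the highest-bandwidth resources reduces transmission latency.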

\begin{figure}
\setlength{\abovecaptionskip}{0pt}
\setlength{\belowcaptionskip}{-10pt}
\centering
\includegraphics[width= 7.0 cm]{processingtimemsmafi.pdf}
\caption{Total processing time with different number of tasks}
\label{processing}
\end{figure}

\begin{figure}
\setlength{\abovecaptionskip}{0pt}
\setlength{\belowcaptionskip}{-10pt}
\centering
\includegraphics[width= 7.0 cm]{averwait.pdf}
\caption{Average waiting time with different number of tasks}
\label{waiting}
\end{figure}
%\begin{table}[h]
%\setlength{\abovecaptionskip}{0pt}
%\setlength{\belowcaptionskip}{-10pt}
%\centering
%\caption { improvement ratio of processing time}
%\label{improvementexist}
%\begin{tabular}{|p{1.9cm}<{\centering}|p{1.4cm}<{\centering}|p{2.4cm}<{\centering}|p{1.3cm}<{\centering}|}
%\hline
%\multicolumn{4}{|c|}{\textbf{Improvement ratio of processing time compared with FIFO}} \\
%\hline
%\textbf{No. of cloudlets (tasks)}	&	\textbf{Job Grouping}	&	\textbf{Grouping with bandwidth-aware}	 &	\textbf{Greedy}\\	\hline
%7000	&	11.62\%	&	11.06\%	&	20.17\%	\\	\hline
%6000	&	14.69\%	&	14.12\%	&	24.52\%	\\	\hline
%5000	&	11.16\%	&	10.82\%	&	17.91\%	\\	\hline
%4000	&	16.94\%	&	16.39\%	&	24.82\%	\\	\hline
%3000	&	11.94\%	&	12.77\%	&	23.30\%	\\	\hline
%2000	&	10.39\%	&	11.59\%	&	23.16\%	\\	\hline
%\end{tabular}
%\end{table}

\subsection{Phase 2: Analyzing MapReduce scheduling algorithms and improving or optimizing MapReduce scheduling policies}
%\subsection{Phase 2: Analyzing MapReduce framework and improving the framework}
In this phase, to investigate the MapReduce framework and understand its operating principles, we will conduct empirical experiments on the Hadoop platform. Hadoop is an open-source implementation of Google's MapReduce \cite{white2012hadoop}. A Hadoop cluster consists of commodity PCs serving as computing nodes and offers great horizontal scalability. The Hadoop architecture mainly contains four components: Client, JobTracker, TaskTracker and the Hadoop Distributed File System (HDFS).
%One hadoop cluster which processes a MapReduce task usually contains 4 components. Fig.\ref{Hadoop} shows Hadoop MapReduce architecture.
%\begin{figure}[h]
%\setlength{\abovecaptionskip}{0pt}
%\setlength{\belowcaptionskip}{-10pt}
%\centering
%\includegraphics[width= 9cm]{hadoopmapreduce.pdf}
%\caption{Chart of Hadoop MapReduce architecture}
%\label{Hadoop}
%\end{figure}
%\begin{itemize}
%  \item  Client: It is an interactive interface between users and clusters.
%  \item  JobTracker: It is responsible for scheduling the entire job to execute. Each cluster can only have one JobTracker.
%  \item TaskTracker: It is a real task executer which can implement map tasks and reduce tasks. Each cluster can have a great deal of TaskTrackers.
%  \item  Hadoop Distributed File System (HDFS): It is used to store input/output data.
%\end{itemize}
We will use six PCs to set up a cluster, with one PC acting as the master and the rest as slaves and clients. We will then run scheduling algorithms in Hadoop, evaluate the performance of the different scheduling deployments, and analyze the advantages and drawbacks of these scheduling policies. Finally, we will propose or optimize an efficient MapReduce scheduling algorithm.
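As a concrete illustration of the programming model our experiments target, a MapReduce job is expressed as a map function and a reduce function; the canonical word-count example (a standard textbook sketch, not our proposed policy) is:
\begin{algorithmic}[1]
\Function{Map}{$\mathit{key}$, $\mathit{line}$}
    \For{each word $w$ in $\mathit{line}$}
        \State \textbf{emit} $(w, 1)$
    \EndFor
\EndFunction
\Function{Reduce}{$w$, $\mathit{counts}$}
    \State \textbf{emit} $\bigl(w, \sum \mathit{counts}\bigr)$
\EndFunction
\end{algorithmic}
The JobTracker schedules many such map and reduce tasks across TaskTrackers, which is exactly the decision point our scheduling policy will address.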
 %to improve MapReduce framework.
%Finally, we will improve Hadoop MapReduce framework such as proposing or optimizing an efficient scheduling algorithm.
\section{Discussion and Expected Results}
The proposed algorithm attempts to reduce network delay, maximize the utilization of cloud resources, and simultaneously achieve minimum waiting time. In addition, it can take deadline constraints into account when providing Quality of Service (QoS). The improved MapReduce scheduling policy should increase availability, reduce node load, and optimize resource management.

CloudSim is able to quantify the performance of scheduling on a cloud infrastructure for different application and service models \cite{calheiros2009cloudsim}. In Phase 1, I focus on analyzing different scheduling algorithms; CloudSim is therefore an appropriate benchmark platform for simulated experiments across different cloud deployments. It is necessary to validate my proposed scheduling algorithm and resolve performance bottlenecks before deploying it in a real cloud-based system. The MapReduce model is widely applied in cloud computing by IT companies such as Google, Yahoo, Amazon and Microsoft. Therefore, in Phase 2, I emphasize scheduling policies for MapReduce. Optimizing scheduling algorithms for this modern and widely adopted multitasking and multiplexing paradigm is more rational, practical and applicable in industry. It is also important to evaluate the proposed solution in a real physical run-time environment to observe its true efficacy.
%The proposed MapReduce scheduling algorithm may be used in IT enterprises.
%To enhance the scheduling capability on cloud-based software systems, simulations are used to facilitate the evaluations on different approaches under various run-time scenarios in a cloud environment. In Phase 1, we focus on simulated experiments, and all the experiments including the proposed algorithm are operated in a simulated framework. Although all the hypotheses are close to real, the resources and tasks are still simulative and not real data. Therefore, in Phase 2, to gain more scientific and realistic results, we will collect real data sets and adopt the real cloud-based environment - MapReduce platform. Optimizing a real cloud scheduling algorithm is more meaningful for industries. It is also important to evaluate the proposed solution in a physical real run-time environment to observe its true efficacy for cloud-based systems.
%For improving the scheduling mechanism, an accurate model to describe clients' demands will be developed.
%Experimental results indicate improvements on the utilisation of resources and it is able to minimise turnaround time, and reduces influence on the bottleneck of bandwidth usage.
%Better yet, the processing time of proposed algorithm has also been reduced to satisfy the real-time demand of clients. The empirical results illustrate that the proposed scheduling policies are a significant improvement and serve an important baseline for benchmarking, and for the future development of scheduling algorithms for cloud-based software applications and systems.

%Further experiments will be carried out on other heuristic scheduling methods such as Genetic algorithm and Ant algorithm. It is also important to evaluate the proposed solution in a physical real run-time environment to observe its true efficacy for cloud-based systems.

%and will implement the proposed solution in a real run-time environment for taking into account QoS (Quality of Service) of clients and load balance demands.

%For improving the scheduling mechanism, an accurate model to describe clients' demands will be developed.
%\cite{calheiros2011cloudsim}

\bibliographystyle{IEEEtran}
\bibliography{IEEEabrv,myreferencepro}
\end{document}




