
\documentclass[conference]{IEEEtran}

% *** CITATION PACKAGES ***
%
\usepackage{cite}
\usepackage{multirow}
\usepackage{array}
\usepackage{tabularx}
\usepackage{color}
%\usepackage{algorithmic}
\usepackage{algorithm}
\usepackage{algpseudocode}
% *** GRAPHICS RELATED PACKAGES ***
\ifCLASSINFOpdf
\usepackage[pdftex]{graphicx}
\else
\usepackage[dvips]{graphicx}
\fi

\begin{document}

\title{An Investigation on Scheduling Policies for Cloud-based Software Systems}

\author{\IEEEauthorblockN{Jia Ru}
\IEEEauthorblockA{\emph{Department of Computing}\\
\emph{The Hong Kong Polytechnic University}\\
\emph{Hong Kong SAR}\\
\emph{Email: csrjia@comp.polyu.edu.hk}}
}

\maketitle
\begin{abstract}
\textit{Background}: Since the emergence of cloud computing, and with the booming development of IT science and technology in both academia and industry, cloud applications have continued to evolve and cloud computing has been moving from theory to practice. Research on cloud computing has exposed several important problems that need to be solved, such as data centre network scalability, energy conservation, replica policies, and scheduling mechanisms.
%Since the appearance of cloud computing, with the booming development of IT science and technology and advance by academic and industry, the applications of cloud computing are going on developing, and cloud computing is moving from theory to practice.When researching cloud computing, some important problems appear and need to be solved such as data centre network structure expansibility, energy conservation, replica policies, scheduling mechanism and so on.
\newline
\textit{Aim}: The main objectives of this project are: (1) to optimize the MapReduce framework, for example by proposing a novel scheduling algorithm that can be used effectively with MapReduce,
%data centre structure such as designing routing protocol and algorithms between nodes and
(2) to improve scheduling strategies to maximize cloud resource utilization, improve the computation ratio, and reduce the makespan, overhead, and delay in cloud-based software systems.
\newline
\textit{Method}: (1) Analyzing different scheduling algorithms that can be adopted in cloud-based systems, such as FCFS (First-Come-First-Served), SJF (Shortest-Job-First), priority scheduling, RR (Round-Robin), random, greedy, Genetic Algorithms, and Ant Colony algorithms, and simulating these algorithms in CloudSim, a cloud simulation framework. (2) Evaluating the performance of these algorithms and summarizing their advantages and disadvantages. (3) Proposing an improved scheduling algorithm or policy and verifying it in CloudSim, extending CloudSim where necessary. (4) Studying the MapReduce framework and grasping its operating principles. (5) Analyzing MapReduce scheduling algorithms and proposing or optimizing an efficient scheduling algorithm for MapReduce.
%(4) Testing the proposed scheduling policy in a real cloud environment such as Amazon EC2 or Google. (5) Studying MapReduece framework and grasping its operating principle. (6) Analysing MapReduce scheduling algorithms and proposing or rewriting an efficient scheduling algorithm to MapReduce.
%(5) Analyzing the different existing data center structure and evaluating their performance, advantage and disadvantages. (6) For reference existing and famous data centres structure, proposing a new data centre structure.
\newline
\textit{Conclusion}: The proposed scheduling policies should effectively increase the number of completed tasks, increase the profit of the service provider, reduce the cost borne by the service provider when accepting tasks, and advance the scheduling environment. Under the premise of processing accuracy, the improved MapReduce framework should raise resource utilization, reduce node load, and optimize resource management.
%In the premise of lower proportion between switches and servers, new data centre structure can reduce network cost and energy consuming.
\end{abstract}
\begin{IEEEkeywords}
Cloud Computing, MapReduce, Scheduling, Software Metrics
\end{IEEEkeywords}
\IEEEpeerreviewmaketitle

\section{Introduction}
Cloud computing is a new software system technology that allows dynamic resource allocation on consolidated resources, using a combination of techniques from parallel computing, distributed computing, and platform virtualization \cite{selvarani2010improved}\cite{fox2009above}. Cloud computing has been a primary focus of both the research community and industry in recent years because of its flexibility in software deployment and its elastic capability for resource consolidation. As research on cloud computing advances, several hot research challenges remain to be overcome: data centre network scalability, energy conservation, replica policies, and scheduling mechanisms are all important problems in cloud computing \cite{fox2009above}\cite{foster2008cloud}\cite{germain2009convergence}\cite{leiba2009having}.
%\cite{erickson2009content}\cite{jensen2009technical}
Software engineering for cloud-based software systems is a new domain of research, requiring careful consideration of its characteristics with respect to traditional software development paradigms. More importantly, the effective scheduling of run-time tasks has become one of the research focuses. The aim of cloud computing is to realize cooperative work and resource sharing, but the heterogeneity and dynamic nature of resources, together with diverse user demands, make resource management very complex. Scheduling is therefore an important research problem in this regard. The utilization rate of the huge resource pools in data centres depends on the scheduling mechanisms applied, and deciding which scheduling mechanism and which data centre structure to adopt to improve resource utilization is a significant challenge due to a number of factors. The fundamental mechanism of this new kind of software system is to schedule applications onto a resource pool consisting of massively distributed computers \cite{xu2011job}.

Nowadays, MapReduce \cite{dean2008mapreduce}, a distributed processing model, and GFS (Google File System), a distributed file system, both designed and developed by Google, are very useful tools for processing data-intensive jobs. Hadoop is an open-source implementation of MapReduce and GFS that has drawn much attention in both industry and academia. Hadoop clusters offer great horizontal scalability, and ordinary PCs can be adopted as computing nodes, which significantly reduces the hardware cost of a Hadoop cluster. Moreover, Hadoop provides good fault tolerance and usability, and serves as a platform for processing large data sets and analyzing big data accurately. However, Hadoop's default job scheduling algorithms are not very efficient; since the Hadoop scheduler is a pluggable component, it can be improved to enhance the performance of the entire system and framework. This is the work proposed in this project.
%Nowadays, data centre is not only a site which manages and repairs servers, but also becomes a centre of numerous high performance computers which can compute and store massive data. Cloud computing centre contains hundreds or thousands, even millions of servers, or PCs. It means that many virtual machines are distributed in different regions as clusters in
%cloud computing as well as all the resources are allocated in these virtual machines and its physical structure is scalable. In addition, all these resources are of high heterogeneity. Therefore, designing and developing a new data centre network structure which can satisfies cloud computing unique characteristic-multiple nodes is a considerable problem. A well-structured data centre can promise high scalability and utilization with low cost. Hence, designing a clouding computing data centre structure should focus on expansibility and energy conservation as well as integrate with characteristics of new generation data centre. As reference to existing famous data centres structure, a new data centre structure which will be proposed in this project should meet the characters mentioned above. The data centre structure which we will design and develop should support ten thousand even a million servers, and so it is necessary and feasible to design routing protocol and algorithms between nodes to reduce links among nodes and decrease dependence of high-end switches.

Scheduling is a decision process that deploys resources to the applications of different clients at a suitable time, or over a specific period of time. Scheduling optimization typically targets one or more factors, including cost, task completion time, task priority, and profit. Under the premise of a guaranteed resource utilization rate, scheduling policies mainly focus on resource allocation management and on satisfying all the resource demands of users. Ultimately, the scheduling policies we propose should effectively increase the number of completed applications, increase the profit of the service provider, reduce the cost borne by the service provider when accepting applications, and guarantee the QoS (Quality of Service) demands of clients.

\section{Literature review}
\subsection{Concepts of Cloud Computing}
Cloud computing is based on the concept of infrastructure convergence and sharing services, in an attempt to provide unique types of services through provisioning of dynamically scalable and virtualized resources \cite{vaquero2008break}\cite{buyya2009cloud}.
Cloud computing is both a combination and a commercial implementation of parallel computing, distributed computing, and grid computing, as well as an integrated evolution of virtualization, utility computing, IaaS (Infrastructure as a Service), PaaS (Platform as a Service), SaaS (Software as a Service), and so on \cite{buyya2009modeling}.
%Cloud computing is a combination of parallel computing, distributed computing, and grid computing, which is also commercial implementations of all these concepts as well as a combined evolution of virtualization, utility computing, IaaS(Infrastructure as a Service), PaaS(Platform as a Service), SaaS(Software as a Service) and so on \cite{buyya2009modeling}. 
%Cloud computing is composed of clusters so its unique characteristic is multiple nodes.
%SaaS provides a fully functional software system ready-to-use for its end-users such as Google Doc, SalesForge. PaaS provides the development platforms for developers, for example using Google App Engine API. IaaS is one of the most popular deployment models and offers the flexibility for developers providing on-demand virtual machines, users are able to deploy their software as in local servers, and a typical example is the cloud offering by Amazon EC2 which is essentially an IaaS cloud service.
\subsection{Scheduling model in Cloud Computing}
In traditional distributed environments, scheduling optimization mainly focuses on system performance, such as system throughput and CPU utilization, and rarely considers QoS. In a cloud computing environment, we emphasize not only resource utilization and system performance, but also guaranteed QoS for users with different demands. Users can choose cloud resources by themselves, according to their own requirements.
\subsubsection{\textbf{Cloud computing scheduling model}}
The cloud computing scheduling model mainly consists of Clients, a Broker, Resources, Resource suppliers, and an Information Service. Fig.~\ref{scheduling} shows the scheduling model structure \cite{buyya2000economy}. The tasks that users need to run can usually be classified as serial applications, parallel applications, parameter-sweep applications, cooperative applications, and so on. The system allows users to set resource demands and parameter preferences. Different clients use resources at different prices, which may vary over time. The Broker is the intermediary between clients and resources: it discovers and selects resources, accepts tasks, returns scheduling results, and exchanges information between clients and resources. The Broker supports different scheduling policies, which allocate resources and schedule tasks according to the demands of clients. The Broker comprises a Job Control Agent, a Schedule Advisor, an Explorer, a Trade Manager, and a Deployment Agent.
\begin{figure*}[t]
\setlength{\abovecaptionskip}{0pt}
\setlength{\belowcaptionskip}{-10pt}
\centering
\includegraphics[width= 11cm]{schedulingmodel4.pdf}
\caption{Chart of scheduling model structure}
\label{scheduling}
\end{figure*}
\begin{itemize}
  \item Job Control Agent: It is responsible for monitoring jobs in the software system, including schedule generation, job creation, and job status, and for communicating with clients and the Schedule Advisor.
  \item Schedule Advisor: It determines and allocates available resources that satisfy client demands, such as deadline and cost, and allocates jobs to them.
  \item Cloud Explorer: It communicates with the cloud information service to discover resources, identifies the list of authorized machines, and records resource status information.
  \item Trade Manager: It determines resource access costs and tries to acquire resources at low cost under the guidance of the Schedule Advisor.
  \item Deployment Agent: It uses scheduler instructions to activate the execution of tasks and sends execution status back to the Job Control Agent at regular intervals.
\end{itemize}
%Information service is mainly used to available resource information. If brokers want to find out appropriate resource, they must make enquiry in information service and gain the resource information which satisfies condition of clients. After this, brokers can only interact with resource providers. If resource providers have new resource to lease, they must register in information service firstly, and then brokers are able to find the resource.
%During the transaction between clients and service providers, service providers register resource information at first. After clients submit tasks to brokers, brokers research resources in information service and deploy tasks to appropriate resources in accordance with corresponding scheduling algorithms. Before execution of tasks, brokers evaluate completion time and cost of tasks. If the time exceeds deadline or the cost is higher than budget of clients, the brokers will deny tasks. If the execution of tasks is accomplished, brokers will return the deployment results to clients and gain relevant profits, otherwise, send error message back to clients.
\subsubsection{\textbf{Basic scheduling methods}}
%Scheduling problem is a NP-Complete problem \cite{fernandez1989allocating}.
Scheduling methods always consider two aspects: the characteristics of the tasks and the characteristics of the datacenter resources \cite{korkhov2009dynamic}. Tasks are submitted to resources that are free and where the input data is available, or alternatively to specific resources selected according to some criteria \cite{singh2011greedy}.
\subsubsection{\textbf{Resource allocation}}
Resources in cloud computing can be allocated in many different ways. The traditional, simple method of task scheduling in a cloud environment uses client tasks as the overhead application base \cite{selvarani2010improved}. Resource allocation methods that aim to maximize resource utilization include FCFS (First-Come-First-Served), SJF (Shortest-Job-First), priority scheduling, RR (Round-Robin), random, greedy, Genetic Algorithms \cite{sivanandam2007introduction}, Ant Colony algorithms, and other heuristic scheduling methods. Task scheduling can likewise be FCFS, SJF, priority-based, RR, job grouping, and so on \cite{choudharydynamic}.
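As a minimal illustration of why the choice among these policies matters, the following sketch (not from the paper; task run times are hypothetical) compares the average waiting time of FCFS and SJF on a single resource:

```python
# Minimal sketch: average waiting time of FCFS vs. SJF on one resource,
# for hypothetical task run times (in seconds).

def avg_waiting_time(run_times):
    """Each task waits for the sum of the run times scheduled before it."""
    waits, elapsed = [], 0
    for t in run_times:
        waits.append(elapsed)
        elapsed += t
    return sum(waits) / len(waits)

tasks = [6, 8, 7, 3]                   # arrival order (FCFS order)
fcfs = avg_waiting_time(tasks)         # schedule in arrival order
sjf = avg_waiting_time(sorted(tasks))  # schedule shortest job first

print(fcfs, sjf)  # SJF never has a larger average wait than FCFS
```

SJF provably minimizes average waiting time on a single resource, whereas FCFS avoids starvation of long tasks; this trade-off is exactly what the heuristic methods above try to balance.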
%\textcolor{red}{Scheduling algorithms choose a task to be performed and corresponding resource in which the task will be executed. According to different characters of resource such as bandwidth, processing capabilities, cost,load balancing, and so on, as well as base on clients requirements of deployment, for example, deadline, profit, cost, priority and so forth, choosing a suitable resource to a task is very significant.}
\subsection{Scheduling mechanism}
\subsubsection{\textbf{Scheduling policy based on replica}}
Replication is a hot topic in cloud computing; it compensates for single points of failure of storage objects, poor fault tolerance, poor access performance, and so on. The K-means algorithm \cite{beckmann1990r} is a typical dynamic clustering algorithm that refines its assignments iteratively, point by point. Its objective is based on a sum-of-squares function.
\subsubsection{\textbf{MapReduce}}
MapReduce is not only a programming model but also a high-efficiency scheduling model for processing, mining, and analyzing large data sets, as well as the name of Google's implementation of the model \cite{dean2008mapreduce}. MapReduce is typically adopted on clusters of computers in distributed computing. Users specify a map function that processes a key/value pair to generate a set of intermediate key/value pairs, and a reduce function that merges all intermediate values associated with the same intermediate key. This functional model with user-specified map and reduce operations allows large computations to be parallelized easily and re-execution to be used as the primary mechanism for fault tolerance.
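The map/reduce contract described above can be sketched with the classic word-count example; the sketch below is purely illustrative and runs the phases sequentially, whereas a real MapReduce runtime distributes them across a cluster:

```python
# Illustrative word-count sketch of the map/reduce contract (sequential,
# single-process; a real MapReduce runtime distributes these phases).
from collections import defaultdict

def map_fn(doc):
    # map: emit one intermediate (key, value) pair per word
    return [(word, 1) for word in doc.split()]

def reduce_fn(key, values):
    # reduce: merge all values associated with the same intermediate key
    return (key, sum(values))

def map_reduce(docs):
    groups = defaultdict(list)
    for doc in docs:
        for key, value in map_fn(doc):   # map phase
            groups[key].append(value)    # shuffle: group by key
    return dict(reduce_fn(k, v) for k, v in groups.items())  # reduce phase

print(map_reduce(["cloud scheduling", "cloud computing"]))
# → {'cloud': 2, 'scheduling': 1, 'computing': 1}
```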
\subsubsection{\textbf{Online scheduling problem}}
According to the scheduling time, task scheduling can be divided into two models: batch scheduling and online scheduling. In online scheduling, as soon as a task arrives it is scheduled onto a free resource and performed immediately; if all resources are busy, the task must wait. In existing cloud computing environments, online scheduling policies mainly focus on the distribution management of resources and on satisfying the various demands of clients. However, they do not pay enough attention to supporting services.
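The online policy just described can be sketched as an event-driven scheduler (an assumption-laden toy model, with hypothetical task and resource identifiers, not a cloud implementation):

```python
# Hedged sketch of the online policy above: an arriving task is placed on a
# free resource immediately; otherwise it waits in a FIFO queue.
from collections import deque

class OnlineScheduler:
    def __init__(self, n_resources):
        self.free = list(range(n_resources))  # indices of idle resources
        self.waiting = deque()                # tasks with no free resource

    def submit(self, task):
        if self.free:
            return (task, self.free.pop())    # scheduled immediately
        self.waiting.append(task)             # all resources busy: wait
        return None

    def release(self, resource):
        # a resource finishes; the longest-waiting task runs next
        if self.waiting:
            return (self.waiting.popleft(), resource)
        self.free.append(resource)
        return None

sched = OnlineScheduler(n_resources=1)
print(sched.submit("t1"))   # runs at once on resource 0
print(sched.submit("t2"))   # must wait: returns None
print(sched.release(0))     # t2 gets the freed resource
```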
%For on-line scheduling the most important classification of the on-line problems according to which part of the problem is given on-line \cite{sgall1998line}: (1) Scheduling jobs one by one, (2) Unknown running time, (3) Jobs arrive over time (4) Interval scheduling.
\subsubsection{\textbf{Batch scheduling problem}}
In batch scheduling, a task does not receive service immediately when it arrives; instead, it is collected into a task set. At a particular time, or after some event, all the tasks in the set are scheduled onto resources. Min-min is a typical batch scheduling algorithm: it repeatedly assigns the task with the minimum expected completion time to the resource on which that minimum is achieved. It can improve system throughput and enable task sets to attain a minimum overall completion time.
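The Min-min procedure can be sketched as follows; the expected-time matrix `ect` is a hypothetical input, as in the standard formulation of the heuristic:

```python
# Hedged sketch of Min-min batch scheduling: repeatedly pick the (task,
# resource) pair with the minimum expected completion time. ect[t][r] is a
# hypothetical matrix of expected run times of task t on resource r.

def min_min(ect):
    n_tasks, n_res = len(ect), len(ect[0])
    ready = [0.0] * n_res                 # time each resource becomes free
    unscheduled = set(range(n_tasks))
    assignment = {}
    while unscheduled:
        # minimum completion time over all remaining (task, resource) pairs
        finish, t, r = min((ready[r] + ect[t][r], t, r)
                           for t in unscheduled for r in range(n_res))
        assignment[t] = r
        ready[r] = finish
        unscheduled.remove(t)
    return assignment, max(ready)         # task-to-resource map and makespan

ect = [[4, 6], [3, 5], [8, 2]]            # 3 tasks, 2 resources
print(min_min(ect))                       # makespan 7: tasks 0,1 on r0; 2 on r1
```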
\subsection{Open Source Framework-Hadoop}
Hadoop is an open-source implementation of Google's MapReduce \cite{white2012hadoop}. As a basic distributed framework, it can be used to develop distributed or parallel programs even when the underlying distributed infrastructure is not known in detail. Hadoop clusters, composed of ordinary PCs as computing nodes, offer great horizontal scalability, high expandability, low cost, high efficiency, good reliability, and high-performance computing and storage capabilities.
%as follow:
%\begin{itemize}
%  \item high expandability: reliably store and process big data which size is petabytes-level
%  \item low cost: through clusters constituted by normal PCs to distribute and process data
%  \item high efficiency: through distributing data to different nodes, Hadoop can effectively process data in parallel
%  \item reliability: automatically maintain and repair data replications; automatically redeploy tasks to nodes after tasks fail to be executed
%\end{itemize}
\subsubsection{\textbf{Chart of Hadoop MapReduce cluster architecture}}
A Hadoop cluster that processes a MapReduce job usually contains four components. Fig.~\ref{Hadoop} shows the Hadoop MapReduce architecture.
\begin{figure}
\setlength{\abovecaptionskip}{0pt}
\setlength{\belowcaptionskip}{-10pt}
\centering
\includegraphics[width= 9cm]{hadoopmapreduce.pdf}
\caption{Chart of Hadoop MapReduce architecture}
\label{Hadoop}
\end{figure}
\begin{itemize}
  \item  Client: It is an interactive interface between users and clusters.
  \item  JobTracker: It is responsible for scheduling the entire job for execution. Each cluster has exactly one JobTracker.
  \item TaskTracker: It is the actual task executor, which runs map tasks and reduce tasks. Each cluster can have many TaskTrackers.
  \item  Hadoop Distributed File System (HDFS): It is used to store input/output data.
\end{itemize}
\subsubsection{\textbf{Hadoop default scheduling algorithm}}
\begin{itemize}
  \item MapReduce scheduling in Hadoop clusters: Hadoop uses the First-In-First-Out (FIFO) scheduling algorithm to schedule clients' MapReduce jobs. All MapReduce jobs are placed in a queue and then chosen for execution in order of their submission time.
  \item Fair Scheduling algorithm: Fair Scheduling was first proposed by Facebook; its aim is to enable the Hadoop MapReduce programming model to run different types of jobs in parallel so that users can share cluster resources fairly. Fair Scheduling gathers all submitted jobs into user pools in which each client shares resources fairly.
\end{itemize}
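The contrast between the two policies can be sketched in a few lines (illustrative only, not Hadoop code; user and job names are hypothetical):

```python
# Illustrative contrast: FIFO drains jobs in submission order, while a
# simple fair scheduler interleaves one job per user pool at a time.
from collections import deque
from itertools import cycle

def fifo_order(jobs):
    # jobs arrive as (user, job_id) pairs in submission order
    return [job for _, job in jobs]

def fair_order(jobs):
    pools = {}
    for user, job in jobs:               # one pool of jobs per user
        pools.setdefault(user, deque()).append(job)
    order, users = [], cycle(list(pools))
    while any(pools.values()):           # round-robin across user pools
        user = next(users)
        if pools[user]:
            order.append(pools[user].popleft())
    return order

jobs = [("alice", "a1"), ("alice", "a2"), ("alice", "a3"), ("bob", "b1")]
print(fifo_order(jobs))  # bob's job waits behind all of alice's jobs
print(fair_order(jobs))  # bob's job is interleaved fairly
```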

%\subsection{data centre in Cloud Computing}
%"Data centre" is a place where the many companies(mainly websites) store equipments and data, which is extend of studio renting in internet field. It is significant invention in IT field last century and denotes standardization and systematization in IT's applications. With the development of data centre, especially the appearance of cloud computing, data centre becomes a high performance's roost with huge computation amount and storage. Every IT enterprises changed the server model whose unit is a PC to  a new cluster model. Base on the new model, a series of function applications such as virtualization, cloud computing, cloud storage and so on is developed. Therefore, efficiency of servers in unit quantity can be improved.
%
%cloud computing data centre must satisfy some characteristics \cite{buyya2010energy}:
%(1) standardised infrastructure.
%(2) virtualized resource and environment.
%(3) good expansibility.
%(4) well fault tolerance.
%(5) high communication performance among servers.
%(6) position-independent address structure.
%(7) economic energy and space.

However, Hadoop's scheduling algorithms are not well developed: they are simple, and therefore the performance of the entire system can suffer. A weakness of Hadoop scheduling is that only one JobTracker exists to schedule jobs. Once a great number of jobs are submitted by hundreds of clients and many TaskTrackers are distributed across the cluster, the JobTracker faces heavy execution pressure; if the JobTracker fails or crashes, the whole cluster stops working. Therefore, in this project we want to propose a new scheduling policy in which several cooperating JobTrackers operate together, rather than only one; the associated distributed JobTracker scheduling and resource management algorithms will be our future research. We also focus on both resource allocation and task scheduling, taking into account specific criteria and priorities of tasks and resources, such as resource bandwidth, processing capability, and deadlines, to optimize the scheduling algorithms.

\section{Methodology}
The project mainly consists of the following two phases:
\subsection{Phase 1. Analyzing different scheduling policies and improving some scheduling algorithms}
In the initial phase, I will study different scheduling policies, such as greedy, Round-Robin, Genetic Algorithms, Ant Colony algorithms, priority scheduling, and other heuristic algorithms, and analyze their performance, advantages, disadvantages, and applicability.

To simulate all these different algorithms, I will adopt the CloudSim framework. CloudSim is a new-generation, extensible simulation platform that enables seamless modeling, simulation, and experimentation of emerging cloud computing infrastructures and management services \cite{buyya2009cloud}\cite{calheiros2009cloudsim}. CloudSim is used to verify the correctness of the proposed algorithm; the toolkit simulates heterogeneous resource environments and the communication environment. The layered CloudSim architecture mainly contains four layers: SimJava, GridSim, CloudSim, and User Code \cite{calheiros2009cloudsim}. Through an empirical experiment examining the various scheduling algorithms that are important to the development of software for cloud platforms, we will propose an improved scheduling algorithm and test and verify its performance in CloudSim. We will also extend the CloudSim simulator for our experimentation so that parameters and data can be varied easily.
\subsection{Phase 2. Analyzing MapReduce framework and improving the framework}
In this phase, we will investigate the MapReduce framework and grasp its operating principles. We will use six PCs to set up a Hadoop cluster, where one PC acts as the master and the rest as slaves and clients. We will run scheduling algorithms in Hadoop, evaluate the performance of different scheduling deployments, and analyze the advantages and drawbacks of these scheduling policies. Finally, we will improve the Hadoop MapReduce framework, for example by proposing or optimizing an efficient scheduling algorithm.
\section{Discussion and Conclusion}
The proposed algorithm should reduce network delay, maximize the utilization of cloud resources, and simultaneously achieve a minimum waiting time while honouring deadline constraints to provide Quality of Service (QoS). Better yet, the improved Hadoop framework should increase availability, reduce node load, and optimize resource management.

To enhance the scheduling capability of cloud-based software systems, simulations are used to facilitate the evaluation of different approaches under various run-time scenarios in a cloud environment. In this project, we focus on simulated experiments, and therefore all experiments are run in a simulation framework. Although all the hypotheses are close to reality, the resources and tasks are still simulated rather than real data. In the future, to obtain more scientific and realistic results, we will collect real data sets and adopt a real cloud environment such as Amazon EC2 or Google's cloud. It is also important to evaluate the proposed solution in a physical, real run-time environment to observe its true efficacy for cloud-based systems.
%For improving the scheduling mechanism, an accurate model to describe clients' demands will be developed.
%Experimental results indicate improvements on the utilisation of resources and it is able to minimise turnaround time, and reduces influence on the bottleneck of bandwidth usage.
%Better yet, the processing time of proposed algorithm has also been reduced to satisfy the real-time demand of clients. The empirical results illustrate that the proposed scheduling policies are a significant improvement and serve an important baseline for benchmarking, and for the future development of scheduling algorithms for cloud-based software applications and systems.

%Further experiments will be carried out on other heuristic scheduling methods such as Genetic algorithm and Ant algorithm. It is also important to evaluate the proposed solution in a physical real run-time environment to observe its true efficacy for cloud-based systems.

%and will implement the proposed solution in a real run-time environment for taking into account QoS (Quality of Service) of clients and load balance demands.

%For improving the scheduling mechanism, an accurate model to describe clients' demands will be developed.
%\cite{calheiros2011cloudsim}

\bibliographystyle{IEEEtran}
\bibliography{IEEEabrv,myreferencepro}
\end{document}




