\documentclass[a4paper,11pt]{article}
\usepackage{fullpage}
\usepackage{a4wide}
\usepackage{graphicx}

%opening
\title{Experimentation project: TU/e - Advanced Algorithms. Analysis of algorithms for the \textit{load-balancing} problem.}
%\date{}
\author{E.Demirta\c{s}\footnote{student 0786060, e.demirtas@student.tue.nl}, S.Pai\footnote{student 0788401, s.pai@student.tue.nl},  W.Ram\'{\i}rez-Monta\~{n}o\footnote{student 0787997, w.ramirez.montano@student.tue.nl}}

\begin{document}
\maketitle 

\begin{abstract}

One of the hardest challenges in computer science is to find a polynomial-time algorithm that solves an
NP-hard problem. Well-founded theoretical results state that such an algorithm does not exist unless P=NP.
Since it is not known whether P=NP, researchers have focused their attention on designing
algorithms for NP-hard problems that either employ heuristics to find a solution that is intuitively close to optimal, or
provide an approximation guarantee: a solution that is at most a
factor of $k$ away from the optimal one. This paper discusses a particular NP-hard problem, called the LOAD-BALANCING
or MULTI-PROCESSOR SCHEDULING problem, which aims to minimize the \textit{makespan} of a set of \textit{jobs} that
need to be scheduled on a set of \textit{machines}. We discuss some of the existing algorithms and analyze their
approximation ratio --- the factor by which the obtained solution can deviate from the optimal one.
We provide a quantitative evaluation of these algorithms with respect to several criteria, including their running time
and their approximation ratio. We also compare state-of-the-art algorithms from the literature
with an algorithm that employs randomization to balance the load. Finally, we indicate
the input cases for which this algorithm gives a close-to-optimal solution and mention its applicability to
a particular real-life scenario.
\end{abstract}

\section{Introduction}
The \textit{Load-Balancing} or \textit{Multi-processor scheduling} problem has been studied extensively (\cite{ABKU99},
\cite{SSS08}, \cite{SW99}), as it finds application in a myriad of domains,
including computer networks and distributed computing (e.g. \cite{WSP06}). More precisely, the load balancing problem
is generally described as follows: given a set of $n$ jobs $J=\{j_1,j_2,\dots,j_n\}$ with processing times $t_1,t_2,\dots,t_n$
and a set of $m$ machines $M=\{M_1,M_2,\dots,M_m\}$, the goal is to find an assignment of the jobs
to the machines such that the time at which all jobs are completed --- the so-called \textit{makespan} --- is minimized.
Let $A(i)$ be the set of jobs assigned to machine $M_i$ and let $T_i=\sum_{j\in A(i)} t_j$ be the resulting load of machine $M_i$.
Then the Load-Balancing problem aims to minimize $\max_{1\leq i\leq m} T_i$.
In order to concentrate on the central aspects of the problem, we make a few assumptions:

\begin{enumerate}
 \item \textit{The jobs cannot be pre-empted}. Once a job is assigned to a machine, it can neither be reassigned
 to another machine nor be split among several machines.
 \item \textit{At any given time, all jobs are ready for execution}. This implies that the schedule is static:
 jobs are not assumed to arrive dynamically\footnote{The dynamic arrival of jobs is briefly discussed in the comparison
 of the algorithms and in the conclusions (Section 7).} and never block.
 \item \textit{Each machine can execute only one job at a time}. We consider only reliable machines (i.e. machines do not fail).
\end{enumerate}

A well-known result in scheduling theory is that the load-balancing problem is NP-hard \cite{GJ79}. In fact, it has been
proved that load balancing is NP-hard even for 2 machines \cite{EKS08}. Several approximation algorithms have been designed that
run in polynomial time and find a solution within a factor $\rho$ of the optimal makespan $OPT$. Such algorithms are called
$\rho$-approximation algorithms. Another class of algorithms that solve NP-hard problems approximately are the \textit{PTAS}
(Polynomial Time Approximation Scheme) algorithms \cite{SW07}. These find solutions with an approximation ratio of
$\rho=(1 + \varepsilon)$ for any fixed $\varepsilon>0$.
Several approximation algorithms for the load balancing problem have been proposed in the literature. One popular strategy is the
greedy method within priority algorithms \cite{Da03}; we use a greedy approach in three algorithms, considering a fixed, offline set
of jobs, although two of them can also handle online job sets. The first algorithm is the
\textit{Greedy Scheduling} algorithm, which assigns each job to the machine with the lightest load. The second is the
\textit{Ordered Scheduling} algorithm, which relies on ordering the input job set $J$ in a particular manner that makes it more amenable
to an efficient schedule. The third is the \textit{Power of K Choices} algorithm, which selects a random set of $k\leq m$ machines and assigns
each job to the least loaded among them. \cite{FVC95} and \cite{ZOM01} employ genetic programming techniques to solve this problem approximately
in polynomial time. \cite{HL90} approached load balancing as a graph-coloring problem and used graph theory techniques
to arrive at a solution. \cite{Mi96} gives a detailed analysis of the particular and surprising case $k=2$ of the third
algorithm in this discussion, as it opens the way to a fast greedy solution for online load balancing. The rest of the paper is structured as follows:
\textbf{Section 2} gives a brief description of the Greedy Scheduling and the Ordered Scheduling Algorithms. \textbf{Section 3}
introduces the concept of randomization and presents the advantages of introducing randomness to get the load balancing approximation.
\textbf{Section 4} discusses the Power of K Choices Algorithm, followed by a brief analysis of its approximation ratio.
\textbf{Section 5} explains the experimental setup, highlighting the hypotheses that will be tested. \textbf{Section 6}
presents the obtained results and provides a qualitative and quantitative evaluation of the algorithms discussed in the paper.
\textbf{Section 7} discusses future work and presents the conclusions of this paper.


\section{Greedy Scheduling and the Ordered Scheduling algorithms}

\textit{Greedy Algorithm}: As its name suggests, this is a straightforward approximation algorithm that allocates jobs to machines greedily;
the greedy criterion is to always allocate the next job to the machine with the lightest time load at that moment.
The time load of a machine is the total time for which it has been busy. The algorithm is as follows:
\begin{tabbing}
 \hspace{16.0pt}\=\kill
\textbf{Algorithm} \textit{GreedyScheduling}($J,m$)  \> \\ 
1.  \> Initialize the heap $Q$ with $T_i \leftarrow 0$ and $A(i) \leftarrow \emptyset$ for $1\leq i \leq m$. \\ 
2.  \> \textbf{for} $j \leftarrow 1$ \textbf{to} $n$\\
3.  \>   \hspace{16.0pt}\textbf{do} Let $M_k$ be the machine with minimum load $T_k$, located at the top of the heap $Q$. \\
4.  \>     \hspace{32.0pt}$A(k)\leftarrow A(k) \cup \{j\}$; $T_k\leftarrow T_k + t_j$; \\
5.  \>     \hspace{32.0pt}Update the heap $Q$.\\
\end{tabbing}
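For concreteness, the pseudocode above can be realized in Java along the following lines. This is an illustrative sketch (the class and method names are ours, not those of our actual implementation); machine loads are kept in a min-heap so the least-loaded machine is found at the top:

```java
import java.util.PriorityQueue;

public class GreedyScheduling {
    // Greedily assigns each job to the currently least-loaded of m machines
    // and returns the resulting makespan (the maximum machine load).
    public static long schedule(long[] jobs, int m) {
        PriorityQueue<Long> loads = new PriorityQueue<>();
        for (int i = 0; i < m; i++) loads.add(0L);   // T_i <- 0 for all machines
        for (long t : jobs) {
            long min = loads.poll();                 // machine with minimum load
            loads.add(min + t);                      // assign job, update the heap
        }
        long makespan = 0;
        for (long l : loads) makespan = Math.max(makespan, l);
        return makespan;
    }
}
```

For example, for jobs with runtimes $\{2,3,4,6\}$ on 2 machines, the greedy order of assignment yields a makespan of 9, while the optimum is 8.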

To reason about its approximation ratio, we need an upper bound on the maximum deviation that this algorithm can incur with respect to
the optimal solution. Since we do not know the optimal solution beforehand (otherwise, there would be no need for an approximation
algorithm), we devise a lower bound for it and express the approximation ratio of our algorithm with respect to that lower bound.
In this case, the lower bound is $\frac{1}{m}\sum_{j=1}^{n} t_j$, where $t_j$ is the runtime of each job. If one very large job dominates
a large number of small jobs, a stronger lower bound applies: $\max_j(t_j)$. It has been proved in the literature
that the greedy algorithm is a $2$-approximation algorithm with respect to the given lower bound. The actual approximation ratio, however,
depends on the number of machines $m$ \cite{YOS98}: when the number of machines is taken into account, the tight approximation ratio
is $\rho=2-\frac{1}{m}$.\\
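To make the lower bound concrete, the following hypothetical Java helper (the name is ours) computes $\max\{\frac{1}{m}\sum_{j} t_j,\ \max_j t_j\}$ for a given job set:

```java
public class LowerBound {
    // Lower bound on the optimal makespan: the larger of the average
    // machine load and the runtime of the single largest job.
    public static double lb(long[] jobs, int m) {
        long sum = 0, max = 0;
        for (long t : jobs) {
            sum += t;
            max = Math.max(max, t);
        }
        return Math.max((double) sum / m, (double) max);
    }
}
```

For instance, for jobs $\{2,3,4,6\}$ on 2 machines the average-load bound $15/2 = 7.5$ dominates, whereas for $\{10,1,1\}$ on 3 machines the single large job gives the stronger bound of 10.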
\textit{Ordered Scheduling or List-Scheduling Algorithm}: This algorithm relies on arranging the input jobs in a particular way that makes
them more amenable to an efficient schedule. In this paper, we discuss an ordering of the jobs based on the LPT rule: sort the jobs in
decreasing order of their runtimes $t_j$. The algorithm is as follows:
\begin{tabbing}
 \hspace{16.0pt}\=\kill
\textbf{Algorithm} \textit{OrderedScheduling}($J,m$)  \> \\ 
1.  \> Sort the jobs $J$ according to decreasing processing times $t_j$. \\ 
2.  \> \textbf{do} \textit{GreedyScheduling}($J,m$);\\
\end{tabbing}
It has been shown in the literature that the ordered-scheduling algorithm achieves an approximation ratio of $\rho=3/2$. In fact,
this algorithm provides an optimal solution if the number of jobs $n$ is at most the number of machines $m$.\\
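A minimal Java sketch of this LPT-based variant follows (illustrative only; it repeats the greedy heap loop so that the class is self-contained):

```java
import java.util.Arrays;
import java.util.PriorityQueue;

public class OrderedScheduling {
    // LPT rule: sort jobs by decreasing runtime, then assign each job
    // greedily to the least-loaded machine; returns the makespan.
    public static long schedule(long[] jobs, int m) {
        long[] sorted = jobs.clone();
        Arrays.sort(sorted);                           // ascending order
        PriorityQueue<Long> loads = new PriorityQueue<>();
        for (int i = 0; i < m; i++) loads.add(0L);     // T_i <- 0
        for (int j = sorted.length - 1; j >= 0; j--) { // traverse in decreasing order
            long min = loads.poll();                   // least-loaded machine
            loads.add(min + sorted[j]);                // assign job, update heap
        }
        long makespan = 0;
        for (long l : loads) makespan = Math.max(makespan, l);
        return makespan;
    }
}
```

On the example $\{2,3,4,6\}$ with 2 machines, the LPT ordering recovers the optimal makespan of 8, whereas the unordered greedy assignment yields 9.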

\section{Randomization in Load Balancing}
For the load balancing problem, a potential application of randomization is to select a machine for each incoming job at random,
either from the complete set or from a subset of machines. Purely random selection, however, is very inefficient from the point of view of reducing the makespan.
One phenomenon that has been observed is that, for any offline (or online) incoming job, if we select two machines at random and assign the job to the
less loaded of the two, we achieve an exponential reduction in the maximum load. Selecting three or more
machines at random, on the other hand, reduces the maximum load by only a further constant factor while decreasing performance (i.e. the algorithm requires more time per job).
This technique of picking $k=2$ random choices is described in the literature as ``The Power of Two Choices''.
Below, we present a generalized version of the Power of Two Choices algorithm, namely the Power of $k$ Choices algorithm.

\section{Power of k Random Choices}
Since the \textit{Ordered-Scheduling} algorithm provides a better approximation ratio than the unordered version,
it is evident that the quality of the makespan approximation depends on the order in which the jobs are considered.
One way to remove this dependence on the input ordering is to introduce a degree of randomness into the machine selection,
thereby reducing the input bias. In this paper, we discuss the technique of picking $k$ random machines and using them as candidates to
receive each incoming job. It has been proved that the approximation ratio of this algorithm is
$O(\log m/\log\log m)$ with high probability \cite{AYY07}. To prove this approximation
ratio, it is assumed that there is a bound on both the maximum runtime of any job and the expected runtime of the jobs assigned to
any one of the machines.
It has been shown in \cite{ABKU99} that when each of $n$ balls (i.e. jobs) is placed into one of $n$ bins (i.e. machines)
picked uniformly at random, the maximum load on any bin is $O(\ln n/\ln\ln n)$ with high probability.
By the analysis of \cite{Mi96}, the greedy distribution of $m$ balls over $n$ bins, which places each ball into the least loaded of
$d$ random candidate bins, yields a maximum load of $\ln\ln n/\ln d + O(1)$ with high probability.
We recall that $k=2$ already gives the decisive improvement: it is an exponential load reduction with respect
to $k=1$, whereas $k>2$ gives only a further constant-factor reduction while decreasing performance. In fact, for $k=m$
this algorithm ends up being an inefficient version of the greedy algorithm, as the inclusion of randomness becomes worthless.
\begin{tabbing}
 \hspace{16.0pt}\=\kill
\textbf{Algorithm} \textit{PowerOfKChoices}($J,m$)  \> \\ 
1.  \> Initialize $T_i \leftarrow 0$ and $A(i) \leftarrow \emptyset$ for $1\leq i \leq m$. \\ 
2.  \> \textbf{for} $j \leftarrow 1$ \textbf{to} $n$\\
3.  \>   \hspace{16.0pt}\textbf{do} Select $k$ machines uniformly at random. \\
4.  \>     \hspace{32.0pt}Assign job to machine $M_d$, which has the lightest load amongst the $k$ machines.\\
5.  \>     \hspace{32.0pt}$A(d)\leftarrow A(d) \cup \{j\}; T_d\leftarrow T_d + t_j$; \\
\end{tabbing}
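Under our assumptions (the $k$ candidates are sampled with replacement, and a fixed seed is used for reproducibility; the names are ours), the algorithm above may be sketched in Java as:

```java
import java.util.Random;

public class PowerOfKChoices {
    // For each job, sample k candidate machines uniformly at random and
    // assign the job to the least loaded among them; returns the makespan.
    public static long schedule(long[] jobs, int m, int k, long seed) {
        Random rng = new Random(seed);
        long[] loads = new long[m];                  // T_i <- 0 for all machines
        for (long t : jobs) {
            int best = rng.nextInt(m);               // first random candidate
            for (int c = 1; c < k; c++) {            // k-1 further candidates
                int i = rng.nextInt(m);
                if (loads[i] < loads[best]) best = i;
            }
            loads[best] += t;                        // assign job to the best candidate
        }
        long makespan = 0;
        for (long l : loads) makespan = Math.max(makespan, l);
        return makespan;
    }
}
```

Note that only the $k$ sampled loads are inspected per job, so the per-job cost is $O(k)$ rather than $O(\log m)$ as in the heap-based greedy algorithm.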

\section{Experimental setup}
All algorithms are implemented in Java (OpenJDK 1.6.0\_23) using NetBeans under Kubuntu Linux 11.10. For all experiments,
a Lenovo W520 (Intel i7-2630QM, 4GB RAM) is used as the test machine. We use the mergesort implementation of the standard
Java \textit{Collections} class to sort the jobs for the \textit{OrderedScheduling} algorithm, as well as the \textit{PriorityQueue}
class to implement the heap used in both the \textit{GreedyScheduling} and \textit{OrderedScheduling} algorithms.
We use the standard \textit{Random} class to implement the random selection of the $k$ machines in \textit{PowerOfKChoices},
and also to generate most of the time values for the jobs.
The experiments are divided into two categories: those about running time, and those about the quality of the results.
To conduct them, we generate data sets consisting of different numbers (10k, 100k, 1M and 10M)
of random-sized jobs. As an extreme case for measuring the quality of the results, we also generate one special data set
consisting of 1000 jobs in total: 999 of them have random sizes in the range $[1,1000]$, and one enormous job
of size 1000000 sits in the middle of the data set. Note that the sum of all the small random jobs cannot exceed the size
of the enormous job, so whichever algorithm we apply, there will be a gap between the optimal value and the obtained result.
We take as the optimal load balance the corresponding lower bound for the data set: either the total time of all jobs
divided by the number of machines, or the maximum time of a single job.
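The extreme data set described above can be generated along the following lines (a sketch under our assumptions about the exact placement of the large job; the class name is ours):

```java
import java.util.Random;

public class ExtremeDataSet {
    // Builds the special data set: 999 jobs with random sizes in [1,1000]
    // plus one enormous job of size 1,000,000 in the middle of the array.
    public static long[] generate(long seed) {
        Random rng = new Random(seed);
        long[] jobs = new long[1000];
        for (int i = 0; i < jobs.length; i++)
            jobs[i] = 1 + rng.nextInt(1000);         // random size in [1,1000]
        jobs[jobs.length / 2] = 1000000L;            // the enormous job
        return jobs;
    }
}
```

Since the 999 small jobs sum to at most 999000, the enormous job alone determines the $\max_j(t_j)$ lower bound for this data set.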
We make some observations about the \textit{PowerOfKChoices} Algorithm:
\begin{enumerate}
 \item Because of the randomness introduced in the machine selection, in the long run the Power of k Choices
 algorithm distributes the jobs roughly uniformly among the machines, i.e. all machines are allocated more or less the same number
 of jobs. If the runtimes of all jobs are equal, this translates into a makespan that is optimal or very close to optimal.
 If the runtimes are arbitrary, however, the makespan generated by this algorithm may deviate from the optimal solution
 by a large factor. To analyze how the approximation ratio of the algorithm changes as the differences in job runtimes grow,
 we run the algorithm on input sets with varying standard deviation and study the relation between the standard deviation of the
 job runtimes and the approximation ratio of the algorithm.
 \item Another factor we aim to compare on is the number of machines. A general result that holds for most algorithms is that the
 approximation ratio deteriorates as the number of machines increases. In our case, two random choices out of three machines
 always performs better than two random choices out of a thousand machines. We plot the approximation ratio of the algorithm as a function of the
 number of machines and explore the dependence of the approximation ratio on the number of machines.
\end{enumerate}

\textit{Comparison of the running times of the algorithms.} For the Greedy algorithm, to assign each job we need to select the machine
with minimum load. If the loads of the machines are kept in a heap, this selection takes $O(1)$ time,
and our implementation uses this heap-based approach. Updating the load of a machine after a job is assigned to it
takes $O(\log m)$ time. Therefore, for $n$ jobs, the time complexity is $O(n\log m)$.\\
For the Ordered Scheduling algorithm, we need to sort the jobs before building the schedule. Sorting
takes $O(n\log n)$ time; our implementation uses merge sort. Afterwards, Ordered Scheduling
follows the same procedure as the Greedy algorithm and thus incurs a time complexity of $O(n\log m)$. Hence, the
total time complexity is $O(n\log n) + O(n\log m)$, which can be simplified to $O(n\log(mn))$.\\
For the Power of k Choices algorithm, we take the selection of $k=2$ machines at random to be an $O(1)$ operation. Assigning
$n$ jobs to $m$ machines is then an $O(n)$ operation; note that the dependence of the running time on the number of machines disappears.
We therefore expect the Power of Two Choices to be more efficient in terms of running time when the number
of machines is large. As a remark, if we consider $2<k\leq m$, the time complexity does increase, and the influence
of $k$ is reflected in the results, as we shall show.

\section{Results and Evaluations}
As seen in Figure~\ref{f1}, the running time of the power of two choices algorithm does not change as the number of machines
increases: regardless of how many machines there are, the algorithm chooses two random machines for every job,
so its running time does not depend on the number of machines. Hence, under time constraints and in huge systems consisting
of an enormous number of machines, the power of two choices algorithm gives a very rapid overview of the load balancing.
However, when we choose $k$ relative to $m$ (e.g. $k = m/2$) in the generalized version of this algorithm, the effect
on the running time is clearly visible in Figure~\ref{f2}. The running time of the power of $k$ choices algorithm does not depend directly on $m$,
but since $k$ changes with $m$ in our experiment, we obtain a running time that depends on $m$ as well.
The disadvantage of this design choice is clear in Figure~\ref{f1}: the algorithm becomes slower. It does gain an advantage in
approximation ratio over the power of two choices, but not enough to consider it a novel approach to the load balancing problem,
because greedy achieves almost the same approximation ratio in much less time.
Another result is evident in Figure~\ref{f1}: while the running time of greedy increases with the number of machines, the time for
ordered scheduling remains stable. An interesting observation here is that, since the ordered scheduling algorithm needs
$O(n \log n)$ time to sort the jobs before distributing them, and the distribution itself takes $O(n \log m)$,
changes in the number of machines are not reflected in the running time when the number of jobs far exceeds the number of machines:
the sorting time simply dominates the total running time of the algorithm. If more machines were available than jobs, both the
greedy and ordered scheduling algorithms would reflect changes in the number of machines in their running times.
\begin{figure}[]
 \centering
 \includegraphics[scale=0.60,keepaspectratio=true]{./experiment_files/figure1.png}
 \caption{Running time comparison with 1,000,000 jobs}
 \label{f1}
\end{figure}

\begin{figure}[]
 \centering
 \includegraphics[scale=0.60,keepaspectratio=true]{./experiment_files/figure2.png}
 \caption{Running time comparison with 1,000,000 jobs. Performance decrease with $k=m/2$}
 \label{f2}
\end{figure}

Figure~\ref{f3} compares the running times of the three algorithms (greedy, ordered scheduling, and power of two choices) as the number of jobs changes.
For this experiment we have four random data sets with sizes ranging from 10k to 10M jobs, and we use 5 machines in all runs.
The experiments with these data sets show that the algorithm most sensitive to changes in the number of jobs is ordered scheduling, because of its
inherent $O(n \log n)$ sorting step. With 10 million jobs, the greedy approach runs almost 7 times faster than ordered scheduling.
One can also notice that while the number of jobs is small, the power of two choices algorithm runs faster than greedy, yet as the number of jobs increases,
greedy outperforms the power of two choices approach in terms of running time.

In another set of experiments, we want to know how close the algorithms studied in this paper come to the optimal solution. For this,
we construct two data sets: one with fully random jobs, and another with all small jobs except one enormous job in the middle.
In all settings, the experiments were run 5 times to capture the random behaviour of the power of two choices algorithm.
Figure~\ref{f4} shows the results for the first data set. The ordered scheduling algorithm gives almost optimal results regardless
of how many jobs there are, so it is considerably more accurate than the other algorithms. As the number of machines increases,
the greedy and power of two choices algorithms perform worse. Moreover, the increasing approximation ratio of the greedy approach confirms
the theoretical analysis: recall that its approximation ratio is $(2-1/m)$, so an increase in the number of machines affects the approximation
negatively. Similarly, even in the best case the power of two choices algorithm cannot approximate the optimum very closely:
it achieves no ratio better than 1.25 once there are more than 1000 machines in the experiment.
Hence, when the ratio of the number of jobs to the number of machines falls below a certain range, it is not wise to use the power of two
choices algorithm if one is concerned about the quality of the solution to the load balancing problem.
\begin{figure}[]
 \centering
 \includegraphics[scale=0.55,keepaspectratio=true]{./experiment_files/figure3.png}
 \caption{Running time comparison with distinct job sets and 5 machines}
 \label{f3}
\end{figure}

\begin{figure}[]
 \centering
 \includegraphics[scale=0.55,keepaspectratio=true]{./experiment_files/figure4.png}
 \caption{Approximation ratio comparison}
 \label{f4}
\end{figure}

One last observation from this experiment is that with $k=m/2$, the power of $k$ choices algorithm only reaches an approximation
ratio close to that of greedy. These experiments make it clearer why one should choose $k=2$ in the power of $k$ choices
algorithm: for larger $k$ we sacrifice a great deal of time just to obtain an approximation ratio similar to greedy's. Hence, the advantage
of the power of two choices algorithm grows while that of $k>2$ choices shrinks.\\
Figure~\ref{f5} shows the results for the extreme data set with one big job in the middle. In this graph we can see a
knee point, after which all three algorithms perform reasonably well. When the number of machines is very small compared
to the number of jobs and there is one enormous job such that the sum of all other jobs cannot reach its runtime, no algorithm
can get close to our optimal estimate, since we define the optimal solution as the total runtime of the jobs
divided by the number of machines. If we were allowed to split one job across multiple machines, the enormous job would not
affect the approximation ratio significantly.
\begin{figure}[]
 \centering
 \includegraphics[scale=0.55,keepaspectratio=true]{./experiment_files/figure5.png}
 \caption{Approximation ratio comparison with a particular job set}
 \label{f5}
\end{figure}

\newpage
\section{Conclusions}
Starting from the fact that the load balancing problem is NP-hard, in this paper we focused on algorithms that give approximate results
in polynomial time and experimented with them with both the approximation ratio and the running time in mind. The intuitive greedy approach,
ordered scheduling, and the randomized Power of k Choices algorithm were investigated in the scope of this paper. The number of machines
influences the comparison between the results of these algorithms. Overall, the experimental results follow the
theoretical analysis: ordered scheduling surpasses the others in terms of approximation ratio, as expected.
On the other hand, with $k=2$ the Power of k Choices achieves reasonable approximation ratios with a clear advantage in
running time. Increasing the value of $k$ yields only a small improvement in the approximation ratio while decreasing the
performance, making it an unconvincing strategy. Finally, although Ordered Scheduling has the best approximation ratio studied in this
paper, it has the disadvantage of not supporting online job sets, whereas the other two algorithms do. In that case,
the best alternative is the Power of 2 Choices, as its running time tends to be better than that of the Greedy algorithm, especially when the number
of machines increases. We therefore conclude that the most suitable algorithm to implement depends entirely on the problem setting:
the number of machines and the type of job set (offline or online) should always be considered before choosing and implementing
any one of the three algorithms discussed in this paper.

\begin{thebibliography}{9}

\bibitem{ABKU99} Y. AZAR, A.Z. BRODER, A.R. KARLIN, E. UPFAL.
 \emph{“Balanced Allocations”},
 in SIAM J. COMPUT. Vol. 29, No. 1 (1999), pp. 180–200.
\bibitem{AYY07} J. ASPNES, Y.R. YANG, Y. YIN.
 \emph{“Path-Independent Load Balancing with Unreliable Machines”},
 in Proc. 18th ACM-SIAM Symposium on Discrete Algorithms (SODA),
 2007, pp 814-823.
\bibitem{Da03} S. DAVIS.
 \emph{“Priority algorithms”},
 Research exam, 2003.
\bibitem{EKS08} T. EBENLENDR, M. KRCAL, J. SGALL.
 \emph{“Graph Balancing: A special case of scheduling unrelated parallel machines”},
 in Proc. Symposium of Discrete Algorithms (SODA), 2008.
\bibitem{FVC95} T.C. FOGARTY, F. VAVAK, P. CHENG.
 \emph{“Use of the genetic algorithm for the load balancing of sugar beet presses”},
 in Proc. of the 6th international conference of Genetic Algorithms,
 San Francisco, USA, 1995.
\bibitem{GJ79} M.R. GAREY, D.S. JOHNSON.
 \emph{“Computers and Intractability: A Guide to the Theory of NP-Completeness”},
 W.H. Freeman and Co, 1979.
\bibitem{HL90} S.H. HOSSEINI, B. LITOW.
 \emph{“Analysis of a graph coloring based distributed load balancing algorithm”},
 Journal of Parallel and Distributed Computing,
 October 1990, pp 160-166.
\bibitem{Mi96} M. MITZENMACHER.
 \emph{“The Power of Two Choices in Randomized Load Balancing”},
 PhD Thesis, Chapters 1, 2. UC Berkeley, 1996.
\bibitem{SSS08} S. SHARMA, S. SINGH, and M. SHARMA.
 \emph{“Performance Analysis of Load Balancing Algorithms”},
 World Academy of Science, Engineering and Technology 38, 2008.
\bibitem{SW99} P. SCHUURMAN and G.J. WOEGINGER.
 \emph{“Polynomial time approximation algorithms for machine scheduling: Ten open problems”},
 Journal of Scheduling 2, 1999, pp 203-213.
\bibitem{SW07} P. SCHUURMAN and G.J. WOEGINGER.
 \emph{“Approximation Schemes - A tutorial”},
 Chapter in Lectures on Scheduling, R.H. Moehring,
 C.N. Potts, A.S. Schulz, G.J. Woeginger and L.A. Wolsey (eds.), to appear.
\bibitem{WSP06} X. WU, R. SRIKANT, and J.R. PERKINS.
 \emph{“Scheduling Efficiency of Distributed Greedy Scheduling Algorithms in Wireless Networks”},
 in Proc. IEEE INFOCOM, Barcelona, Spain, 2006, pp.1-12.
\bibitem{YOS98} Y. AZAR.
 \emph{“Online Load Balancing”}, Lecture Notes in Computer Science,
 Volume 1442/1998, pp 178-195.
\bibitem{ZOM01} A.Y. ZOMAYA, Y. TEH.
 \emph{“Observations on using genetic algorithms for dynamic load balancing”}.
\end{thebibliography}

\end{document}
