\documentclass[11pt,a4paper,3p,authoryear]{elsarticle}
\usepackage[latin1]{inputenc}
\usepackage{amsmath}
\usepackage{amsfonts}
\usepackage{amssymb}
\usepackage{makeidx}
\usepackage{graphicx}
\usepackage{float}
\usepackage[colorlinks,linkcolor=red,anchorcolor=blue,citecolor=green]{hyperref}
%\usepackage{bbding}
\usepackage{marvosym}

\begin{document}
\bibliographystyle{elsarticle-harv}
\begin{frontmatter}
\title{Two-Machine Flow Shop Scheduling with Sequence-Dependent Setup Times and a Common Due Window}
\author[rao]{Yun-Qing RAO}
\author[au1]{Meng-Chang WANG\corref{c1}}
\ead{wangmengchang@gmail.com}
\author[kp]{Kun-Peng WANG}
\address[rao,au1,kp]{The State Key Lab of Digital Manufacturing Equipment and Technology, Wuhan, China, 430074}
\address[rao,au1,kp]{The School of Mechanical Science and Engineering, Huazhong University of Science and Technology, Wuhan, China, 430074}
\cortext[c1]{(\Letter) Corresponding author.}
\begin{abstract}
This paper presents a two-machine flow shop scheduling problem with sequence-dependent setup times and a common due window, in which any job finished within the window incurs no penalty, while jobs finished outside it are penalized. The objective is to minimize the weighted sum of earliness and tardiness with respect to the common due window. The problem is shown to be NP-hard, and a niche genetic algorithm (NGA) with sharing as the population diversity mechanism is developed for it, in which a distance between two chromosomes is defined to measure their similarity; this helps prevent the premature convergence typical of the standard genetic algorithm. Computational experiments at different scales show the effectiveness and efficiency of the algorithm.
\end{abstract}
\begin{keyword}
Flow shop scheduling \sep sequence-dependent setup time \sep common due window \sep niche genetic algorithm
\end{keyword}
\end{frontmatter}


\section{Introduction and Literature Review}

There are many production lines (flow shops) producing multiple products, where the setup times are sequence dependent and cannot be ignored. These setup times may significantly influence jobs' delivery times. A sequence-dependent setup time means that the setup time of an incoming job at a machine is determined by the type of the last processed job \citep{key-3}. The first known study on the two-machine flow shop with sequence-dependent setup times was done by \citet{1st-study-setups}, who proposed a dynamic programming approach with the objective of minimizing the makespan of the permutation schedule. \citet{SetupReview} reviewed flow shop scheduling with setup times and pointed out that most previous works focused on makespan minimization. A recent study proposed an integer programming model for $m$-machine flow shop scheduling to minimize the weighted sum of total completion time and makespan \citep{key-1}. These studies used makespan as the criterion. But makespan implies that manufacturers aim at improving production speed or capacity while ignoring market demands on delivery times, which may run against the just-in-time (JIT) philosophy and result in high inventory holding costs or other losses. In fact, manufacturers and customers prefer to deliver or receive products at the right time, because earliness may cause extra storage, maintenance and capital tie-up, while tardiness may bring contractual penalties. Baker and Scudder hold the view that ``JIT encompasses a much broader set of principles than just those relating to due dates, but scheduling models with both earliness and tardiness penalties seem to capture the scheduling dimension of a JIT approach'' \citep{JIT_1}.

Rather than a single due date, sales representatives may negotiate looser delivery restrictions, so a due window may be more practical for product delivery \citep{due_window}: the acceptable delivery time is no earlier than a time $b$ and no later than a time $e$, and the interval $[b,e]$ is called a common due window (CDW). Another typical example of a CDW arises when a bundle of goods is dispatched or transported in a bulk delivery, where the arrival and departure times of the truck are specified by the company and the goods must be dispatched within that interval \citep{Biskup2005740}.

%CDW review
If we let $b=e$ and ignore setup times, the problem turns into the well-known common due date (CDD) problem, first introduced by \citet{Kanet1981}; \citet{JIT_1} reviewed different versions of scheduling research considering a CDD. \citet{due_date_setup_sd_2004} first considered a single-machine CDD problem including sequence-dependent setup times. They developed a branch-and-bound algorithm (B\&B) to solve instances of 10 to 25 jobs within reasonable times (from 0.00 to 18.83 minutes on average). \citet{SAD} pointed out the difference between the concepts of Mean Absolute Lateness (MAL) and Mean Lateness (ML) with respect to a CDD. He proposed a mixed integer programming model for minimizing the Sum of Absolute Deviations (SAD) about a CDD for the two-machine flow shop problem. He also found that the SAD criterion is not generally compatible with other criteria, which means that algorithms for makespan may not be suitable for SAD. This study was extended by \citet{due_window} into two CDW problems of minimizing the weighted number of early and tardy jobs in a two-machine flow shop, where the window size is externally determined. They showed that the problems are NP-hard in the ordinary sense, and developed pseudo-polynomial dynamic programming algorithms.

Some other works on the CDW in single-machine environments include \citet{Azizoglu1997}, \citet{Biskup2005740}, \citet{Wan2002} and \citet{Yeung2004}, among others. \citet{Chen2002} extended the problem to a parallel-machine environment. Few works on the CDW in flow shop environments can be found besides \citet{due_window}, which, however, did not consider sequence-dependent setup times.

This paper considers a CDW scheduling problem in a two-machine flow shop environment with sequence-dependent setup times. The objective is to minimize the weighted sum of earliness and tardiness with respect to the CDW. The problem definition is given in Section 2, where it is proven to be NP-hard. An algorithm improved from the genetic algorithm (GA) using niche techniques is provided in Section 3. Section 4 shows some computational examples, and a conclusion is given in Section 5.


\section{Problem Definition}

Consider a two-machine flow shop consisting of machines $M_{1}$ and $M_{2}$. A set $N=\{J_{1},\, J_{2}, \ldots,\, J_{n}\}$ of $n$ jobs is to be processed in the shop, where the $n$ jobs are of $K$ types (families) and $k_{j}$ represents the type of job $j$. The subset $N_{i}=\{J_{i,1},J_{i,2},\ldots, J_{i,Q_{i}}\}$ contains the $Q_{i}$ jobs of type $i$ $(i=1,2,\ldots,K)$, where ${ \sum_{i=1}^{K}Q_{i}}=n$. We assume that the processing times of jobs of the same type on the same machine are equal, and the processing time of type $i$ on machine $m$ is $p_{i,m}$. The notation $s_{k,l,m}$ refers to the setup time on machine $m$ when the last job is of type $k$ and the coming job is of type $l$, with $s_{k,k,m}=0$. The completion time of job $j$ on machine $M_{2}$ is denoted $C_{j}$. All jobs share a common due window $[b,e]$. The earliness and tardiness of job $j$ are denoted $E_{j}$ and $T_{j}$, and the weighted sum of all jobs' earliness and tardiness of a permutation schedule $\sigma$ is denoted $W(\sigma)$.

The problem can be described as

\begin{equation}
\min W(\sigma)=\sum_{j=1}^{n}(\alpha_{j}E_{j}+\beta_{j}T_{j})
\end{equation}


\noindent where $\alpha_{j}$ is the penalty per earliness unit for job $j$, and $\beta_{j}$ is the penalty per tardiness unit for job
$j$ ($\alpha_{j}, \beta_{j} \geq 0$).  $E_{j}$ and $T_{j}$ are defined as Eq. (\ref{eq:E_j}) and Eq. (\ref{eq:T_j}), in which $C_{j}$ can be calculated by Eq. (\ref{eq:C_j}). 

\begin{equation}
E_{j}=\begin{cases}
b-C_{j}, & C_{j}<b\\
0, & else\end{cases}
\label{eq:E_j}
\end{equation}


\begin{equation}
T_{j}=\begin{cases}
C_{j}-e, & e<C_{j}\\
0, & else\end{cases}
\label{eq:T_j}
\end{equation}


\begin{equation}
C_{j}=
\begin{cases}
p_{k_{1},1}+p_{k_{1},2}, & (j=1)\\
\max[C_{j-1},\; p_{k_{1},1}+ \displaystyle \sum _{r=2}^{j}(s_{k_{r-1},k_{r},1}+p_{k_{r},1})]+p_{k_{j},2}, & (j>1)
\end{cases}
\label{eq:C_j}
\end{equation}
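As a concrete check of these definitions, the following minimal Python sketch (the helper names are ours; the paper's implementation is in C++) evaluates $C_{j}$, $E_{j}$, $T_{j}$ and $W(\sigma)$ for a given type sequence. Note that, following Eq. (\ref{eq:C_j}), only machine-1 setup times enter the completion-time recursion.

```python
# Illustrative sketch of Eqs. (1)-(4); function names are ours, not from the paper.
def completion_times(seq, p1, p2, s1):
    """C_j per Eq. (4): the recursion uses setup times on machine 1 only."""
    C = [p1[seq[0]] + p2[seq[0]]]          # j = 1
    m1_finish = p1[seq[0]]                 # machine-1 finish time so far
    for j in range(1, len(seq)):
        m1_finish += s1[(seq[j - 1], seq[j])] + p1[seq[j]]
        C.append(max(C[-1], m1_finish) + p2[seq[j]])
    return C

def objective(C, b, e, alpha, beta):
    """W(sigma) per Eqs. (1)-(3), with job-independent weights alpha, beta."""
    return sum(alpha * max(b - c, 0) + beta * max(c - e, 0) for c in C)
```

For the two-job sequence $(1,2)$ with the data of Tab. \ref{tab:Processing-times} and Tab. \ref{tab:Setup-times} ($p_{1,1}=5$, $p_{1,2}=3$, $p_{2,1}=4$, $p_{2,2}=7$, $s_{1,2,1}=2$) and the window $[50,80]$ with $\alpha=0.4$, $\beta=0.6$, this gives $C=(8,18)$ and $W=29.6$.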

According to the standard classification scheme for scheduling problems \citep{Three-Field},  a scheduling problem can be described by a triplet $\alpha | \beta | \gamma$. The $\alpha$ field describes the machine environment and contains a single entry. The $\beta$ field provides details of processing characteristics and constraints. The $\gamma$ field describes the objective to be minimized \citep{The-Schedule-Book}. Following this scheme, the problem in this paper is represented as $F2|s_{k,l,m}|W(\sigma)$.

%\medskip{}

\newtheorem{thm}{Theorem}
\begin{thm}
The problem $F2|s_{k,l,m}|W(\sigma)$ is NP-hard.
\end{thm}
\newproof{pf}{Proof}
\begin{pf}
Let $e=\alpha_{j}=s_{k,l,m}=0$ and $\beta_{j}=1$. The problem then reduces to the two-machine flow shop problem with total tardiness as the scheduling criterion, which has been proved to be NP-complete \citep{key-2}. Hence the problem $F2|s_{k,l,m}|W(\sigma)$ is NP-hard.
\end{pf}


\section{Niche Genetic Algorithm for $F2|s_{k,l,m}|W(\sigma)$}

Theorem 1 implies that no polynomial-time algorithm can solve the problem $F2|s_{k,l,m}|W(\sigma)$ exactly unless P $=$ NP. Moreover, the size $S$ of the search space of the problem can be calculated by Eq. (\ref{eq:Space}), where $P_{n}=n!$ refers to the number of full permutations of $n$ elements; the proof is omitted for brevity.

\begin{equation}
S=\frac{P_{n}}{{\displaystyle \prod_{i=1}^{K}P_{Q_{i}}}}
\label{eq:Space}
\end{equation}
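Since $P_{n}=n!$, Eq. (\ref{eq:Space}) is simply the multinomial coefficient counting distinct type sequences. A minimal sketch (function name ours):

```python
from math import factorial

def search_space(Q):
    """S = n! / (Q_1! * ... * Q_K!): the number of distinct
    permutations of the demand multiset, per the formula above."""
    S = factorial(sum(Q))
    for q in Q:
        S //= factorial(q)   # exact integer division, since the result is integral
    return S
```

For the instances of Section 4 this gives 25,200 ($n=10$), 369,600 ($n=12$) and 6,518,191,680 ($n=20$), matching the reported search space sizes.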

It is impractical to use an exhaustive method to find the optimal solution when $n$ is large. Therefore a Niche Genetic Algorithm (NGA) is developed for the problem.

The NGA is an improvement of the Standard Genetic Algorithm (SGA). An SGA starts from a set of feasible solutions called the \textit{population}, where every solution is properly expressed and coded as a \textit{chromosome}. There are three basic operations in a GA: \textit{selection}, \textit{crossover} and \textit{mutation}, by which a new \textit{generation} is formed. In the \textit{selection} operation the fitness of every chromosome is calculated, which is related to the value of the objective function; larger fitness means that the chromosome is better and has a higher probability of being selected into the next generation. After selection, chromosomes are paired randomly, and each pair undergoes the \textit{crossover} operation with a given probability $P_{c}$; each chromosome then undergoes the \textit{mutation} operation with another given probability $P_{m}$. These operations are repeated until a given number of generations have been formed or other stopping conditions are satisfied. However, SGAs tend to converge prematurely and fall into local extrema \citep{key-4}. The NGA implements population diversity mechanisms, which enable the algorithm to identify local as well as global optima \citep{key-5}.

Sharing is a method of implementing the population diversity mechanism for an NGA, first introduced by Holland and expanded by Goldberg and Richardson. It reduces the fitness of individuals that have highly similar members within the population, thereby reducing the probability that individuals highly similar to other, better individuals are selected into the successive generation \citep{key-5}.

The main procedure of the algorithm in this paper is described as follows.

 
\begin{description}
\item [Step 1] Generate an initial population randomly with a size $POP\_SIZE$ and set the $generation$ to 0.
\item [Step 2] Sort the $POP\_SIZE$ chromosomes by $fitness$ in decreasing order, record the first $Niche\_N$ chromosomes, and record the first chromosome in another chromosome $Global\_Best$.
\item [Step 3] Selection.
\item [Step 4] Crossover.
\item [Step 5] Mutation.
\item [Step 6] For the $POP\_SIZE + Niche\_N$ chromosomes, calculate the $distance$ between every two chromosomes; whenever the $distance$ of two chromosomes is less than $L$, sharply reduce the $fitness$ of the worse one.
\item [Step 7] Sort the $POP\_SIZE + Niche\_N$ chromosomes by $fitness$ in decreasing order, record the first $Niche\_N$ chromosomes, and keep the first $POP\_SIZE$ chromosomes as the population. If the first chromosome is better than the $Global\_Best$, let $Global\_Best$ record the first one.
\item [Step 8] Set $generation=generation+1$. If $generation$ reaches the maximum generation $G$, output the $Global\_Best$ and end, otherwise, go to \emph{Step 3}.
\end{description}

The details of the NGA in this paper are shown as follows.


\subsection{Encoding}

Schedules for $F2|s_{k,l,m}|W(\sigma)$ can be expressed as the sequence $(k_{1},k_{2},\ldots,k_{n})$ and its permutations, where $k_{i}\in\{1,2,\ldots,K\}$ represents the type (family) of the job at the \textit{i-th} position in the sequence. A feasible chromosome example is shown in Fig. \ref{fig:A-feasible-chromosome}, where four types of products are demanded.

%
\begin{figure}[H]
\begin{centering}
\includegraphics[scale=0.5]{ChromosomeExample.png} 
\par\end{centering}

\caption{\label{fig:A-feasible-chromosome}A feasible chromosome for 10 products of 4 types}

\end{figure}

The $fitness$ of chromosome $c$ in generation $g$ is
defined as 

\begin{equation}
fitness_{g}(c)=F\times\max_{1\leq h\leq POP\_SIZE}(W_{g}(h))-W_{g}(c)
\end{equation}


\noindent where $W_{g}(h)$ refers to the objective value of solution $h$ in generation $g$, and $F>1$.
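A minimal sketch of this fitness transformation (the default $F=1.5$ is an arbitrary illustrative choice, not a value from the paper):

```python
def fitness_values(W_values, F=1.5):
    """fitness_g(c) = F * max_h W_g(h) - W_g(c); any F > 1 keeps every
    fitness strictly positive, so minimizing W maximizes fitness."""
    w_max = max(W_values)
    return [F * w_max - w for w in W_values]
```

The smallest objective value thus receives the largest fitness, as required by the selection operation.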

\subsection{Population Initializing}

Chromosomes in the population are generated randomly in order to gain a diversity of initial solutions. The procedure is as follows.
\begin{description}
\item [Step 1] Prepare $n$ empty positions
\item [Step 2] Code the original demand as shown in Fig. \ref{fig:A-feasible-chromosome}, and set $i=1$
\item [Step 3] Generate a random integer $r$ between 1 and $n$; while the \textit{r-th} position prepared in \emph{Step 1} is occupied, repeat this step until an empty position is found
\item [Step 4] Get the \textit{i-th} number of the demand, and place it at the \textit{r-th} position
\item [Step 5] If $i=n$, end; otherwise, set $i=i+1$ and go to \emph{Step 3}
\end{description}
These steps randomly generate a feasible solution by keeping the total number of jobs of each type equal to $Q_{i}$ according to the original demand. In the algorithm this procedure is repeated $POP\_SIZE$ times to form the initial population.
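The initialization amounts to scattering the demand multiset over $n$ positions uniformly at random. In the sketch below (function name ours), drawing directly from the list of remaining empty positions replaces the rejection loop of Step 3 with an equivalent step:

```python
import random

def random_chromosome(Q):
    """Steps 1-5 above: place the demand (Q[i-1] jobs of type i) into n
    positions at random, so each type's total stays equal to Q_i."""
    demand = [t for t, q in enumerate(Q, start=1) for _ in range(q)]  # Step 2
    n = len(demand)
    free = list(range(n))                          # Step 1: n empty positions
    chrom = [None] * n
    for gene in demand:                            # Steps 3-5
        r = free.pop(random.randrange(len(free)))  # always lands on an empty position
        chrom[r] = gene
    return chrom
```

Every chromosome produced this way is feasible by construction, whatever the random draws.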

\subsection{Selection}

For each generation, the selection operation selects $POP\_SIZE$ chromosomes for further operations, where chromosomes with better fitness have more chance of being selected and can be selected more than once. The roulette wheel method, developed by \citet{Holland}, is adopted in the algorithm. The selection probability of chromosome \emph{c} in generation \emph{g} is calculated by

\begin{equation}
P_{g}(c)=\frac{fitness_{g}(c)}{\displaystyle \sum_{h=1}^{POP\_SIZE}fitness_{g}(h)}
\end{equation}
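A standard roulette-wheel spin implementing this probability might look as follows (a sketch; the paper does not give implementation details):

```python
import random

def roulette_select(population, fitnesses):
    """One spin of the roulette wheel: chromosome c is drawn with
    probability fitness(c) / sum of all fitnesses."""
    spin = random.uniform(0, sum(fitnesses))
    acc = 0.0
    for chrom, fit in zip(population, fitnesses):
        acc += fit
        if spin <= acc:
            return chrom
    return population[-1]  # guard against floating-point round-off
```

Repeating the spin $POP\_SIZE$ times yields the next generation's parent pool, with duplicates allowed.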



\subsection{Crossover}

A crossover operation carried out on a pair of chromosomes (parents) produces two new chromosomes (offspring). There are several well-known crossover operations for scheduling, such as \emph{LOX}, \emph{PMX} and \emph{NAX} \citep{Famous_crossover}. The crossover operation here is implemented in the following steps.
\begin{description}
\item [Step 1] Choose two chromosomes randomly from the population.
\item [Step 2] Generate a random number between 0 and 1. If the number is between 0 and the given crossover probability $P_{c}$, go to \emph{Step 3}; else, end.
\item [Step 3] Generate two random numbers $i$ and $j$ between 1 and $n$.
\item [Step 4] Copy the section between the \textit{i-th} and \textit{j-th} positions of each parent to the corresponding offspring's \textit{i-th} to \textit{j-th} positions directly (see Fig. \ref{fig:Crossover-operation-on} (a)).
\item [Step 5] For the first offspring, copy the remaining genes from the second parent in the same relative order; and similarly for the second offspring (see Fig. \ref{fig:Crossover-operation-on} (b)).
\item [Step 6] Of the four chromosomes, i.e., the two parents and two offspring, keep the best two in the population.
\end{description}
This procedure is repeated $\frac{POP\_SIZE}{2}$ times in the algorithm.
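Because job types repeat within a chromosome, Steps 4 and 5 require a multiset-aware variant of order crossover: the donor parent's genes are consumed only as many times as the kept segment uses them. A sketch with 0-based, half-open indices (function names ours):

```python
from collections import Counter

def crossover_pair(p1, p2, i, j):
    """Steps 4-5: each offspring keeps its own parent's segment [i, j) and
    fills the remaining positions from the other parent in relative order."""
    def make_child(keeper, donor):
        child = list(keeper)
        used = Counter(keeper[i:j])      # genes already fixed by the segment
        filler = []
        for g in donor:                  # donor genes not consumed by the segment
            if used[g] > 0:
                used[g] -= 1
            else:
                filler.append(g)
        it = iter(filler)
        for pos in range(len(keeper)):
            if not (i <= pos < j):
                child[pos] = next(it)
        return child
    return make_child(p1, p2), make_child(p2, p1)
```

Both offspring keep the same type multiset as their parents, so feasibility is preserved.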

%
\begin{figure}[H]
\begin{centering}
\includegraphics[scale=0.5]{CrossOverOperation1.png}
\par\end{centering}

\caption{\label{fig:Crossover-operation-on}Producing offspring from two parent chromosomes}

\end{figure}



\subsection{Mutation}

Mutation operations improve the population's diversity and are performed on each chromosome selected with the mutation probability. The inverse mutation operator (\emph{INV}), introduced by \citet{INV}, is employed; Fig. \ref{fig:Inverse-mutaion-operation} shows an example.
\begin{description}
\item [Step 1] For each chromosome, generate a random number between 0 and 1. If the number is between 0 and the given mutation probability $P_{m}$, go to \emph{Step 2}; else, end.
\item [Step 2] Generate two random numbers $i$ and $j$ between 1 and $n$.
\item [Step 3] Reverse the subsequence between the \textit{i-th} and \textit{j-th} positions.
\end{description}
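Step 3 is a plain subsequence reversal; a one-function sketch (0-based, inclusive indices, name ours):

```python
def inverse_mutation(chrom, i, j):
    """Inverse mutation (INV): reverse the subsequence between
    positions i and j, leaving the rest of the chromosome unchanged."""
    c = list(chrom)
    c[i:j + 1] = c[i:j + 1][::-1]
    return c
```

Since reversal only reorders genes, the mutated chromosome remains a feasible solution.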
%
\begin{figure}[H]
\smallskip{}
\begin{centering}
\includegraphics[scale=0.5]{InverseExample.png} 
\par\end{centering}

\begin{centering}
\medskip{}

\par\end{centering}

\caption{\label{fig:Inverse-mutaion-operation}Inverse mutation operation}

\end{figure}



\subsection{Sharing}

Sharing is a method of implementing the population diversity mechanism: when two chromosomes have high similarity, the fitness of the worse one is reduced sharply so that it has much less probability of being kept into the next generation. The similarity is measured by calculating the \textit{distance} between two chromosomes, as shown below.

\begin{equation}
dis(k_{c,i},k_{h,j})=\begin{cases}
1, & if\, k_{c,i}\neq k_{h,j}\\
0, & otherwise\end{cases}\end{equation}


\begin{equation}
distance(c,h)=\sum_{i=1}^{n}dis(k_{c,i},k_{h,i})\end{equation}


\noindent where $k_{c,i}$ refers to the type at the \textit{i-th} position in chromosome \textit{c}. Obviously, a larger \textit{distance} implies that the two chromosomes have less similarity. An example is shown in Fig. \ref{fig:An-example-of-distance}, in which there are 6 differing positions between the two chromosomes.

%
\begin{figure}[H]
\begin{centering}
\includegraphics[scale=0.5]{DistanceExample.png} 

\par\end{centering}

%\medskip{}
\begin{centering}
\medskip{}

\par\end{centering}

\caption{An example of distance\label{fig:An-example-of-distance}}

\end{figure}


In the algorithm, the \emph{distance} between every two chromosomes is calculated. If it is less than \emph{L}, the \emph{fitness} of the worse one is divided by a large number as in Eq. (\ref{eq:new-fitness}), where $D\geq1$.

\begin{equation}
fitness^{'}=\frac{fitness}{10^{D}}\label{eq:new-fitness}\end{equation}
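Putting the distance definition and Eq. (\ref{eq:new-fitness}) together, a sketch of the sharing step (function names ours; the pairwise loop mirrors Step 6 of the main procedure):

```python
def distance(c, h):
    """Positionwise (Hamming) distance: the number of positions where
    the two chromosomes carry different types."""
    return sum(1 for a, b in zip(c, h) if a != b)

def share(population, fitnesses, L, D):
    """For every pair closer than L, divide the worse fitness by 10^D,
    so near-duplicates of a better individual rarely survive selection."""
    f = list(fitnesses)
    for a in range(len(population)):
        for b in range(a + 1, len(population)):
            if distance(population[a], population[b]) < L:
                worse = a if f[a] < f[b] else b
                f[worse] /= 10 ** D
    return f
```

Only the worse member of each close pair is penalized, so the best representative of every niche keeps its original fitness.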



\section{Computational Examples}

The algorithm is implemented in C++, and the raw data are shown in Tab. \ref{tab:Processing-times} and Tab. \ref{tab:Setup-times}, where the processing times ($p_{i,m}$) are randomly generated from a uniform distribution $U(1,8)$, and the setup times ($s_{k,l,m}$) from $U(1,3)$, with $s_{k,k,m}=0$.

%
\begin{table}[H]
\caption{Processing times\label{tab:Processing-times}}


\medskip{}


\begin{centering}
\begin{tabular}{ccccc}
\hline 
$p_{i,m}$ & $i=1$ & $i=2$ & $i=3$ & $i=4$ \tabularnewline
\hline 
$m=1$ & 5 & 4 & 6 & 8\tabularnewline
$m=2$ & 3 & 7 & 2 & 5\tabularnewline
\hline
\end{tabular}
\par\end{centering}


\end{table}


%
\begin{table}[H]
\caption{Setup times\label{tab:Setup-times}}


\medskip{}


\centering{}\begin{tabular}{cccccc}
\hline 
\multicolumn{2}{c}{$s_{k,l,m}$} 		& $l=1$  & $l=2$ & $l=3$ & $l=4$ \tabularnewline
\hline 
    $m=1$  			& $k=1$ & 0 & 2 & 3 & 1\tabularnewline
                          	& $k=2$ & 1 & 0 & 2 & 1\tabularnewline
						  	& $k=3$ & 3 & 1 & 0 & 1\tabularnewline
						  	& $k=4$ & 1 & 2 & 1 & 0\tabularnewline
\hline 
	$m=2$		 		& $k=1$ & 0 & 1 & 2 & 1\tabularnewline
 							& $k=2$ & 1 & 0 & 1 & 1\tabularnewline
 							& $k=3$ & 1 & 3 & 0 & 1\tabularnewline
 							& $k=4$ & 1 & 2 & 1 & 0\tabularnewline
\hline
\end{tabular}
\end{table}



\subsection{Case with Small Scale}


\subsubsection{$n=10$}

There are 10 jobs to be processed, and the quantities of each type are shown in Tab. \ref{tab:Original-demand-10}. The common due window is set to {[}50, 80{]}, with $\alpha=0.4$ and $\beta=0.6$. The search space size is 25,200.

%
\begin{table}[H]


\caption{Original demand ($n=10$)\label{tab:Original-demand-10}}


\begin{centering}
\medskip{}

\par\end{centering}

\centering{}\begin{tabular}{ccccc}
\hline 
 & $i=1$ & $i=2$ & $i=3$ & $i=4$ \tabularnewline
\hline 
$Q_{i}$ & 2 & 2 & 3 & 3\tabularnewline
\hline
\end{tabular}
\end{table}


Parameters and results of the algorithm are illustrated in Fig. \ref{fig:minimum-W-of}. The $W(\sigma)$ of the best individual in each generation converges to 44.8 after about 75 generations, which consumed about 1.5 seconds on a computer with a 2 GHz CPU. An exhaustive search, which can evaluate about 25,600 candidates of this scale per second on the same computer, also found several global optimal solutions with the same $W(\sigma)$ of 44.8 in less than 1 second.

%
\begin{figure}[H]
\begin{centering}
\includegraphics[scale=0.7]{NicheGA_10_500G_44_8.png}
\par\end{centering}

\caption{minimum $W(\sigma)$ of each generation ($n=10$,$K=4$)\label{fig:minimum-W-of}}


\begin{centering}
{\footnotesize $b=50$, $e=80$, $\alpha=0.4$, $\beta=0.6$, }
\par\end{centering}{\footnotesize \par}

\begin{centering}
{\footnotesize $G=500$, $POP\_SIZE=200$, $P_{c}=0.9$, $P_{m}=0.2$, }
\par\end{centering}{\footnotesize \par}

\begin{centering}
{\footnotesize $L=6$, $D=3$, $Niche\_N=10$,}
\par\end{centering}{\footnotesize \par}

\centering{}{\footnotesize $\min W(\sigma)=44.8$, global optimal
$\sigma=3,4,3,2,2,3,1,1,4,4$ }
\end{figure}



\subsubsection{$n=12$}

A 12-job instance with the quantities shown in Tab. \ref{tab:Original-demand-(12)} is solved in a similar way; its search space size is 369,600. An exhaustive search consumed about 17 seconds and found the optimal solution with minimum $W(\sigma)=44.4$. Meanwhile, the NGA converged to the best solution of 44.4 after about 85 generations, consuming 2 seconds. Parameters and results are shown in Fig. \ref{fig:minimum-W-of_12}.

%
\begin{table}[H]
\caption{Original demand ($n=12$)\label{tab:Original-demand-(12)}}


\medskip{}


\begin{centering}
\begin{tabular}{ccccc}
\hline 
 & $i=1$ & $i=2$ & $i=3$ & $i=4$\tabularnewline
\hline 
$Q_{i}$ & 3 & 3 & 3 & 3\tabularnewline
\hline
\end{tabular}
\par\end{centering}


\end{table}


%
\begin{figure}[H]
\begin{centering}
\includegraphics[scale=0.6]{NGA_12_200_500_44.png}
\par\end{centering}

\caption{minimum $W(\sigma)$ of each generation ($n=12$, $K=4$)\label{fig:minimum-W-of_12}}


\begin{centering}
{\footnotesize $b=70$, $e=80$, $\alpha=0.4$, $\beta=0.6$, }
\par\end{centering}{\footnotesize \par}

\begin{centering}
{\footnotesize $G=500$, $POP\_SIZE=200$, $P_{c}=0.9$, $P_{m}=0.2$, }
\par\end{centering}{\footnotesize \par}

\begin{centering}
{\footnotesize $L=8$, $D=3$, $Niche\_N=10$, }
\par\end{centering}{\footnotesize \par}

\begin{centering}
{\footnotesize $\min W(\sigma)=44.4$, global optimal $\sigma=3,4,3,2,2,2,3,1,4,4,1,1$}
\par\end{centering}{\footnotesize \par}


\end{figure}



\subsection{Case with Medium Scale}

20 jobs are to be processed, and the quantities are shown in Tab. \ref{tab:Original-demand-(20)}. The search space size is up to $6.5\times10^{9}$ (6,518,191,680); an exhaustive search would consume about 7.5 days on the computer mentioned above, which can evaluate about 10,000 candidates of this scale per second.

%
\begin{table}[H]


\caption{Original demand ($n=20$)\label{tab:Original-demand-(20)}}


\begin{centering}
\begin{tabular}{ccccc}
\hline 
 & $i=1$ & $i=2$ & $i=3$ & $i=4$\tabularnewline
\hline 
$Q_{i}$ & 6 & 6 & 5 & 3\tabularnewline
\hline
\end{tabular}
\par\end{centering}


\end{table}


Parameters and results are shown in Fig. \ref{fig:minimum-W-20}. The minimum $W(\sigma)$ converges to 151.8 after about 160 generations, consuming 75 seconds.

%
\begin{figure}[H]
\begin{centering}
\includegraphics[scale=0.6]{NGA_20_1500.png}
\par\end{centering}

\caption{minimum $W(\sigma)$ of generations ($n=20$, $K=4$)\label{fig:minimum-W-20}}


\begin{centering}
{\footnotesize $b=80$, $e=90$, $\alpha=0.4$, $\beta=0.6$, }
\par\end{centering}{\footnotesize \par}

\begin{centering}
{\footnotesize $G=1500$, $POP\_SIZE=1200$, $P_{c}=0.9$, $P_{m}=0.1$, }
\par\end{centering}{\footnotesize \par}

\begin{centering}
{\footnotesize $L=13$, $D=3$, $Niche\_N=60$, }
\par\end{centering}{\footnotesize \par}

\begin{centering}
{\footnotesize $\min W(\sigma)=151.8$,}
\par\end{centering}{\footnotesize \par}

\begin{centering}
{\footnotesize $\sigma=$3,4,3,2,3,2,3,2,2,2,2,4,4,1,1,1,1,1,1,3}
\par\end{centering}{\footnotesize \par}


\end{figure}



\subsection{Case with Large Scale}

50 jobs are to be processed, and the quantities are shown in Tab. \ref{tab:Original-demand-50}. The common due window is set to {[}70, 100{]}, with $\alpha=0.4$ and $\beta=0.6$. The search space size is up to $1.35\times10^{27}$; an exhaustive search would take about $7.65\times10^{15}$ years on the computer mentioned above, which can evaluate about 5,500 candidates of this scale per second.

%
\begin{table}[H]
\caption{Original demand ($n=50$)\label{tab:Original-demand-50}}


\begin{centering}
\medskip{}

\par\end{centering}

\centering{}\begin{tabular}{ccccc}
\hline 
 & $i=1$ & $i=2$ & $i=3$ & $i=4$ \tabularnewline
\hline 
$Q_{i}$ & 15 & 15 & 10 & 10\tabularnewline
\hline
\end{tabular}
\end{table}


Parameters and results of the algorithm are shown in Fig. \ref{fig:minimum-W-of-50}. The $W(\sigma)$ of the best individual in each generation converges to 936 after about 450 generations, which took about 9 minutes on the computer.

%
\begin{figure}[H]
\begin{centering}
\includegraphics[scale=0.7]{NGA_50_1500_930.png}
\par\end{centering}

\caption{minimum $W(\sigma)$ of each generation ($n=50$, $K=4$)\label{fig:minimum-W-of-50}}


\begin{centering}
{\footnotesize $b=70$, $e=100$, $\alpha=0.4$, $\beta=0.6$, }
\par\end{centering}{\footnotesize \par}

\begin{centering}
{\footnotesize $G=1500$, $POP\_SIZE=1200$, $P_{c}=0.9$, $P_{m}=0.1$, }
\par\end{centering}{\footnotesize \par}

\begin{centering}
{\footnotesize $L=30$, $D=3$, $Niche\_N=60$, }
\par\end{centering}{\footnotesize \par}

\begin{centering}
{\footnotesize $\min W(\sigma)=936$,}
\par\end{centering}{\footnotesize \par}

\begin{centering}
{\footnotesize $\sigma=$2,2,2,2,2,2,2,2,2,2,2,4,4,4,4,1,}
\par\end{centering}{\footnotesize \par}

\begin{centering}
{\footnotesize 1,1,1,1,1,1,1,1,1,1,1,1,1,1,2,2,2,}
\par\end{centering}{\footnotesize \par}

\centering{}{\footnotesize 4,3,3,3,3,3,3,3,3,3,2,4,4,4,4,3,4}
\end{figure}



\subsection{Discussion}

The computational results show that the NGA successfully found the global optimal solutions for small-scale cases. A comparison between the times consumed by the exhaustive method and the NGA is shown in Tab. \ref{Flo:Tab.Comparison}, where $T_{E}$ refers to the time consumed by the exhaustive method, $T_{NGA}$ to that of the NGA, and $\eta=\frac{T_{E}}{T_{NGA}}$ represents the efficiency relative to the exhaustive method. The advantage of the NGA is not obvious for very small cases, but for medium- and large-scale cases the NGA shows its efficiency distinctly.

%
\begin{table}[H]
\caption{Efficiency Comparison between NGA and exhaustion}
\label{Flo:Tab.Comparison}

\medskip{}


\centering{}\begin{tabular}{cccc}
\hline 
Scale & $T_{E}$/s & $T_{NGA}$/s & $\eta$\tabularnewline
\hline
$n=10$ & 1 & 1.5 & 0.67\tabularnewline
$n=12$ & 17 & 2 & 8.5\tabularnewline
$n=20$ & $6.5\times10^{5}$ & 75 & 8700\tabularnewline
$n=50$ & $1.23\times10^{24}$ & 540 & $2.3\times10^{21}$\tabularnewline
\hline
\end{tabular}
\end{table}



\section{Conclusion}

In this paper we consider the two-machine flow shop scheduling problem with sequence-dependent setup times to minimize the weighted sum of earliness and tardiness with respect to a common due window, denoted $F2|s_{k,l,m}|W(\sigma)$, and prove that it is NP-hard. A niche genetic algorithm is developed for it based on a definition of the distance between two solutions, and the computational results show that the algorithm is efficient, especially for medium- and large-scale problems.

However, the influence of the ratio between setup times and processing times on $W(\sigma)$ is not discussed in this paper, nor is the influence of the common due window's size and location. Future research can analyse the influence of these factors on $W(\sigma)$ and propose detailed policies for specific cases. Another direction for future work is improving this NGA or developing more efficient algorithms for $F2|s_{k,l,m}|W(\sigma)$.


\section*{Acknowledgement}

This study is supported by the 863 High Technology Plan Foundation of China under grant No. 2007AA04Z186.


\bibliography{Reference.bib}
\end{document}