
%% bare_conf.tex
%% V1.3
%% 2007/01/11
%% by Michael Shell
%% See:
%% http://www.michaelshell.org/
%% for current contact information.
%%
%% This is a skeleton file demonstrating the use of IEEEtran.cls
%% (requires IEEEtran.cls version 1.7 or later) with an IEEE conference paper.
%%
%% Support sites:
%% http://www.michaelshell.org/tex/ieeetran/
%% http://www.ctan.org/tex-archive/macros/latex/contrib/IEEEtran/
%% and
%% http://www.ieee.org/

%%*************************************************************************
%% Legal Notice:
%% This code is offered as-is without any warranty either expressed or
%% implied; without even the implied warranty of MERCHANTABILITY or
%% FITNESS FOR A PARTICULAR PURPOSE! 
%% User assumes all risk.
%% In no event shall IEEE or any contributor to this code be liable for
%% any damages or losses, including, but not limited to, incidental,
%% consequential, or any other damages, resulting from the use or misuse
%% of any information contained here.
%%
%% All comments are the opinions of their respective authors and are not
%% necessarily endorsed by the IEEE.
%%
%% This work is distributed under the LaTeX Project Public License (LPPL)
%% ( http://www.latex-project.org/ ) version 1.3, and may be freely used,
%% distributed and modified. A copy of the LPPL, version 1.3, is included
%% in the base LaTeX documentation of all distributions of LaTeX released
%% 2003/12/01 or later.
%% Retain all contribution notices and credits.
%% ** Modified files should be clearly indicated as such, including  **
%% ** renaming them and changing author support contact information. **
%%
%% File list of work: IEEEtran.cls, IEEEtran_HOWTO.pdf, bare_adv.tex,
%%                    bare_conf.tex, bare_jrnl.tex, bare_jrnl_compsoc.tex
%%*************************************************************************

% *** Authors should verify (and, if needed, correct) their LaTeX system  ***
% *** with the testflow diagnostic prior to trusting their LaTeX platform ***
% *** with production work. IEEE's font choices can trigger bugs that do  ***
% *** not appear when using other class files.                            ***
% The testflow support page is at:
% http://www.michaelshell.org/tex/testflow/



% Note that the a4paper option is mainly intended so that authors in
% countries using A4 can easily print to A4 and see how their papers will
% look in print - the typesetting of the document will not typically be
% affected with changes in paper size (but the bottom and side margins will).
% Use the testflow package mentioned above to verify correct handling of
% both paper sizes by the user's LaTeX system.
%
% Also note that the "draftcls" or "draftclsnofoot", not "draft", option
% should be used if it is desired that the figures are to be displayed in
% draft mode.
%
\documentclass[10pt, conference]{IEEEtran}
% Add the compsoc option for Computer Society conferences.
%
% If IEEEtran.cls has not been installed into the LaTeX system files,
% manually specify the path to it like:
% \documentclass[conference]{../sty/IEEEtran}





% Some very useful LaTeX packages include:
% (uncomment the ones you want to load)


% *** MISC UTILITY PACKAGES ***
%
%\usepackage{ifpdf}
% Heiko Oberdiek's ifpdf.sty is very useful if you need conditional
% compilation based on whether the output is pdf or dvi.
% usage:
% \ifpdf
%   % pdf code
% \else
%   % dvi code
% \fi
% The latest version of ifpdf.sty can be obtained from:
% http://www.ctan.org/tex-archive/macros/latex/contrib/oberdiek/
% Also, note that IEEEtran.cls V1.7 and later provides a builtin
% \ifCLASSINFOpdf conditional that works the same way.
% When switching from latex to pdflatex and vice-versa, the compiler may
% have to be run twice to clear warning/error messages.






% *** CITATION PACKAGES ***
%
%\usepackage{cite}
% cite.sty was written by Donald Arseneau
% V1.6 and later of IEEEtran pre-defines the format of the cite.sty package
% \cite{} output to follow that of IEEE. Loading the cite package will
% result in citation numbers being automatically sorted and properly
% "compressed/ranged". e.g., [1], [9], [2], [7], [5], [6] without using
% cite.sty will become [1], [2], [5]--[7], [9] using cite.sty. cite.sty's
% \cite will automatically add leading space, if needed. Use cite.sty's
% noadjust option (cite.sty V3.8 and later) if you want to turn this off.
% cite.sty is already installed on most LaTeX systems. Be sure and use
% version 4.0 (2003-05-27) and later if using hyperref.sty. cite.sty does
% not currently provide for hyperlinked citations.
% The latest version can be obtained at:
% http://www.ctan.org/tex-archive/macros/latex/contrib/cite/
% The documentation is contained in the cite.sty file itself.






% *** GRAPHICS RELATED PACKAGES ***
%
\ifCLASSINFOpdf
  \usepackage[pdftex]{graphicx}
  % declare the path(s) where your graphic files are
  % \graphicspath{{../pdf/}{../jpeg/}}
  % and their extensions so you won't have to specify these with
  % every instance of \includegraphics
  % \DeclareGraphicsExtensions{.pdf,.jpeg,.png}
\else
  % or other class option (dvipsone, dvipdf, if not using dvips). graphicx
  % will default to the driver specified in the system graphics.cfg if no
  % driver is specified.
  % \usepackage[dvips]{graphicx}
  % declare the path(s) where your graphic files are
  % \graphicspath{{../eps/}}
  % and their extensions so you won't have to specify these with
  % every instance of \includegraphics
  % \DeclareGraphicsExtensions{.eps}
\fi
% graphicx was written by David Carlisle and Sebastian Rahtz. It is
% required if you want graphics, photos, etc. graphicx.sty is already
% installed on most LaTeX systems. The latest version and documentation can
% be obtained at: 
% http://www.ctan.org/tex-archive/macros/latex/required/graphics/
% Another good source of documentation is "Using Imported Graphics in
% LaTeX2e" by Keith Reckdahl which can be found as epslatex.ps or
% epslatex.pdf at: http://www.ctan.org/tex-archive/info/
%
% latex, and pdflatex in dvi mode, support graphics in encapsulated
% postscript (.eps) format. pdflatex in pdf mode supports graphics
% in .pdf, .jpeg, .png and .mps (metapost) formats. Users should ensure
% that all non-photo figures use a vector format (.eps, .pdf, .mps) and
% not bitmapped formats (.jpeg, .png). IEEE frowns on bitmapped formats
% which can result in "jaggedy"/blurry rendering of lines and letters as
% well as large increases in file sizes.
%
% You can find documentation about the pdfTeX application at:
% http://www.tug.org/applications/pdftex


\usepackage{textcomp}


% *** MATH PACKAGES ***
%
%\usepackage[cmex10]{amsmath}
% A popular package from the American Mathematical Society that provides
% many useful and powerful commands for dealing with mathematics. If using
% it, be sure to load this package with the cmex10 option to ensure that
% only type 1 fonts will utilized at all point sizes. Without this option,
% it is possible that some math symbols, particularly those within
% footnotes, will be rendered in bitmap form which will result in a
% document that can not be IEEE Xplore compliant!
%
% Also, note that the amsmath package sets \interdisplaylinepenalty to 10000
% thus preventing page breaks from occurring within multiline equations. Use:
%\interdisplaylinepenalty=2500
% after loading amsmath to restore such page breaks as IEEEtran.cls normally
% does. amsmath.sty is already installed on most LaTeX systems. The latest
% version and documentation can be obtained at:
% http://www.ctan.org/tex-archive/macros/latex/required/amslatex/math/





% *** SPECIALIZED LIST PACKAGES ***
%
%\usepackage{algorithmic}
% algorithmic.sty was written by Peter Williams and Rogerio Brito.
% This package provides an algorithmic environment for describing algorithms.
% You can use the algorithmic environment in-text or within a figure
% environment to provide for a floating algorithm. Do NOT use the algorithm
% floating environment provided by algorithm.sty (by the same authors) or
% algorithm2e.sty (by Christophe Fiorio) as IEEE does not use dedicated
% algorithm float types and packages that provide these will not provide
% correct IEEE style captions. The latest version and documentation of
% algorithmic.sty can be obtained at:
% http://www.ctan.org/tex-archive/macros/latex/contrib/algorithms/
% There is also a support site at:
% http://algorithms.berlios.de/index.html
% Also of interest may be the (relatively newer and more customizable)
% algorithmicx.sty package by Szasz Janos:
% http://www.ctan.org/tex-archive/macros/latex/contrib/algorithmicx/




% *** ALIGNMENT PACKAGES ***
%
%\usepackage{array}
% Frank Mittelbach's and David Carlisle's array.sty patches and improves
% the standard LaTeX2e array and tabular environments to provide better
% appearance and additional user controls. As the default LaTeX2e table
% generation code is lacking to the point of almost being broken with
% respect to the quality of the end results, all users are strongly
% advised to use an enhanced (at the very least that provided by array.sty)
% set of table tools. array.sty is already installed on most systems. The
% latest version and documentation can be obtained at:
% http://www.ctan.org/tex-archive/macros/latex/required/tools/


%\usepackage{mdwmath}
%\usepackage{mdwtab}
% Also highly recommended is Mark Wooding's extremely powerful MDW tools,
% especially mdwmath.sty and mdwtab.sty which are used to format equations
% and tables, respectively. The MDWtools set is already installed on most
% LaTeX systems. The latest version and documentation is available at:
% http://www.ctan.org/tex-archive/macros/latex/contrib/mdwtools/


% IEEEtran contains the IEEEeqnarray family of commands that can be used to
% generate multiline equations as well as matrices, tables, etc., of high
% quality.


%\usepackage{eqparbox}
% Also of notable interest is Scott Pakin's eqparbox package for creating
% (automatically sized) equal width boxes - aka "natural width parboxes".
% Available at:
% http://www.ctan.org/tex-archive/macros/latex/contrib/eqparbox/





% *** SUBFIGURE PACKAGES ***
%\usepackage[tight,footnotesize]{subfigure}
% subfigure.sty was written by Steven Douglas Cochran. This package makes it
% easy to put subfigures in your figures. e.g., "Figure 1a and 1b". For IEEE
% work, it is a good idea to load it with the tight package option to reduce
% the amount of white space around the subfigures. subfigure.sty is already
% installed on most LaTeX systems. The latest version and documentation can
% be obtained at:
% http://www.ctan.org/tex-archive/obsolete/macros/latex/contrib/subfigure/
% subfigure.sty has been superseded by subfig.sty.



%\usepackage[caption=false]{caption}
%\usepackage[font=footnotesize]{subfig}
% subfig.sty, also written by Steven Douglas Cochran, is the modern
% replacement for subfigure.sty. However, subfig.sty requires and
% automatically loads Axel Sommerfeldt's caption.sty which will override
% IEEEtran.cls handling of captions and this will result in nonIEEE style
% figure/table captions. To prevent this problem, be sure and preload
% caption.sty with its "caption=false" package option. This will preserve
% IEEEtran.cls handling of captions. Version 1.3 (2005/06/28) and later
% (recommended due to many improvements over 1.2) of subfig.sty supports
% the caption=false option directly:
%\usepackage[caption=false,font=footnotesize]{subfig}
%
% The latest version and documentation can be obtained at:
% http://www.ctan.org/tex-archive/macros/latex/contrib/subfig/
% The latest version and documentation of caption.sty can be obtained at:
% http://www.ctan.org/tex-archive/macros/latex/contrib/caption/




% *** FLOAT PACKAGES ***
%
%\usepackage{fixltx2e}
% fixltx2e, the successor to the earlier fix2col.sty, was written by
% Frank Mittelbach and David Carlisle. This package corrects a few problems
% in the LaTeX2e kernel, the most notable of which is that in current
% LaTeX2e releases, the ordering of single and double column floats is not
% guaranteed to be preserved. Thus, an unpatched LaTeX2e can allow a
% single column figure to be placed prior to an earlier double column
% figure. The latest version and documentation can be found at:
% http://www.ctan.org/tex-archive/macros/latex/base/



%\usepackage{stfloats}
% stfloats.sty was written by Sigitas Tolusis. This package gives LaTeX2e
% the ability to do double column floats at the bottom of the page as well
% as the top. (e.g., "\begin{figure*}[!b]" is not normally possible in
% LaTeX2e). It also provides a command:
%\fnbelowfloat
% to enable the placement of footnotes below bottom floats (the standard
% LaTeX2e kernel puts them above bottom floats). This is an invasive package
% which rewrites many portions of the LaTeX2e float routines. It may not work
% with other packages that modify the LaTeX2e float routines. The latest
% version and documentation can be obtained at:
% http://www.ctan.org/tex-archive/macros/latex/contrib/sttools/
% Documentation is contained in the stfloats.sty comments as well as in the
% presfull.pdf file. Do not use the stfloats baselinefloat ability as IEEE
% does not allow \baselineskip to stretch. Authors submitting work to the
% IEEE should note that IEEE rarely uses double column equations and
% that authors should try to avoid such use. Do not be tempted to use the
% cuted.sty or midfloat.sty packages (also by Sigitas Tolusis) as IEEE does
% not format its papers in such ways.





% *** PDF, URL AND HYPERLINK PACKAGES ***
%
%\usepackage{url}
% url.sty was written by Donald Arseneau. It provides better support for
% handling and breaking URLs. url.sty is already installed on most LaTeX
% systems. The latest version can be obtained at:
% http://www.ctan.org/tex-archive/macros/latex/contrib/misc/
% Read the url.sty source comments for usage information. Basically,
% \url{my_url_here}.





% *** Do not adjust lengths that control margins, column widths, etc. ***
% *** Do not use packages that alter fonts (such as pslatex).         ***
% There should be no need to do such things with IEEEtran.cls V1.6 and later.
% (Unless specifically asked to do so by the journal or conference you plan
% to submit to, of course. )


% correct bad hyphenation here
\hyphenation{op-tical net-works semi-conduc-tor}


\begin{document}
\title{Parallelism vs Speculation: Exploiting the Genetic Algorithm on GPU}


% author names and affiliations
% use a multiple column layout for up to two different
% affiliations

% \author{\IEEEauthorblockN{Authors Name/s per 1st Affiliation (Author)}
% \IEEEauthorblockA{line 1 (of Affiliation): dept. name of organization\\
% line 2: name of organization, acronyms acceptable\\
% line 3: City, Country\\
% line 4: Email: name@xyz.com}
% \and
% \IEEEauthorblockN{Authors Name/s per 2nd Affiliation (Author)}
% \IEEEauthorblockA{line 1 (of Affiliation): dept. name of organization\\
% line 2: name of organization, acronyms acceptable\\
% line 3: City, Country\\
% line 4: Email: name@xyz.com}
% }

% conference papers do not typically use \thanks and this command
% is locked out in conference mode. If really needed, such as for
% the acknowledgment of grants, issue a \IEEEoverridecommandlockouts
% after \documentclass

% for over three affiliations, or if they all won't fit within the width
% of the page, use this alternative format:
% 
%\author{\IEEEauthorblockN{Michael Shell\IEEEauthorrefmark{1},
%Homer Simpson\IEEEauthorrefmark{2},
%James Kirk\IEEEauthorrefmark{3}, 
%Montgomery Scott\IEEEauthorrefmark{3} and
%Eldon Tyrell\IEEEauthorrefmark{4}}
%\IEEEauthorblockA{\IEEEauthorrefmark{1}School of Electrical and Computer Engineering\\
%Georgia Institute of Technology,
%Atlanta, Georgia 30332--0250\\ Email: see http://www.michaelshell.org/contact.html}
%\IEEEauthorblockA{\IEEEauthorrefmark{2}Twentieth Century Fox, Springfield, USA\\
%Email: homer@thesimpsons.com}
%\IEEEauthorblockA{\IEEEauthorrefmark{3}Starfleet Academy, San Francisco, California 96678-2391\\
%Telephone: (800) 555--1212, Fax: (888) 555--1212}
%\IEEEauthorblockA{\IEEEauthorrefmark{4}Tyrell Inc., 123 Replicant Street, Los Angeles, California 90210--4321}}

\author{\IEEEauthorblockN{Long Zheng\IEEEauthorrefmark{1},
Yanchao Lu\IEEEauthorrefmark{2},
Jingyu Zhou\IEEEauthorrefmark{2},
Minyi Guo\IEEEauthorrefmark{2} and
Song Guo\IEEEauthorrefmark{1}}
\IEEEauthorblockA{\IEEEauthorrefmark{1}School of Computer Science and Engineering\\
The University of Aizu, Aizu-wakamatsu, 965-8580, Japan}
\IEEEauthorblockA{\IEEEauthorrefmark{2}Department of Computer Science and Engineering\\
Shanghai Jiao Tong University, Shanghai, 200240, China}
Email: d8112104@u-aizu.ac.jp
}



% use for special paper notices
%\IEEEspecialpapernotice{(Invited Paper)}




% make the title area
\maketitle


\begin{abstract}
As GPUs' stunning computation capacity attracts much attention from industry and academia, the number of integrated cores increases rapidly.
The newest GTX 590 has up to 1024 cores \cite{gtx590}. Such a large number of cores means high parallelism and powerful computation capacity. Applications on GPUs today focus on the performance improvement provided by massive parallelism; however, with redundant computation resources available, some algorithms, such as the Genetic Algorithm (GA), have an alternative way to further improve their performance. In this work, based on the philosophy and fundamentals of GAs, we propose a speculation approach that uses the redundant cores to improve the performance of parallel GA applications on GPUs. Our theoretical analysis indicates that the speculation approach should improve the performance of GA applications. Experimental results confirm our analysis: the speculation approach outperforms the traditional parallelism approach in both speedup and result precision.




\end{abstract}

\begin{IEEEkeywords}
GPU; speculation; genetic algorithm; performance evaluation

\end{IEEEkeywords}


% For peer review papers, you can put extra information on the cover
% page as needed:
% \ifCLASSOPTIONpeerreview
% \begin{center} \bfseries EDICS Category: 3-BBND \end{center}
% \fi
%
% For peerreview papers, this IEEEtran command inserts a page break and
% creates the second title. It will be ignored for other modes.
\IEEEpeerreviewmaketitle



\section{Introduction}
Compared to traditional multi-core processors, GPUs offer dramatic computation power because of their massively parallel architecture. The number of integrated cores is a key factor in GPU performance. From the GeForce 8800 GT with 112 cores to the newest GTX 590 with 1024 cores took only about four years. Besides the number of cores, the GPU architecture itself evolves quickly: GPU manufacturers strive to hide more and more hardware details so that programmers can write GPU code more easily.

Compared to its predecessor, the Tesla architecture, the Fermi architecture introduces an L2 cache shared by Streaming Multiprocessors (SMs) and a configurable L1 cache shared by the cores within an SM. As a result, even if programmers do not know the details of the GPU architecture, such as the relationship between threads and registers, warp occupancy, and tiling, Fermi GPUs still offer good performance, sometimes even better than a finely optimized implementation on a Tesla GPU.
%With our program experience, most of time, the performance of an application running on Fermi GPU with 48KB L1 Cache, 16KB Shared Memory is better than the one with 16KB L1 Cache, 48KB Shared Memory, also better than the 

In the predictable future, more cores will be integrated, supporting even higher parallelism. However, GPU hardware is now a little ahead of computation needs; that is, it offers redundant computation resources. The latest CUDA 4.0, which simplifies the management of multiple GPUs, increases this redundancy further. The newest Fermi architecture allows multiple kernels to run on the GPU simultaneously, so different applications can share the GPU's computation resources; this can be considered one way to use the redundant resources. Therefore, how to use the many cores of a GPU efficiently is now more critical for GPU computing than fine-grained optimization tuned to the GPU architecture.
\begin{table*}[t]
\caption{An example of a GA application}
\label{table:example}
\centering
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|}
\hline
 & 1st & 2nd & 3rd & 4th & 5th & 6th & 7th & 8th & 9th & 10th \\
\hline
Optimal Result &7052.47 & 7054.28 & 7051.95 & 7051.92 & 7054.81 & 7051.94 & 7053.22 & 7052.79 & 7053.95 & 7052.17 \\
%\hline
%Generation  & 26797 & 37451 & 48764 & 30495 & 49156 & 40957 & 31371 & 46231 & 43445 & 42131\\
\hline
Execution Time  & 7858.97 & 10933.39 & 14267.79 & 8943.80 & 14337.15 & 11986.93 & 9177.28 & 13527.16 & 12718.70 & 12301.07 \\ 
\hline
\end{tabular}
\end{table*}

A Genetic Algorithm (GA) is a search heuristic inspired by natural evolutionary biology, including inheritance, mutation, selection, and crossover \cite{ipdpsZden}. A GA can effectively find approximate solutions to optimization and search problems in an acceptable amount of time, so it has been successfully used in business, engineering, and science \cite{ipdpsBeham, infocomMarkham, AAAAILahiri,cad}. However, because a GA uses a huge number of individuals composing a population to search for probable solutions over many generations of evolution, GA applications consume a lot of computation capacity, and improvements in result accuracy and execution speed depend heavily on advances in parallel computing. With the emergence of GPUs, GA researchers immediately focused on this new massively parallel architecture. Many applications have been ported from clusters to GPUs and achieve tens or hundreds of times speedup.

Previous research mainly concentrates on exploiting the massive parallelism of GPUs with the traditional parallel GA approach \cite{GECCOLuong,CECVidal,CECArora}. However, the redundancy of the computation resources provided by GPUs has not been considered seriously enough. In this paper, we begin with a simple fact that we noticed while implementing GA applications. Inspired by this fact, we propose a novel speculation methodology to use the redundant computation resources of GPUs more effectively, and we analyze and predict its performance theoretically. We take a classic engineering problem solved by a GA as a case study to evaluate our speculation methodology. Experimental results show that the speculation methodology uses GPU computation resources better than the traditional parallelism methodology: the GA with speculation on GPUs improves both result accuracy and execution speedup.

Our work offers an alternative way, speculation rather than parallelism, to use a GPU's computation resources. This idea is not limited to the GA field: speculation should be effective for any algorithm based on search with random candidates, for example, evolutionary algorithms, neural networks, machine learning, and so on. We exploit a new perspective on using a GPU's powerful computation capacity to obtain further improvement.

The remainder of this paper is structured as follows. Section \ref{sec:2} presents the motivation of our work: a fact about GAs that we noticed while implementing GA applications, and the philosophy of GAs behind it. We describe our speculation approach and give a theoretical analysis in Section \ref{sec:3}. Experiments and results are presented in Section \ref{sec:4}. Section \ref{sec:5} summarizes our findings and points out future work.


\section{A Fact: Our Motivation}
\label{sec:2}
In nature, the lifetime of individuals is a procedure in which they compete with each other and adapt to the environment. Only the strongest survive the tough environment. The surviving individuals mate randomly and reproduce the next generation. During reproduction, mutation always occurs, which can make the next generation fitter for the tough environment.

GAs are heuristic search algorithms that mimic the natural selection and evolution described above. The problem a GA aims to solve plays the role of the tough environment, and each individual in the GA's population is a candidate solution to the problem.

A generation of a GA is composed of the following steps: {\em fitness computation}, {\em selection}, {\em crossover}, and {\em mutation}. The {\em fitness computation} corresponds to the competition between individuals; it tells how well each individual satisfies the constraints of the GA. The {\em selection} step chooses the good individuals to survive and eliminates the bad ones. The {\em crossover} step mates two individuals to reproduce the next generation's individuals, and {\em mutation} follows {\em crossover} so that the next generation retains biological diversity. With enough generations, a GA can evolve the best individual, which is the optimal solution to the problem.
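As a concrete sketch, one generation of the steps above might look like the following Python fragment; the survivor fraction, mutation rate, and operator names are our own illustrative assumptions, not the paper's exact operators:

```python
import random

def evolve_one_generation(population, fitness, crossover, mutate,
                          survivor_frac=0.5, mutation_rate=0.05):
    # Fitness computation: rank every candidate solution.
    ranked = sorted(population, key=fitness, reverse=True)
    # Selection: keep the fittest fraction, eliminate the rest.
    survivors = ranked[:max(2, int(len(population) * survivor_frac))]
    # Crossover: mate random pairs of survivors to refill the population.
    children = []
    while len(survivors) + len(children) < len(population):
        a, b = random.sample(survivors, 2)
        children.append(crossover(a, b))
    # Mutation: perturb a fraction of the offspring for diversity.
    children = [mutate(c) if random.random() < mutation_rate else c
                for c in children]
    return survivors + children
```

Here a problem supplies its own `fitness`, `crossover`, and `mutate` callables; on a GPU each of these steps would run across many threads in parallel rather than sequentially.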

While implementing GA applications, we noticed a fact: if we use a GA to solve a problem several times, we can hardly get exactly the same results with the same configuration. Table \ref{table:example} shows the performance of an example GA application that solves \ldots. In this problem, a smaller result means higher accuracy. We ran this GA application 10 times with a maximum of 50000 generations, and recorded the best results as well as the time the application took to reach them.

From Table \ref{table:example}, we can easily see that the results of GAs are unstable. For example, the best results of the 3rd and 4th runs are almost the same, yet the times they take to reach those results are quite different. Moreover, although the times of the 3rd and 5th runs are almost the same, the 3rd run achieves the highest accuracy while the 5th run achieves the lowest.

There are two reasons for this instability. Reason I is that, as in natural evolution, mating and mutation are highly random; a small difference in the mating or mutation process leads to a totally different evolution track. Reason II is that the population may evolve into a trap that is hard to jump out of, which leads the GA to a bad result. Both mirror species evolution in nature: there are millions of species because they evolved along different tracks, while many species became extinct because they were trapped and evolved into a dead end.




Although we can set rules for mating and mutation to improve GA performance, we cannot manipulate the mating and mutation process itself, because it depends on randomness; all we can do is accept the results the GA obtains. This is, in fact, a fundamental property of GAs. Biologists, philosophers, and even theologians have long debated Darwin's theory, one question being: ``if all species on earth evolved again, would our world be totally different?'' In GA applications, at least, Darwin is obviously right.

\section{Our Novel Speculation Methodology}
\label{sec:3}
Traditional GAs on GPUs use the Parallel GA (PGA) model to occupy the GPU's cores. In the PGA model, the population is divided into islands, which can be considered sub-populations. The individuals within an island evolve together, and every particular number of generations individuals are exchanged between islands, which is called {\em migration}. The island mechanism is designed to dispatch the individuals of a population to parallel computing devices easily and to reduce the communication overhead between those devices. Compared to the original GA, a PGA reduces the execution time so that many problems can be solved in an acceptable time, at the cost of decreased result accuracy for the same number of generations, which in turn requires more generations.
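The island model and its migration step can be sketched as follows, assuming for simplicity that individuals are represented directly by their fitness values and that migration follows a ring topology (both our own assumptions, not the paper's implementation):

```python
import random

def run_island_pga(islands, evolve, migrate_every, generations, migrants=1):
    # Each island is an independent sub-population; `evolve` advances one
    # island by one generation. Names and rates here are illustrative.
    for gen in range(1, generations + 1):
        islands = [evolve(island) for island in islands]
        if gen % migrate_every == 0:
            # Migration: each island's best `migrants` individuals move to
            # the next island in a ring, replacing that island's worst.
            bests = [sorted(island, reverse=True)[:migrants]
                     for island in islands]
            for i, island in enumerate(islands):
                island.sort()                         # worst individuals first
                island[:migrants] = bests[(i - 1) % len(islands)]
    return islands
```

On a GPU, each island would map to one thread block (and hence one SM), so only the migration step incurs inter-SM communication.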
\begin{figure*}[t]
\centering
\includegraphics[width=0.75\linewidth]{./img/illustration.pdf}
\caption{The Speculation Methodology that Implements GAs on a GPU.}
\label{fig:method}
\end{figure*}

Nevertheless, the PGA model fits the GPU architecture well. Although a GPU has hundreds of cores, they are organized into Streaming Multiprocessors (SMs). Threads running on the cores of one SM share the shared memory and have a synchronization mechanism, while communication between SMs is expensive and should be reduced. The classic implementation of a PGA on a GPU assigns each island to a block of threads, and each block is in turn scheduled onto a particular SM. Since a PGA confines communication between islands to the {\em migration} step, communication between SMs stays low.

Because of a GPU's powerful computation capacity, it offers two or even three orders of magnitude speedup compared to multi-core processors or even clusters. The Fermi architecture, which partially frees programmers from fine-grained optimization, has led many GA applications to be ported to GPUs.

Implementations of GAs on GPUs mostly follow the PGA model \cite{GECCOLuong,CECVidal,CECArora}, which is used to increase the population size. However, the GPU's computation capacity exceeds what such implementations need. In the PGA model on GPUs, each thread represents one individual. The newest GTX 590 GPU can support over 20,000 threads, which means the population on a single GPU can exceed 20,000 individuals. Yet the necessary number of individuals is related to the number of variables of the problem the GA tries to solve; generally, hundreds or thousands of individuals are enough to get a good result within a reasonable number of generations \cite{1998survey}. The computation capacity GPUs provide for GAs is thus clearly redundant.

Therefore, using the redundant computation resources of GPUs efficiently for GAs is now critical. As we found and analyzed in Section \ref{sec:2}, GA results are unstable because of the two reasons above. On every run, we wish our GA application to benefit from, rather than suffer from, these two reasons; however, we cannot control their effect.

We propose a novel methodology based on speculation, so that GA applications are assured to benefit from these two sources of randomness, improving both result accuracy and execution speed. We split the GPU's SMs into groups. Each group maintains its own population for the same problem, independent of the others. The number of islands in each population still depends on the number of SMs in a group. The more groups the SMs of a GPU are split into, the more opportunities we have to obtain a better result.


Fig. \ref{fig:method} illustrates the basic idea of our speculation approach. In Fig. \ref{fig:method}, the GPU has 16 SMs, which are split into four groups. Therefore, four populations can evolve simultaneously, which means we can obtain four speculative results. After the four populations have evolved for a given number of generations, we choose the best among the results the four populations offer. We can also split the 16 SMs into a different number of groups. We introduce the Configuration Parameter (CP) to represent the group configuration; in this example, the CP is 4, which is also the number of speculative results we can obtain. In the new Fermi architecture, an SM consists of 32 cores, while in its predecessor, the Tesla architecture, an SM has only eight cores. Therefore, we place four islands on each SM, which cancels the advantage that the larger number of cores per Fermi SM would otherwise give; with this setting, the Fermi and Tesla architectures compete fairly from the perspective of GA design. When the CP is 1, the configuration degenerates to the traditional PGA on GPUs. The candidate CP values are $2^n$, i.e., 1, 2, 4, and so on, up to the maximum number of blocks a GPU can support. As the value of CP increases, we obtain more speculative results, but the size of each population decreases.
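As a minimal host-side sketch of this idea (the function names and the toy objective are illustrative assumptions, not our GPU kernels), speculation amounts to evolving CP independent populations from a fixed total budget of individuals and keeping the best final result:

```python
import random

def evolve_population(pop_size, generations, objective, rng):
    """Toy stand-in for the island PGA that one group of SMs would run:
    start from the best of pop_size random samples, then hill-climb."""
    best = min((rng.uniform(-1.0, 1.0) for _ in range(pop_size)), key=objective)
    for _ in range(generations):
        child = best + rng.gauss(0.0, 0.1)
        if objective(child) < objective(best):
            best = child
    return best

def speculate(cp, total_individuals, generations, objective, seed=0):
    """Split the fixed budget of individuals into cp independent
    populations (CP speculative results) and return the best of them."""
    rng = random.Random(seed)
    pop_size = total_individuals // cp
    results = [evolve_population(pop_size, generations, objective, rng)
               for _ in range(cp)]
    return min(results, key=objective)

# e.g. minimize x**2 with a budget of 4096 individuals split four ways
best = speculate(cp=4, total_individuals=4096, generations=100,
                 objective=lambda x: x * x)
```

The tradeoff discussed below is visible in the signature: a larger `cp` yields more independent tries but a smaller `pop_size` for each.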

The size of the population is very important for GAs. If the population is too small, the number of candidate solutions is too small, so the GA evolves very slowly and the individuals easily converge to a dead end, leading to a bad result. On the contrary, if the population is too big, the extra individuals do not offer a corresponding performance improvement, which wastes computation resources. Moreover, with the speculation methodology, the number of speculative results is another factor that affects the performance of GAs: more speculative results mean a higher probability of obtaining a better result.

Therefore, if the CP is large, we obtain enough speculative results to benefit from the instability of GA results, but the size of each population may be so small that the individuals in each population become trapped in bad results. Conversely, if the CP is small, the population size is guaranteed, but the effect of speculation is weak; additionally, the population may be so large that precious computing resources are wasted. From the analysis above, keeping the total number of individuals across all populations constant, adjusting the CP should affect the performance of GAs.







%When we use GAs to solve problem, we always wish we can be lucky enough, so that the results is accurate enough and the execution time is as short as possible. The previous work on GAs on GPU always concentrate on how to assign more individuals to GPU cores, and how to optimize the codes to fit GPU architecture, so that the more speedup compared to CPU can be got. The instability of GA results is considered as a side effect.

%In our work, we try to take advantage of the instabliliy of GAs. We split GPU cores into groups. Each group maintains a population of the GA, which is independent with each other for the same problem. The more groups the cores of GPU are spitted, the more opportunities we can try to get the lucky results. We hope we have enough opportunities to get the best results in the shortest time.

%The performance of GAs mainly depend on the size of population and the 

\section{Experiments}
\label{sec:4}
\subsection{Experiment Setup}
In order to evaluate our speculation methodology, we choose an engineering optimization problem, the Welded Beam Design (WBD) problem \cite{CMAMEDeb}, as our benchmark. The WBD problem is widely used to evaluate the performance of GAs; a detailed description can be found in \cite{}. It has four variables and five inequality constraints. The best reported result for the WBD problem is $f^\ast = 2.38116$; the smaller $f^\ast$ is, the better the solution.
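For reference, the objective minimized in the WBD problem is the fabrication cost of the welded beam as a function of weld thickness $h$, weld length $l$, bar height $t$, and bar thickness $b$. The sketch below uses the standard formulation from the literature; the five inequality constraints (shear stress, bending stress, side constraints, buckling load, and end deflection) and our constraint-handling scheme are omitted for brevity:

```python
def wbd_cost(h, l, t, b):
    """Fabrication cost of the welded beam in the standard WBD
    formulation: cost of the weld material plus cost of the bar stock.
    The five inequality constraints of the problem are not shown here."""
    return 1.10471 * h ** 2 * l + 0.04811 * t * b * (14.0 + l)
```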
\begin{table}[b]
\centering
\caption{The Information of Populations}
\label{table:info}
\begin{tabular}{c||c|c|c|c|c}
\hline
{\em CP} & {\em 1} & {\em 2} & {\em 4} & {\em 8} & {\em 16} \\
\hline
Islands per Population & 64 & 32 & 16 & 8 & 4 \\
\hline
Individuals per Island & 64 &64 &64 &64 &64 \\

\hline
Individuals per Population & 4096 & 2048 & 1024 & 512 & 256 \\
\hline
Populations & 1 & 2 & 4 & 8 & 16 \\
\hline
Total Individuals & 4096 &4096 &4096 &4096 &4096 \\
\hline
\end{tabular}
\end{table}

We use a GTX580 GPU, which has the Fermi architecture, to evaluate our speculation methodology. The GTX580 consists of 512 cores organized into 16 SMs, so each SM has 32 cores. The Fermi architecture allows programmers to choose the split between L1 cache and shared memory on each SM. After tuning our program, we shrank the shared memory and set the configuration to 48KB/16KB, i.e., 48KB of L1 cache and 16KB of shared memory. Compared to the 16KB/48KB configuration, we found that the larger L1 cache decreases the execution time in our experiments.

In the implementation of the WBD problem on the GPU, each island consists of 64 individuals, four islands are organized into a block, and we launch 16 blocks in total. Hence the CP can be 1, 2, 4, 8, or 16. Different CP values give different numbers of populations, but we keep the total number of individuals across all populations the same, at 4096. The information about the populations of the GA with different CP values is shown in Table \ref{table:info}.
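Every row of Table \ref{table:info} follows from these constants. A small sketch (helper name is illustrative) that derives the layout for a given CP:

```python
def partition(cp, total_islands=64, island_size=64):
    """Derive the population layout of Table I for a given CP: the 64
    islands (16 blocks x 4 islands per block) are divided evenly among
    cp independent populations, keeping 4096 total individuals fixed."""
    islands_per_pop = total_islands // cp
    return {
        "islands_per_population": islands_per_pop,
        "individuals_per_island": island_size,
        "individuals_per_population": islands_per_pop * island_size,
        "populations": cp,
        "total_individuals": total_islands * island_size,
    }
```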




% \begin{table}
% \centering
% \caption{The Speed of Speculation Methodology with Different CPs}
% \label{table:speed}
% \begin{tabular}{|c|c|c|c|c|c|}
% \hline
% CP & 1 & 2 & 4 & 8 & 16 \\
% \hline
% Generation (Avg.) & 27324 & 23230 & 20302 & 17050 & 18179 \\
% \hline
% Time
% \end{tabular}
% \end{table}

\subsection{Experimental Results and Analysis}
In the experiments, we evaluate the performance of our speculation methodology and compare it with the traditional parallelism methodology. In order to obtain precise results, 100 runs are performed for each value of CP. The performance of the GA is measured by execution time and result accuracy. By the design of our speculation methodology, when the CP is 1, only one population exists on the GPU, which corresponds to the traditional parallelism methodology.
\begin{figure*}[t]
\centering
\begin{minipage}[t]{0.43\linewidth}
\includegraphics[width=1\linewidth]{./img/time.pdf}
\caption{The Execution Time of Reaching a Specific Accuracy.}
\label{fig:time}
\end{minipage}
\hspace{0.5cm}
\begin{minipage}[t]{0.46\linewidth}
\includegraphics[width=1\linewidth]{./img/accuracy.pdf}
\caption{The Accuracy with Fixed Generations.}
\label{fig:accuracy}
\end{minipage}
\end{figure*}

As the best result of the WBD problem is 2.38116, we set our acceptable result to \ldots, which is within 1\textperthousand{} of the best result. We evaluate the execution time required for the GA application to reach the acceptable result. Fig. \ref{fig:time} shows the execution times of the 100 runs as a combination of a scatter and a line chart. Each $+$ in the scatter chart represents the execution time of one run, and the solid circles on the line show how the average execution time over the 100 runs varies as the CP goes from 1 to 16. As the value of CP varies from 1 to 8, the execution time decreases significantly, because as the CP increases, more populations evolve simultaneously, giving us more opportunities to reach the acceptable result sooner. However, the execution time when the CP is 16 is greater than when it is 8, because as the CP increases, the size of each population shrinks. Although a CP of 16 yields more speculative results, each population is then so small that the speculation effect cannot compensate for the negative effect of the small population size. In short, when the CP is 8, i.e., when eight populations evolve on the GPU for the WBD problem, the GA reaches the acceptable result fastest.

Most GA applications are set to evolve for a fixed number of generations to obtain the optimal result. Therefore, we also run experiments in which we set 50,000 generations to solve the WBD problem, so that we can evaluate the accuracy of the results with different CP values. Fig. \ref{fig:accuracy} shows the results for the WBD problem with different CPs after 50,000 generations of evolution. As in Fig. \ref{fig:time}, the combination of scatter and line chart expresses the accuracy of each run and the average accuracy, respectively. Again, when the CP is 8, we obtain the best results because of the benefit of speculation, while the result accuracy suffers from the small population size when the CP is 16.
\begin{table}
\centering
\caption{{\scriptsize The Comparison of Speed between Speculation and Parallelism}}
\label{table:time}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
Methodology & Parallelism & \multicolumn{4}{c|}{Speculation}\\
\hline
CP & 1 & 2 & 4 & 8 & 16 \\
\hline
Generations (Avg.) & 27324 & 23230 & 20302 & 17050 & 18179 \\
\hline
Time (ms) & 4805 & 4086 & 3585 & 3010 & 3210 \\
\hline
Speedup & 1 & 1.17 & 1.34 & 1.60 & 1.50 \\
\hline
\end{tabular}
\end{table}

%The results above show speculaiton methodology can improve the performance of GAs both in execution time and accurary. And when CP is 8, the speculation offers the best benefit for the WBD algorithm. Now we compare the specua
Tables \ref{table:time} and \ref{table:accuracy} show the comparison between speculation, i.e., when the CP is 2, 4, 8, or 16, and parallelism, i.e., when the CP is 1. All data in the tables are averages over 100 runs.

From Table \ref{table:time}, we find that compared to the parallelism methodology, the speculation methodology saves up to 10,274 generations, or 1,795 ms, in reaching the acceptable result, for a speedup of up to 1.6x.

Table \ref{table:accuracy} indicates that accuracy also improves when the speculation methodology is used: the best configuration is only $+4.7\times 10^{-5}$ away from the best result 2.38116, whereas the result of the parallelism methodology is $+10.4\times 10^{-5}$ away. We also notice that, except when the CP is 16, the total execution time of 50,000 generations is the same for speculation and parallelism, which means our speculation methodology does not introduce any overhead. As for the execution time when the CP is 16 being one millisecond shorter than the others, the reason is that, when the CP is 16, all islands of a population reside in one block, so all data exchange during migrations stays in the GPU's shared memory. When the CP is not 16, the islands of a population span at least two blocks, so some migrations must access global memory. A global memory access is about 100 times slower than a shared memory access; however, the number of threads on the GPU is large enough to hide most of the global memory accesses, so there is only a one-millisecond difference, which we can reasonably ignore.


\begin{table}
\centering
\caption{{\scriptsize The Comparison of Accuracy between Speculation and Parallelism}}
\label{table:accuracy}
\begin{tabular}{|l|c|c|c|c|c|}
\hline
Methodology & Parallelism & \multicolumn{4}{c|}{Speculation}\\
\hline
CP & 1 & 2 & 4 & 8 & 16 \\
\hline
Result & {\scriptsize 2.38126} &{\scriptsize 2.38124}& {\scriptsize 2.38122} & {\scriptsize 2.38121} & {\scriptsize 2.38121} \\
\hline
Accuracy ($\times 10^{-5}$) & +10.4 & +8.0 & +5.8 & +4.7 & +5.0 \\
\hline
Time (ms) & 8845 & 8845 & 8845 & 8845 & 8844 \\
\hline
\end{tabular}
\end{table}

\section{Conclusion}
\label{sec:5}
Today, existing GA applications on GPUs mostly use the massive parallelism of GPUs to improve their performance. However, GPUs offer more and more computation capacity as they develop rapidly, and how to manage that capacity is now more critical than the previously popular topic of fine-grained program optimization. In this paper, we introduce the fundamentals of GAs and point out that GA results are unstable, a consequence of the philosophy behind GAs. Departing from the traditional parallelism methodology, we propose a novel speculation methodology that benefits from the instability of GA results. Our analysis shows that the speculation methodology uses the redundant resources of GPUs more efficiently, so that the performance of GAs can be further improved. Experimental results show that the speculation methodology outperforms the traditional parallelism methodology in both execution speed and result accuracy.

Our future work will focus on modeling the relationship between performance and the CP, so that we can help researchers and engineers use our speculation methodology to achieve the best performance.

% conference papers do not normally have an append
% use section* for acknowledgement
%\section*{Acknowledgment}



% trigger a \newpage just before the given reference
% number - used to balance the columns on the last page
% adjust value as needed - may need to be readjusted if
% the document is modified later
%\IEEEtriggeratref{7}
% The "triggered" command can be changed if desired:
%\IEEEtriggercmd{\enlargethispage{-5in}}

% references section

% can use a bibliography generated by BibTeX as a .bbl file
% BibTeX documentation can be easily obtained at:
% http://www.ctan.org/tex-archive/biblio/bibtex/contrib/doc/
% The IEEEtran BibTeX style support page is at:
% http://www.michaelshell.org/tex/ieeetran/bibtex/
\bibliographystyle{IEEEtran}
% argument is your BibTeX string definitions and bibliography database(s)
\bibliography{ref}
\end{document}


