\section{Conclusion}\label{sec:con}
%The silicon industry has chosen multicore as new
%direction. However, diverging multicore architectures enlarge the gap between algorithm-centric programmers and
%computer system developers.  Conventional C/C++ programming language
%can not reflect hardware.  Existing ad-hoc techniques
%or platform-dependent programming language pose issues of generality
%and portability. 
%Source-to-source transformation can meet the challenge
%and help tailor programs to specific multicore architectures.

%Not only the more processor cores but also
%elaborated memory hierarchy and exposed communication are adopted by
%new multicore architectures. More worse, 

We have presented a template-based programming model for writing parallel
programs on multicore architectures. Our approach performs source-to-source
transformations using C++ template metaprogramming. All functionality
is achieved within ISO C++ and organized as a template library. The
library is flexible enough to apply more than one parallel
pattern, and programmers can extend it to
exploit new architectural
features or to apply customized parallel patterns to their applications. Our approach
supports different multicore targets because building-block
classes abstract execution on the physical hardware. In our
prototype library, we implement the corresponding versions for CPU
and GPU.

%.  Experiments show
%that our template approach can transform algorithms into SPMD threads
%with competitive performance. These transformations are available for
%both CPU and GPU, while the cost of migration is manageable. Besides, we
%can apply hierarchical division for programs on CPU. We also
%transform a group of standalone functions into a
%pipeline using our template library. It demonstrates that template
%metaprogramming is powerful enough to support more than one way to
%parallelize for multicore.

%Our programming model bridges algorithm experts and diverging multicore
%architectures. Domain-specific experts focus on algorithms in form of
%conventional programming languages. They wrap functions to template
%classes and then pass them to \emph{TF class} as template parameter. Template
%mechniasm takes responsibility to transform source code according to
%their targets.

%Streaming is an important computation model for innovative
%multicore architectures~\cite{imagine, cellbe, larrabee, cuda}. We partially exploit GPU functionality in this
%paper, however, the transformations for GPU are quite
%straightforward.  It is still unclear how many efforts need to
%pay for a full-blown template library, which supports
%streaming computation.

% Libvina can only deal with regular
%data. Future work on view class  will concentrate on supporting
%general operations like gather and scatter etc.  
%On CPU, source-to-source transformation should go on improving data
%locality of programs. We plan to explore template approach to  generalize
%blocking and tiling techniques.  It is also possible to re-structure
%or prefetch data using template metaprogramming accompanying with
%runtime library.
Currently, we are working on further improving libvina.
On CPU, we plan to develop methods that re-structure and
prefetch data with runtime support, built on metaprogramming techniques.
On GPU, we plan to explore source transformations that strip-mine
memory accesses via metaprogramming, because modern GPUs coalesce
memory accesses to improve effective bandwidth.
Moreover, we are investigating how to exploit static information
to parallelize programs with irregular memory
footprints; traditional static analyses
performed by compilers could guide our source-level transformations here.

%Currently, kernel functions in GPUs prohibit recursion. We believe that
%it would be beneficial to introduce template recursion for
%GPUs.

%The source code for this work is available at \url{http://code.google.com/p/libvina/}.
