%----------------------------------------------------------------------------------------
%	PACKAGES AND OTHER DOCUMENT CONFIGURATIONS
%----------------------------------------------------------------------------------------

\documentclass[11pt, a4paper, notitlepage]{article}
\usepackage{geometry} % Required to change the page size to A4
\geometry{a4paper} % Set the page size to be A4 as opposed to the default US Letter
\usepackage{graphicx} % Required for including pictures
\usepackage{float} 
\usepackage{wrapfig} % Allows in-line images 
\usepackage{caption}
\usepackage{subcaption}
\usepackage{tabularx}
%\usepackage{subfiles}
\usepackage{amsmath}
%% for source code
\usepackage{listings}
\usepackage{color}
\usepackage{xcolor}

%packages to rotate header of table
%\usepackage{adjustbox}
%\usepackage{array}
%\usepackage{booktabs}
%\usepackage{multirow}
%
%\usepackage{pdfpages}
%
%
%\newcolumntype{R}[2]{%
%    >{\adjustbox{angle=#1,lap=\width-(#2)}\bgroup}%
%    l%
%    <{\egroup}%
%}
%\newcommand*\rot{\multicolumn{1}{R{90}{1em}}}% no optional argument here, please!

%----------------------------------------------------------------------------------------
%	TITLE PAGE
%----------------------------------------------------------------------------------------
\newcommand*{\plogo}{\fbox{$\mathcal{PL}$}} % Generic publisher logo

\newcommand*{\titleGP}{\begingroup % Create the command for including the title page in the document
\centering % Center all text
\vspace*{\baselineskip} % White space at the top of the page

\rule{\textwidth}{1.6pt}\vspace*{-\baselineskip}\vspace*{2pt} % Thick horizontal line
\rule{\textwidth}{0.4pt}\\[\baselineskip] % Thin horizontal line

{
	\LARGE Parallel Programming and Many-Core Architectures\\[0.3\baselineskip] Where Are the Challenges?
}\\[0.1\baselineskip] % Title

\rule{\textwidth}{0.4pt}\vspace*{-\baselineskip}\vspace{3.2pt} % Thin horizontal line
\rule{\textwidth}{1.6pt}\\[\baselineskip] % Thick horizontal line

\scshape % Small caps
ET4074: Modern Computer Architectures\\ % Tagline(s) or further description
%Delft University of Technology\\[\baselineskip] % Tagline(s) or further description
Delft University of Technology\par % Location and year

%\textsc{\LARGE Delft University of Technology}\\[1.5cm] % Name of your university/college
%\textsc{\Large ET4074: Modern Computer Architecture}\\[0.5cm] % Major heading such as course name
%
%


\vspace*{6\baselineskip} % Whitespace between location/year and editors


Group 20 \\[\baselineskip]
\begin{minipage}{1\textwidth}
\begin{flushleft}
	\parbox{0.45\textwidth}{\centering\textbf{A. Jimenez}\\
		 4245369\\
		a.a.jimenezluna@student.tudelft.nl}
		\hspace{0.02\textwidth}	
	\parbox{0.45\textwidth}{\centering\textbf{J. James}\\
		4170253\\
		j.a.james@student.tudelft.nl}\\
	
%	
\end{flushleft}
\begin{center}

{\large \today}\\[3cm]
\end{center} 
\end{minipage}



\vfill % Whitespace between editor names and publisher logo

%\plogo \\[0.3\baselineskip] % Publisher logo
%{\scshape 2012} \\[0.3\baselineskip] % Year published

\endgroup}


\newcommand{\head}[1]{\textnormal{\textbf{#1}}} % putting a header for the tables

\begin{document}

\bibliographystyle{plain}

%\pagestyle{empty} % Removes page numbers

\titleGP % This command includes the title page

\begin{abstract}

As part of the course ET4074 Modern Computer Architectures we wrote this document to discuss the topic ``Parallel Programming and Many-Core Architectures''. For this purpose we selected four papers published by the IEEE that address this topic in diverse applications: papers 1 \& 2 focus on software solutions and papers 3 \& 4 on hardware solutions.

Each paper was analyzed and compared on how it covers the research topic. We discuss the contributions made and the results obtained in the experiments.

From these works we found that parallelism in software and hardware is a major topic of interest in the research community, as the options to further exploit current architectures are running out given the physical limits of present-day semiconductors.

\end{abstract}



\clearpage



%
%\vspace{2em}
%

\tableofcontents
\clearpage



%----------------------------------------------------------------------------------------
%	Introduction
%----------------------------------------------------------------------------------------




\section{Introduction}
In this section we discuss the objective of this paper and provide background information on the two main topics of the research. The organization of this section is as follows: motivation for the document, background information on parallel programming, and background information on multi- and many-core architectures.

The skeleton of the rest of the paper is as follows: Section 2 discusses four research papers on the research topic. Sections 3, 4 and 5 provide our comparison of the papers on metrics we have chosen. Section 6 concludes this paper.

\subsection{Motivation for topic selection}
We find Parallel Programming and Many-Core Architectures interesting because it is closely tied to the current direction in the evolution of computing systems: the need to manufacture ever faster and smaller devices to supply a hungry market.

However, as we have learned from Amdahl's Law, the sequential portion of a program becomes the bottleneck on multiprocessor architectures. This fact drives efforts to find software and hardware solutions to speed up parallel tasks.
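Stated explicitly: if a fraction $f$ of a program can be executed in parallel on $n$ processors, Amdahl's Law bounds the overall speed-up as
\[
S(n) = \frac{1}{(1 - f) + \frac{f}{n}},
\]
so even as $n \to \infty$ the speed-up never exceeds $1/(1-f)$: the sequential fraction dominates.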

\subsection{Background on parallel programming}
The basic idea of parallelism in computing is that operations can be performed concurrently and therefore yield better execution times. This can be done in software and/or hardware.

On the software side, parallel programming focuses on research into new high-level programming models, or on adding functions to existing models, that perform operations in parallel. The main idea is to make maximal use of multiprocessor hardware architectures.
 

%Add pipeline picture .......................................... 

\subsection{Background on multi- and many-core architectures}
On the hardware side, research has been conducted since the 1990s on architectures where multiple processors run in parallel to accomplish certain tasks, so that a speed-up can be achieved. However, multiprocessing is a mature topic, first addressed in the mid-1960s when Flynn categorized computers based on their level of parallelism in the instruction and data streams:

\begin{itemize}
\item Single instruction stream, single data stream (SISD) – uniprocessors.
\item Single instruction stream, multiple data streams (SIMD) – vector architectures and GPUs.
\item Multiple instruction streams, single data stream (MISD) – no commercial examples exist.
\item Multiple instruction streams, multiple data streams (MIMD) – multiprocessors and multithreading.
\end{itemize}
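As a minimal illustration of the SIMD idea (our own sketch, not taken from the course literature), the loop below applies one instruction stream, an addition, across many data elements; a vector unit or GPU would execute such iterations in lockstep:

```cpp
#include <cstddef>
#include <vector>

// One instruction stream (the add), multiple data streams (the elements):
// a vectorizing compiler or a GPU can map each iteration onto a SIMD lane.
std::vector<int> add_elementwise(const std::vector<int>& a,
                                 const std::vector<int>& b) {
    std::vector<int> c(a.size());
    for (std::size_t i = 0; i < a.size(); ++i)
        c[i] = a[i] + b[i];  // same operation, independent data
    return c;
}
```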


Finally, in recent years, a new classification of multiprocessing has emerged: multi-core and many-core architectures. Multi-core refers to designs with a few processors (typically two to four) on a single chip, while many-core refers to the integration of more than four (possibly a hundred or more) processors on a single chip.

One main difference is that many-core architectures are intended to process only parallel tasks. Therefore, hardware and software need to be designed and implemented with this parallel approach in mind.

\subsection{Challenges in parallelism and multiple-core processing}

According to our course literature, there are two main challenges in multiprocessing: firstly, the limited amount of parallelism in programs; secondly, the high cost of communication among processors (long latency). These challenges apply whether we run independent tasks with no communication or parallel programs whose threads must communicate frequently for correct execution.

Regarding the amount of parallelism in programs, it can be addressed in software by implementing new algorithms with better parallel performance and by re-educating programmers in parallel techniques.

On the other hand, the long remote-access latency can be treated via hardware mechanisms, such as data caching, and via software mechanisms, such as restructuring data to make more accesses local.
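A minimal sketch of the software side (our own illustration, not from the course literature): the two functions below compute the same sum, but the first walks memory in the order it is laid out, so most accesses hit the cache line just fetched, while the second jumps between rows on every access:

```cpp
#include <cstddef>
#include <vector>

using Grid = std::vector<std::vector<int>>;

// Good locality: the inner loop walks consecutive elements of one row.
long sum_rows_first(const Grid& g) {
    long s = 0;
    for (std::size_t i = 0; i < g.size(); ++i)
        for (std::size_t j = 0; j < g[i].size(); ++j)
            s += g[i][j];
    return s;
}

// Poor locality: the inner loop jumps between rows, touching a new
// cache line (or a remote memory bank) on almost every access.
// (Assumes a non-empty, rectangular grid.)
long sum_cols_first(const Grid& g) {
    long s = 0;
    for (std::size_t j = 0; j < g[0].size(); ++j)
        for (std::size_t i = 0; i < g.size(); ++i)
            s += g[i][j];
    return s;
}
```

Both return the same result; only the access pattern, and hence the cache behavior, differs.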



%----------------------------------------------------------------------------------------
%	Section 2: Discussion of selected papers
%----------------------------------------------------------------------------------------

\section{Discussion of selected papers}
In this section we discuss the important points of each research paper. We highlight each paper's results and contributions, and provide our opinion on them.


\subsection{Paper 1 review (SW): Data Parallel Programming Model for Many-Core Architectures}

\paragraph{Summary}
\subparagraph{Context, reason and problem description}
This work exposes ideas in the context of parallel computing, in particular the stream programming model on GPU architectures. This model proposes streams (sequences of similar data records) and kernels (small programs which operate on input streams and produce output streams) as the main tools to execute programs and exploit parallelism. The key element of the streaming paradigm is gathering data from memory, computing on streams and returning results to memory. These actions reduce memory latency, because data is accessed in chunks.

This paper treats the following problems:

\begin{itemize}
\item A necessary change of paradigm in programming techniques to completely harness parallel architectures.
\item CUDA programming model requires a deep understanding of hardware.
\item Optimization of code to achieve speedups over the corresponding CPU code can become an arduous task.
\item Continuously evolving hardware induces tedious and error-prone optimization procedures to find the best configuration.

\end{itemize}


\subparagraph{Contribution}
Introduction of GStream, a general-purpose, scalable data streaming framework for GPUs, which provides the following:
\begin{itemize}
\item Provides powerful and concise language abstractions capable of representing conventional algorithms as streaming problems. GStream builds these abstractions on C++ templates.

\item Projects these abstractions onto GPUs to exploit data parallelism by means of reducing data dependencies.

\item Demonstrates viability of streaming accelerators in particular on clusters of GPUs.

\item Validates the efficiency of the abstractions on sample implementations in data streaming, data-parallel problems, numerical codes and text search.
\end{itemize}

GStream proposes filters and channels as the tools for parallelism. Filters encapsulate computing kernels accelerated by GPUs, and data is manipulated in channels (data links between filters) by means of the provided APIs. Filters have a three-stage structure: start, kernel and finish. Inside the kernel stage the user can choose to run accelerated CUDA kernels.
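A minimal sketch of this three-stage filter structure is given below. All class and method names are hypothetical, chosen only to illustrate the start/kernel/finish life cycle; they do not reproduce the actual GStream API, where the kernel stage could instead dispatch an accelerated CUDA kernel.

```cpp
#include <cstddef>
#include <vector>

// Hypothetical three-stage filter in the spirit of the described design:
// start() sets up state, kernel() processes one chunk from the input
// channel, finish() releases resources.
class ScaleFilter {
public:
    explicit ScaleFilter(int factor) : factor_(factor) {}

    void start() { /* allocate buffers, attach to channels */ }

    // Kernel stage: apply the same operation to every record in the chunk.
    std::vector<int> kernel(const std::vector<int>& chunk) {
        std::vector<int> out(chunk.size());
        for (std::size_t i = 0; i < chunk.size(); ++i)
            out[i] = chunk[i] * factor_;
        return out;
    }

    void finish() { /* flush the output channel, free buffers */ }

private:
    int factor_;
};
```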

To use GStream, a C++ API has been developed, combining generic (template) techniques and object orientation to address parallelism at different levels:
\begin{itemize}
\item CUDA libraries for data parallelism,
\item POSIX thread abstraction for task parallelism in shared memory,
\item Inter-process communication libraries for data sharing across distributed-memory machines.
\end{itemize}



\subparagraph{Main Results }
Results are presented as a comparison of different domains (data streaming, data parallel problems, numerical codes and text search) for five benchmarks to show that the framework offers flexibility, programmability and performance gains.

Each benchmark shows results for four different implementations: a single-threaded program without streaming, a multi-threaded program using GStream without GPU support, a multi-threaded program using GStream with GPU support, and a CUDA implementation without streaming. Overall, the GPU version of GStream achieves speedups of up to 30x w.r.t. the non-GStream approaches.


\paragraph{Discussion}
The main contribution of this work is the concept of GStream as support for parallel programming using CUDA on GPU architectures; it therefore does not introduce a novel path to parallelism but an optimization of an existing one.

Despite this, the results show an increase in the harnessing of parallelism derived from the use of the stream programming model. Regarding the results, we note that one of the benchmarks (IS) was rewritten to obtain good results with the GStream techniques, which makes the comparison less clear.

We found the overall organization of the document good, but we see some room for improvement:
\begin{itemize}
\item A brief introduction to the basics of the stream programming model would help readers connect concepts such as kernels with filters.
\item An introduction to the foundations of GPUs would also help readers understand how this architecture, combined with regular CPUs, obtains parallelism in specific applications.
\end{itemize}

We also think that some gaps and unaddressed issues on this work are:
\begin{itemize}

\item A more detailed description of the API developed: overall characteristics and useful features.
\item A brief explanation on the reasons to pick the benchmarks for the tests.

\end{itemize}



\paragraph{Conclusion}
We conclude that this work is useful in the effort to achieve more efficient parallelism in current applications. We rate it an 8 (on a scale of 10): the characteristics of the framework could have been stated more clearly and in more detail, and the claimed novelty derives from an existing platform (CUDA).


\subsection{Paper 2 review (SW): Parallel Patterns for General Purpose Many-Core}

\paragraph{Summary}
\subparagraph{Context, reason and problem description}
This paper proposes the use of parallel design patterns, implemented as algorithmic skeletons, to abstract and hide the problems related to efficiently programming general-purpose many-core accelerators. The main idea is to provide programming frameworks that ease and maximize the use of general-purpose many-core co-processor architectures.

The analysis targets the FastFlow framework (a structured parallel programming environment implemented in C++) on the Tilera TilePro64 architecture (a multi-core processor consisting of a cache-coherent mesh network of 64 tiles).

The paper identifies two axes along which CPU hardware is advancing:
\begin{enumerate}
\item A limited number of cores (4 to 16) sharing a standard memory hierarchy, highly optimized to support efficient cache-coherency mechanisms.
\item Co-processors with a large number of cores (64 to 128), interconnected via regular networks with mechanisms supporting access to core-local caches and inter-core communication (the Tilera architecture).
\end{enumerate}

The Tilera architecture offers two programming environments: one thread per core and one program/thread per core.

The resulting programming scenario has two features: it is high level w.r.t. the scenario exposed by GPGPUs, but it requires a deep understanding of the architecture and of proprietary libraries.








\subparagraph{Contribution}
\begin{enumerate}

\item Structured parallel programming approach based on parallel design patterns and algorithmic skeletons.
\item Porting of FastFlow framework on the Tilera TilePro64 architecture.
\item Demonstration of FastFlow efficiency to run synthetic and real applications on the TilePro64 where the co-processor is used for entire programs (standalone parallel program mode) or for programs offloaded from the host (accelerator mode).
\item Optimizations on memory hierarchy and cache coherence.

\end{enumerate}



\subparagraph{Main Results }
Results claim to demonstrate the efficiency achieved while using patterns on the TilePro64 both to program stand-alone skeleton-based parallel applications and to accelerate existing sequential code.

To port FastFlow on TilePro64 the following steps had to be accomplished:
\begin{enumerate}

\item A change in the instruction set/architecture to complete a memory fence instruction.
\item Usage of mechanisms and libraries implementing synchronization.
\end{enumerate}


The following benchmarks were used in experiments:
\begin{enumerate}
\item Synthetic benchmarks. Speedups were obtained for tasks lasting from 10 ms up to 100 ms. The TilePro64 proved to be a balanced architecture despite a high number of cores running a limited number of instructions; in this case, the memory subsystem becomes the bottleneck.
\item Kernel-based benchmarks. Speedups of around 42x were observed when applying 50 worker threads to a matrix multiplication in FastFlow. Memory bandwidth and the coherence system become the bottleneck in this case.
\item TilePro64 as an accelerator. Tests were conducted for the case of running only a piece of a program on the co-processor. Traffic due to cache contention was reduced, because farm workers receive only one metric.
\end{enumerate}
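The farm pattern behind the kernel benchmark can be sketched with plain C++ threads (our own illustration, not the FastFlow code used in the paper): each worker computes a disjoint block of rows of the result matrix, so no synchronization is needed beyond the final join.

```cpp
#include <algorithm>
#include <cstddef>
#include <thread>
#include <vector>

using Matrix = std::vector<std::vector<int>>;

// Farm-style matrix multiply: each worker computes a contiguous block of
// rows of C = A * B, so workers never write to the same location.
Matrix multiply(const Matrix& a, const Matrix& b, unsigned workers) {
    std::size_t n = a.size(), m = b[0].size(), k = b.size();
    Matrix c(n, std::vector<int>(m, 0));

    auto worker = [&](std::size_t lo, std::size_t hi) {
        for (std::size_t i = lo; i < hi; ++i)
            for (std::size_t j = 0; j < m; ++j)
                for (std::size_t p = 0; p < k; ++p)
                    c[i][j] += a[i][p] * b[p][j];
    };

    // Emit one thread per worker, each with a disjoint row range.
    std::vector<std::thread> pool;
    std::size_t chunk = (n + workers - 1) / workers;
    for (unsigned w = 0; w < workers; ++w) {
        std::size_t lo = w * chunk, hi = std::min(n, lo + chunk);
        if (lo < hi) pool.emplace_back(worker, lo, hi);
    }
    for (auto& t : pool) t.join();
    return c;
}
```

In a skeleton framework the emitter/worker/collector roles are provided by the library; here they are written out by hand only to show the structure.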



\paragraph{Discussion}
The contributions of this work focus mainly on porting a given framework, FastFlow in this case, from one architecture to another and evaluating the level of parallelism that can be achieved through specific techniques.

Demonstrations were sketched to prove that the porting operation can be done in a couple of steps.

In our opinion this paper does introduce a novel technique for parallelism, but it is very specific to two particular technologies: FastFlow and the TilePro64.

Moreover, the results showed that the combination of FastFlow on the TilePro64 produces a highly efficient structure to run programs in parallel or as an accelerated section of a program.

We found the organization and development of the topic sufficiently informative. However, too much detail was devoted to the FastFlow framework and the TilePro64 architecture; a brief introduction to these would have been enough.


\paragraph{Conclusion}
We conclude that this work approaches the field of parallel computing and many-core architectures through combinations of different programming frameworks and multi-core architectures, but it still lacks a more general solution that could open a new path in this field, which is of great importance to the evolution of computing.

We rate it a 7 (on a scale of 10); future work could extend the approach to a more detailed classification and combination of frameworks and architectures.
 
 \subsection{Paper 3 review (HW): Wireless NoC as Interconnection Backbone for Multicore Chips}
\paragraph{Summary}
\subparagraph{Context, reason and problem description}
This paper contributes ideas in the context of NoC technology, which is based on planar metal interconnects linking large numbers of cores on a single die. Wireless NoCs (WiNoCs) are introduced to reduce the latency and power-consumption problems of NoCs, and therefore of multi-core technology, by replacing wired paths with high-bandwidth long-range wireless links.

Alongside WiNoC research, two other trends in chip interconnection are currently being pursued: 3D integration and optical interconnects.

WiNoCs are founded on the idea of integrating silicon antennas, operating in the mm-wave range up to one hundred gigahertz, for inter-chip communication. Because the efficiency of this technology relies on the physical layer, current research has produced four main paths depending on the frequency range of the antennas:
\begin{enumerate}

\item UWB
\item mm-Wave
\item Sub THz
\item THz
\end{enumerate}

This work subclassifies WiNoC architectures into two categories:
\begin{itemize}

\item Mesh-topology-based NoCs consider tiled multi-cores and propose access to the network via RF nodes, with delivery of packets over single or multiple hops.
\item Small-World-network-based NoCs consider networks with a very short average path length, defined as the number of hops between any pair of nodes. The idea is to interconnect widely separated nodes through long-range wireless links.

\end{itemize}

\subparagraph{Challenges/Important issues mentioned}
Communication resource management is very important for WiNoCs in order to provide efficient medium access. Because of this, MAC mechanisms and routing protocols are analyzed.

WiNoC technology tackles the problems of power and latency walls of traditional wired inter-chip communications.

WiNoC technology faces reliability issues (noise on wireless channels) and integration issues, which may require a break from typical architectural designs.

Future WiNoC research focuses on low-power mm-wave transceivers and on control over CNT growth.


\subparagraph{Main Results }
Experiments on mesh-topology-based NoCs showed a 23.3\% average performance improvement and a 65.3\% average latency reduction.

Experiments on Small-World-network-based NoCs reference results reporting gains in achievable bandwidth and improved energy dissipation.

\paragraph{Discussion}

\subparagraph{Analysis on contributions}
We find the topic of this work relevant and interesting as a new trend in multi-core architectures, though its results have been obtained in a lab environment, as the field is at an early stage of research.

The document gives a clear view of the background, targets and future research of WiNoCs as a clear effort to apply concepts from regular wireless networks to inter-chip communication. In principle, this idea tackles the common power and latency issues of regular wired interconnects.

\subparagraph{Novelty introduced}
We consider this work novel, opening a new path in multi-core technology that could enlarge current efforts to divide tasks and improve the speed-up of hardware architectures. The idea is simple: switch to a wireless approach for inter-chip communication to remove the common issues of current wiring.

\subparagraph{Writing structure}
The structure and writing style of the document show a good command of the English language and provide the reader with sufficient background information and a clear explanation of the idea. Graphs show the results of the analyzed approaches.

\subparagraph{Gaps and unaddressed issues}
Challenges are mentioned, but a clearer explanation of the future research efforts would be helpful.

The Small-World-network-based NoC topic is referenced from other works; it would be helpful to have some data on the results of those works.

\paragraph{Conclusion}
\subparagraph{Rating and recommendations}
Given the novelty and structure of the document, we rate this work 8 (out of 10) points.
 
\subparagraph{Suggestions for future research}
We advise that future research move along with regular wireless-network technologies, but also with new semiconductor technologies, in order to remove the routers that can create bottlenecks, as in any other regular data network.



% -------------------------------------------------

 \subsection{Paper 4 review (HW): An efficient distributed memory interface for Many-Core Platform with 3D stacked DRAM}
\paragraph{Summary}
\subparagraph{Context, reason and problem description}
This paper discusses the challenges of many-core architectures from a hardware perspective. Highlighted is the latency in such architectures due to DRAM memory access time. This memory bottleneck, where processor performance far exceeds that of memory, is known as the memory wall.

Besides the memory access speed, memory-to-logic interfacing plays a role in the memory wall problem and is an important issue. Many-core architectures, which require a lot of bandwidth, also need efficient power and signal integrity in their memory interfaces.

The architectural interface to a vertically stacked memory must be streamlined in order to benefit from the bandwidth, speed and energy efficiency which TSV (through-silicon via) technology provides in 3D memory integration for many-core platforms.

\subparagraph{Challenges/Important issues mentioned}
The memory wall prevents a successful transition from multi-core to many-core computing platforms, as slow memory access drives up the communication cost of reaching memory.

For many-core architectures that require high bandwidth, designing an efficient memory interface has been a problem, while power and signal integrity have been causing bottlenecks in the many-core interface.
		     
\subparagraph{Contribution}
An efficient and flexible 3D-stacked DRAM distributed interface has been developed. This interface guarantees ultra-low-latency access to vertical local memory neighborhoods.

A specialized 3D-DRAM controller features a very fast path to its vertical local memory neighborhood, i.e., the memory modules on top of each processing element. Furthermore, this controller provides a standard NoC communication facility for accesses to remote memory neighborhoods.

This controller also receives remote accesses from outside processors, directing their requests to the local memory neighborhood.

\subparagraph{Main Results }
The results presented are a performance analysis of the 3D DDR memory interface.

The interface is flexible and efficient, with a bandwidth improvement of between 1.44x and 7.40x compared to the JEDEC standard. DMA peaks of 4.53 GB/s are achieved, and 850 MB/s for remote access through the NoC.

To analyze the physical design cost, a 2x2 NoC mesh, four 3D DRAM modules, and four cores were used. They were synthesized for low-latency local accesses with the TSMC 65nm technology library (general-purpose process). The table below summarizes the hardware and power cost of the main components of the system:\\

\begin{table}
\centering 
\begin{tabular}{lcc}
\hline
\head{Module} & \head{Area (\% of 127K gates)} & \head{Relative power cost (\% of 33mW at 1GHz)}\\
\hline
DDR\_CTRL & 43 & 10\\
CC & 5 & 5\\
TSV & 8 & 5\\
NoC & 19 & 51\\
DES & 11 & 6\\
SER & 14 & 23\\
\hline
\end{tabular}
\caption{Hardware and power cost}
\end{table}




\paragraph{Discussion}
\subparagraph{Analysis on contributions}
We do not find the contributions in this paper particularly novel. The reason is that, besides the interface created, only the hardware connecting to the I/O masking in a conventional DRAM is altered. Nevertheless, these changes provide a good improvement in memory access for many-core architectures.

\subparagraph{Novelty introduced}
We find that this paper did not introduce much novelty. The research highlights that a 3D DDR architecture can help with the memory wall which prevents moving from multi-core to many-core. The controller developed varies little from the standard controller used for 3D DDR, so we find that the main impact lies in the analysis of solving the memory-wall issue.

\subparagraph{Writing structure}
Regarding the structure of the document, we found it clear and understandable, highlighting the important aspects of the research and results. There are enough graphs, clearly showing the differences between each experiment and design.

Minor flaws in the document were mistakes in the sentences, for example the absence of a word.

\subparagraph{Technical flaws}
Their design still has latency in the horizontal network; we believe improving this would further decrease memory latency in many-core systems.

\subparagraph{Gaps and unaddressed issues}
We found that this paper did not provide a clear quantitative answer as to how efficient and flexible the developed interface is.



\paragraph{Conclusion}
\subparagraph{Rating and recommendations}
We find that this paper does contribute to many-core systems. Given that it offers a deep and clear analysis of the challenges, while the developed controller does not contribute much novelty, we give this paper a 7.
 
\subparagraph{Suggestions for future research}
We suggest that the latency in the horizontal NoC be improved, such that remote memory access becomes even faster in many-core platforms.











%-------------------------------------------------

%----------------------------------------------------------------------------------------
%	Section 3: Comparison of papers 
%----------------------------------------------------------------------------------------
\section{Comparison of papers}
In this section we compare the four papers previously discussed. The comparison was made based on the metrics depicted in Table 2. Each paper was evaluated on how well its contributions and results address and improve upon the challenges in parallel programming and many-core architectures.

\begin{table}


\centering 
\includegraphics[width=\linewidth]{comparison.pdf}
%\includepdf[pages={1}]{comparison.pdf}
\caption{Quantitative and qualitative comparison}
\end{table}

%\begin{table}
%\centering 
%\begin{tabular}{ccccccccc}
%
% \hline
% \head{} & \head{Technique}  & \head{Approach difficulty}   & \head{HW/SW solution} & \head{Introduced novelty} & \head{Future research possibilities}  & \head{Impact on parallelism research }  & \head{Bandwidth/ Speed-up improvement} & \head{Challenges improvement}\\
%  \hline
%  \verb|DDR_CTRL| & \verb|43| &   \verb|10|\\
%  \verb|CC| & \verb|5| &   \verb|5|\\  
%  \verb|TSV| & \verb|8| &   \verb|5|\\
%  \verb|NoC| & \verb|19| &   \verb|51|\\
%  \verb|DES| & \verb|11| &   \verb|6|\\
%  \verb|SER| & \verb|14| &   \verb|23|\\
%  %\verb|\textsf| & \verb|\sffamily| & \sffamily Example text\\
%  %\verb|\texttt| &\verb|\ttfamily| & \ttfamily Example text\\
%  \hline
%  
%\end{tabular}
%\caption { Quantative and qualative comparison}
%
%\end{table}



%----------------------------------------------------------------------------------------
%	Section 4: Definition of evaluation and comparison criteria
%----------------------------------------------------------------------------------------


\section{Definition of evaluation and comparison criteria}
We chose the following criteria to evaluate and compare the research papers and their results.

\subparagraph{Qualitative} 
\begin{itemize}


\item Technique = Describes briefly the method used in the research paper.
\item Approach difficulty = Indicates the level of difficulty of the proposed technique in terms of hardware and/or software efforts (easy, difficult, NA).
\item HW/SW solution = Indicates if the proposal includes a change of hardware or software paradigm (HW, SW, HW/SW).
\item Introduced novelty = Indicates if the solution introduces novelty to the field (yes/no).
\item Future research possibilities = Indicates if there is room for further research (yes/no).
\end{itemize}

\subparagraph{Quantitative}
\begin{itemize}
\item Impact on parallelism research = Rates the impact on the current research of parallelism and multi core architectures (1-3).
\item Speed-up improvement (Sp)= Indicates the metric of improved speed-up w.r.t. the experiments accomplished.
\item Bandwidth improvement (Bw) = Indicates the metric of improved data transfer rate w.r.t the experiments accomplished.
\item Challenges improvement = Indicates how much the challenges were improved. (1–3).

\end{itemize}




%----------------------------------------------------------------------------------------
%	Section 5: Results of comparison including thorough argumentation of pros and cons of the different approaches

%----------------------------------------------------------------------------------------

\section{Results of comparison including thorough argumentation of pros and cons of the different approaches}
The advantage of paper 1 is that it provides a usable solution with which programmers can overcome some challenges in parallel programming.

Even though paper 2 does not provide an actual product as paper 1 does, it suggests a simple solution: a model which provides more abstraction for programmers. The disadvantage, however, is that the model only provides a solution for some systems and some challenges, and must therefore be altered when new many-core systems are developed.

Papers 3 \& 4 address the challenges from a hardware point of view. Paper 3 presents its ideas in the context of NoCs, showing a clear trend to evolve flat printed circuits into more ``intelligent'' pieces of engineering. In our opinion, these are all efforts to maximize the physical properties of current semiconductor technology while new technologies emerge. Paper 4 illustrates how the memory latency of 3D DDR memory architectures for many-cores can be improved; this was done by developing a controller that interfaces the memory and the many-core platform. Even though papers 3 \& 4 both address hardware, they differ in the aspects they focus on.




%----------------------------------------------------------------------------------------
%	Section 6: Remarks and final comments

%----------------------------------------------------------------------------------------


\section{Remarks and final comments}
Developing this assignment gave us the opportunity to see how research on parallelism is moving in real life. From these papers and the literature we read, we can see that multi-core and many-core architectures will move more and more to the regular-user side, and we will have the opportunity to use very fast architectures for general-purpose and personal computing.

We believe that multi-core architectures will remain for decades, but at some point they will have to be replaced by new technologies and materials, which will lead us to unimagined speed, size and performance for computing applications.


%----------------------------------------------------------------------------------------
%	Section 7: References

%-------------------

\section{Literature}
\begin{thebibliography}{9}
\bibitem{nvidia} NVIDIA Tesla supercomputing solutions, http://www.nvidia.com/object/tesla-supercomputing-solutions.html

\bibitem{book} John L. Hennessy and David A. Patterson, \emph{Computer Architecture: A Quantitative Approach}, Fourth Edition.

\bibitem{P1} Yongpeng Zhang, ``Data Parallel Programming Model for Many-Core Architectures'' (1530-2075/11), 2011 IEEE International Parallel \& Distributed Processing Symposium.

\bibitem{P2} Daniele Buono, Marco Danelutto, Silvia Lametti and Massimo Torquati, ``Parallel Patterns for General Purpose Many-Core'' (1066-6192/12), 2013 21st Euromicro International Conference on Parallel, Distributed, and Network-Based Processing.

\bibitem{P3} Sujay Deb, Amlan Ganguly, Partha Pratim Pande, Benjamin Belzer and Deukhyoun Heo, ``Wireless NoC as Interconnection Backbone for Multicore Chips: Promises and Challenges,'' IEEE Journal on Emerging and Selected Topics in Circuits and Systems, Vol. 2, No. 2, June 2012.

\bibitem{P4} Igor Loi and Luca Benini, ``An Efficient Distributed Memory Interface for Many-Core Platform with 3D Stacked DRAM,'' DEIS, University of Bologna, Bologna, Italy.
\end{thebibliography}

\clearpage

%\bibliography{main}


\end{document}
