\documentclass[conference]{IEEEtran}
%\usepackage{ifpdf}
\usepackage{cite}
\ifCLASSINFOpdf
  \usepackage[pdftex]{graphicx}
  % declare the path(s) where your graphic files are
  %\graphicspath{{../pdf/}{../jpeg/}}
  % and their extensions so you won't have to specify these with
  % every instance of \includegraphics
   \DeclareGraphicsExtensions{.pdf,.jpeg,.png,.fig}
\else
  % or other class option (dvipsone, dvipdf, if not using dvips). graphicx
  % will default to the driver specified in the system graphics.cfg if no
  % driver is specified.
  % \usepackage[dvips]{graphicx}
  % declare the path(s) where your graphic files are
  % \graphicspath{{../eps/}}
  % and their extensions so you won't have to specify these with
  % every instance of \includegraphics
  % \DeclareGraphicsExtensions{.eps}
\fi
\usepackage[cmex10]{amsmath}
\usepackage{algorithmic}
\usepackage{array}
\usepackage{multicol}
\usepackage{multirow}
\usepackage{bigstrut}
\usepackage{booktabs}
\usepackage{flushend}
\ifCLASSOPTIONcompsoc
  \usepackage[caption=false,font=normalsize,labelfont=sf,textfont=sf]{subfig}
\else
  \usepackage[caption=false,font=footnotesize]{subfig}
\fi
\usepackage{fixltx2e}
%\usepackage{stfloats}
\usepackage{url}

\newcommand{\todo}[1]{\textcolor{red}{#1}}
\usepackage{color}
\newcommand{\changed}[1]{\textcolor{black}{#1}}
%\newcommand{\changed}[1]{#1}

% Enunciations
\newif\ifitalicenv\italicenvtrue

\newtheorem{theorem}{Theorem}[section]
\newtheorem{proposition}[theorem]{Proposition}
\newtheorem{lemma}[theorem]{Lemma}
\newtheorem{corollary}[theorem]{Corollary}
%
%\setlength{\columnsep}{2pt}
\newtheorem{exam}[theorem]{Example}
\newenvironment{example}{%
\italicenvfalse
\begin{exam}}{\end{exam}\italicenvtrue}
%
\newtheorem{defi}[theorem]{Definition}
\newenvironment{definition}{%
\italicenvfalse
\begin{defi}}{\end{defi}\italicenvtrue}

% correct bad hyphenation here
\hyphenation{op-tical net-works semi-conduc-tor HM-PSoCs}

\begin{document}
%
% paper title
% can use linebreaks \\ within to get better formatting as desired
% Do not put math or special symbols in the title.
\title{MAMPSx: A Design Framework for Rapid Synthesis of Predictable Heterogeneous MPSoCs}

 \author{
\IEEEauthorblockN{Shakith Fernando$^{1}$, Firew Siyoum$^{1}$, Yifan He$^{1}$, Akash Kumar$^{2}$ and Henk Corporaal$^{1}$}
\IEEEauthorblockA{$^1$Department of Electrical Engineering, Eindhoven University of Technology, The Netherlands\\
$^2$Department of Electrical \& Computer Engineering, National University of Singapore, Singapore\\
Corresponding author email: s.fernando@tue.nl}
}

\maketitle

% As a general rule, do not put math, special symbols or citations
% in the abstract
\hspace{-0.3cm}\begin{abstract}
Heterogeneous Multiprocessor Systems-on-Chip (HMPSoCs) are becoming popular as a means of meeting the energy-efficiency requirements of modern embedded systems. However, as these HMPSoCs also run multimedia applications, they need to meet real-time requirements as well. Designing such predictable HMPSoCs is a key challenge, as the current design methods for these platforms are either semi-automated, unpredictable, or limited in heterogeneity.


In this paper, we propose a design framework to generate and program HMPSoC designs in a rapid and predictable manner. It takes the application specifications and the architecture model as input and generates the entire HMPSoC, for FPGA prototyping, such that it meets the throughput constraints. The experimental results show that our framework can provide a conservative bound on the worst-case throughput of the FPGA implementation. We also present the results of a case study that computes the area--power trade-offs of an industrial vision application; the entire design space exploration of all configurations was completed in 8 hours. A tool-chain targeting the Xilinx Zynq FPGA is also presented.
\end{abstract}

% no keywords

% For peer review papers, you can put extra information on the cover
% page as needed:
% \ifCLASSOPTIONpeerreview
% \begin{center} \bfseries EDICS Category: 3-BBND \end{center}
% \fi
%
% For peerreview papers, this IEEEtran command inserts a page break and
% creates the second title. It will be ignored for other modes.
\IEEEpeerreviewmaketitle

\section{Introduction}
% no \IEEEPARstart
Vision applications on portable embedded systems are becoming ubiquitous (e.g., Google Glass~\cite{google_glass}). However, for these complex systems to become truly ubiquitous, they need to meet several design challenges: (1) they need to meet real-time requirements, and (2) they need to be energy efficient. The first challenge means that design methods need to generate \textit{Predictable} systems that can guarantee the analyzed performance. For the second challenge, \textit{Heterogeneous} computing becomes very important, as the energy efficiency of hardware accelerators is superior to that of homogeneous multiprocessors (almost a $20\times$ gain~\cite{qscores}). Therefore, addressing the design challenges for predictable HMPSoCs is critical.


In order to better understand the challenges in the current methods for the synthesis of predictable HMPSoCs, our experience in using the Xilinx tools to generate an HMPSoC on the Xilinx Zynq~\cite{xilinx_website} is described below. The Zynq FPGA combines a dual-core ARM processor with the FPGA programmable fabric. We used a single accelerator generated through High-Level Synthesis (HLS). Even though we managed to easily generate an RTL (Register Transfer Level) accelerator, interfacing it with the processor was non-trivial. A DMA (Direct Memory Access) IP (Intellectual Property) block needed to be instantiated manually for data transfer; then, a FIFO buffer needed to be generated through a different tool and integrated manually, for each buffer size required. While it took several iterations to reach functional correctness, analyzing the performance was also non-trivial, due to the lack of models of the different components, even for such a simple non-pipelined example. Therefore, the key challenge is \textit{automatically synthesizing an HMPSoC in a fast and predictable manner}.


In this paper, we present MAMPSx --- a design framework that takes application specifications and the architecture model as input and automatically generates the entire HMPSoC, together with the corresponding software for the processors and the hardware accelerators, such that it meets the throughput constraints (Figure \ref{fig:design_flow}). This work extends our previous work on MAMPS~\cite{mampsTODAES}, where each processing tile was limited to a homogeneous general-purpose processor. Previously, a Communication Assist (CA) for homogeneous general-purpose processors~\cite{shabbir2010mpsoc} and for accelerators~\cite{he2013} had also been introduced, but without a complete framework.


\begin{figure}[h]
\centering
\includegraphics[width=0.95\columnwidth,height=6cm,keepaspectratio]{images/designflow.pdf}
\caption{MAMPSx Design Framework}
\label{fig:design_flow}
\end{figure}

\begin{table*}[!t]
  \caption{Comparing Various Approaches on Generating Predictable HMPSoCs}
  \label{tab:rwtable}
  \resizebox{\textwidth}{!} {
    \begin{tabular}{|c|c|c|c|c|c|c|c|}
    \hline
          & \textbf{Features}    & \textbf{DaedalusRT\cite{6176632}} & \textbf{System Codesigner\cite{6172642}} & \textbf{Space CoDesign\cite{spacereview}} & \textbf{Corre \textit{et al.}\cite{Corre2013}} & \textbf{MAMPS\cite{mampsTODAES}} &\textbf{ MAMPSx} \bigstrut\\
    \hline
    \multirow{8}{*}{\textbf{General}} & \textbf{Input} & C code & SystemC & KPN and C code & KPN and C code & SDFG and C code & SDFG and C code \bigstrut\\
    \cline{2-8}
          & \textbf{Model of Computation}   & CSDF, KPN & System MoC & KPN   & KPN   & SDFG   & SDFG \bigstrut\\
    \cline{2-8}
          & \textbf{Automated DSE}   & Yes   & Yes   & No    & Yes   & No    & No \bigstrut\\
    \cline{2-8}
          & \textbf{Toolchain} & Yes   & Yes   & Yes   & Yes   & Yes   & Yes \bigstrut\\
    \cline{2-8}
          & \textbf{FPGA Targets} & Virtex6   & Virtex2 & Zynq  & Virtex5 & Virtex6 & Zynq and Virtex6 \bigstrut\\
    \hline
    \multirow{4}{*}{\textbf{Predictability}} & \textbf{Predictable} & Yes   &   Yes    & No    & No    & Yes   & Yes \bigstrut\\
    \cline{2-8}
      & \textbf{Target Performance Property} & Worst Case & Worst Case & Average Case & Average Case   & Worst Case & Worst Case \bigstrut\\
    \cline{2-8}
          & \textbf{Measurement of Target Property} & Analysis & Analysis & Simulation & Simulation   & Analysis & Analysis \bigstrut\\
    \hline
    \multirow{10}{*}{\textbf{Heterogeneity}} & \textbf{Model of Architecture}  & Template & Template & Template & Template & Template & Template \bigstrut\\
    \cline{2-8}
          & \textbf{Use of CA} & No    & No    & No    & No    & No    & Yes \bigstrut\\
    \cline{2-8}
          & \textbf{Supported Tile Types} & Microblaze, & Microblaze, & ARM, Microblaze, & Microblaze, & Microblaze & ARM, Microblaze,\bigstrut\\
          &  & Accelerators      & Accelerator & Leon, Accelerators & Microblaze Coprocessor & & Accelerator \bigstrut\\
    \cline{2-8}
          & \textbf{Supported Interconnect Types} &   FIFO    &   FIFO    &      Bus & FIFO, Bus & FIFO   & FIFO, NoC, Bus \bigstrut\\
    \cline{2-8}
          & \textbf{Supported NI Types} &   FSL    &  FSL     &   AXI    & FSL   & FSL   & FSL, AXI Streaming \bigstrut\\
    \cline{2-8}
          & \textbf{Accelerator Support} & Manual IP & HLS (only SystemC) & Manual IP, HLS & HLS   & No    & Manual IP, HLS  \bigstrut\\
    \hline
    \end{tabular}
    }
\end{table*}%


Following are the key contributions of this paper:
\begin{itemize}
	\item A complete design framework for the rapid synthesis of predictable HMPSoCs that can be used for prototype-based Design Space Exploration (DSE). This was achieved by integrating (1) the heterogeneous architecture template, with accelerators and different Processing Element (PE) types (e.g., ARM), with (2) the communication assist modules and their corresponding models of computation.
   \item A demonstrative, automated port of the framework targeting the Xilinx Zynq ZEDboard\cite{zedboard}.	
	\item A case study on how our methodology can be used for fast design space exploration on the Xilinx Zynq heterogeneous platform, using an industrial vision application for ink-jet printing.
\end{itemize}


The remainder of this paper is organized as follows. Section \ref{sec:RelatedWork} summarizes the related work on rapid synthesis flows for predictable HMPSoCs. Section \ref{sec:sdf} introduces the application and architecture models and the architecture template used in our framework. Section \ref{sec:DesignFlow} gives the details of our design framework. Section \ref{sec:communication} describes the MAMPSx communication model. Section \ref{sec:Experiments} provides the experimental results. Section \ref{sec:Conclusion} concludes the paper and gives a direction for future work.

\section{Related Work}
\label{sec:RelatedWork}
HMPSoC synthesis methodologies are widely studied in the literature. Table \ref{tab:rwtable} lists and compares the approaches that are currently most relevant for predictable platform generation. For other approaches to HMPSoC synthesis, readers may refer to \cite{5247153}.

DaedalusRT \cite{6176632} is an HMPSoC framework that takes C code as input and derives a Kahn Process Network (KPN) and a Cyclo-Static Data Flow (CSDF) model for analysis. These models give conservative bounds for real-time performance requirements. ESPAM is used as a back-end to synthesize the platform for prototyping. However, only manually written accelerators are supported, and the user must derive both the computation and communication models for these accelerators manually.
System Codesigner \cite{6172642} is an HMPSoC synthesis framework that takes SystemC as input and generates a complete HMPSoC. It uses a dataflow model called SystemMoC, written in SystemC, to analyze and predict the performance. Like our framework, it requires good worst-case execution time estimates, annotated to the model, for accurate prediction. However, only applications written in SystemC are supported, and only accelerators written in SystemC can be modeled and synthesized. Both of these flows support only the FIFO interconnect type and do not have a CA. Therefore, out-of-order access of data and other forms of communication synchronization are neither synthesizable nor analyzable in either of these flows.

Space CoDesign Systems\cite{spacecodesign} is a recent start-up company for electronic-system-level synthesis. Like ours, their framework enables rapid synthesis of HMPSoC designs and supports Zynq targets. However, they target average-case design, using trace-based, cycle-approximate simulation\cite{spacereview} to verify whether the performance constraints can be met, whereas our framework can design for, and predict, a worst-case performance bound. Furthermore, they only support bus-based communication, while our framework supports FIFO, bus and network-on-chip interconnects.

A template-based synthesis framework similar to ours is described by Corre \textit{et al.}\cite{Corre2013}. They use the Daedalus framework as a front-end to generate a homogeneous PE platform, after which they explore the design space of which functions in each PE to accelerate using HLS. However, each of their accelerators can only be connected as a co-processor to a single PE, which limits the performance benefit. Their experiments show that, because of their simplified models for communication and architecture, they may not be able to accurately predict the performance of the generated platform.

In our framework, we support multiple types of accelerators (manual RTL, HLS-C and HLS-SystemC) and PE types for integration, while maintaining predictability for all of them. As the communication is modeled through the CA, the user only has to provide the computation model of the accelerators. We also provide, by analysis, a conservative upper bound on the performance of the generated platform. Additionally, our heterogeneous framework supports diverse PE, interconnect and network interface types.

\section{Application Model and Architecture Model}
\label{sec:sdf}
This section provides an overview of the application model, the architecture template and the architecture model used in the proposed design framework. These formal models are needed for performance analysis, as well as for the computation of the required buffer sizes, when synthesizing predictable HMPSoCs.

\subsection{Application Graph Model}
\label{sec:sec:appmodel}

Synchronous Data Flow Graphs (SDFGs) \cite{lee1987synchronous} are used to model concurrent multimedia applications with timing constraints. The SDFG model of an example application is shown in Figure \ref{fig:example_model}. The nodes model the tasks and are referred to as \textit{actors}, which communicate through \textit{tokens} sent from one actor to another over the edges that model the dependencies. The example application is modeled with three actors \textit{A, B} $\&$ \textit{C} and three edges \textit{D1, D2} $\&$ \textit{D3}. An actor \textit{fires} (executes) when there are sufficient input tokens on all of its input edges and sufficient buffer space on all of its output edges. Every time an actor fires, it consumes a fixed number of tokens from its input edges and produces a fixed number of tokens on its output edges. These token amounts are referred to as \textit{rates}. The rates determine how often the actors have to fire with respect to each other. The edges may contain \textit{initial tokens}, indicated by bullets, as in Figure \ref{fig:example_model}.

\begin{figure}[h]
\centering
\includegraphics[width=0.15\textwidth]{images/Example_SDF.pdf}
\caption{Example SDF Model}
\label{fig:example_model}
\end{figure}

\begin{definition}[SDFG]An SDFG (\textit{A, E}) consists of a set \textit{A} of actors and a set \textit{E} of edges. An edge \textit{e = ($a_1$, $a_2$, $t_1$, $t_2$)} represents a dependency of actor \textit{$a_2$} on \textit{$a_1$}. When \textit{$a_1$} fires, it generates \textit{$t_1$} tokens on \textit{e} and when \textit{$a_2$} fires, it consumes \textit{$t_2$} tokens from \textit{e}. Initial tokens on edges are defined as \textit{TokIn}: \textit{E} $\rightarrow$ natural numbers including 0.
\end{definition}
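To make the firing rule concrete, the following sketch checks whether an actor may fire, given the token counts and free buffer space on its edges. The names and the \texttt{edge\_t} structure are purely illustrative and are not part of the MAMPSx tool-chain.

```c
#include <stdbool.h>

/* Illustrative only: an edge as seen by one actor. */
typedef struct {
    int tokens;   /* tokens currently stored on the edge      */
    int space;    /* free token slots left in the edge buffer */
    int rate;     /* tokens consumed/produced per firing      */
} edge_t;

/* An actor may fire when every input edge holds at least `rate`
 * tokens and every output edge has room for `rate` tokens. */
bool can_fire(const edge_t *in, int n_in, const edge_t *out, int n_out)
{
    for (int i = 0; i < n_in; i++)
        if (in[i].tokens < in[i].rate) return false;
    for (int i = 0; i < n_out; i++)
        if (out[i].space < out[i].rate) return false;
    return true;
}
```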

\begin{definition}[Application Graph (AG)]An \textit{AG} is represented as (\textit{A, E, AP, EP}), which is derived from the \textit{SDFG (A, E)}. \textit{AP} and \textit{EP} provide the resource requirements of the actors and of the edges on the platform, respectively. For each actor \textit{a} $\in$ \textit{A}, \textit{AP} provides a 3-tuple (\textit{$p_{types}$, ET, mem}), where \textit{$p_{types}$} represents the implementation alternatives of the actor, and \textit{ET} and \textit{mem} represent the execution time (in time-units) and the memory needed (in bits) on the implementation alternatives, respectively. \textit{AP} provides null values for \textit{ET} and \textit{mem} for unsupported implementation alternatives. For each edge \textit{e = ($a_1$, $a_2$, $t_1$, $t_2$)} $\in$ \textit{E}, \textit{EP} provides a 1-tuple (\textit{sz}), where \textit{sz} is the size of a token (in bits).
\end{definition}

Table \ref{tab:app_graph_prop} shows the values of \textit{AP} and \textit{EP} for actors and edges of the example application.

\begin{table}[h]
\caption{Resource Requirement of Actors and Edges of Running Example\label{tab:app_graph_prop}}{%
\resizebox{0.95\columnwidth}{!} {
\begin{tabular}
{c c c c |c c } \hline
\textbf{Actors} & $\boldsymbol{p_{types}}$ & \textbf{GPP(\textit{ET, mem})} & \textbf{Accel(\textit{ET, mem})} & \textbf{Edges} & \textbf{sz} \\ \hline
$A$ & GPP & (100, 200) & (--, --) & $d_1$ & 512 \\
$B$ & GPP, Accel & (800, 400) & (100, 400)  & $d_2$ & 512  \\
$C$ & GPP & (50, 300) & (--, --) & $d_3$ & 32 \\
\hline
\end{tabular}}
}
\end{table}%

Throughput is an important property of multimedia applications: it describes how fast an application is able to run, and it is defined as the inverse of the average iteration time of the application. A technique for analyzing the throughput of SDFGs is described in \cite{ghamarian2006throughput}.

\subsection{Architecture Template \& Model}
\label{sec:sec:archmodel}
In this section we first motivate the use of the Communication Assist module based on the C-HEAP \cite{nieuwland2002c} interface for our architecture and then we describe the MAMPSx heterogeneous architecture template. After that, we describe an example architecture model derived from this template.

\subsubsection*{Communication Assist}
The main idea behind the proposed C-HEAP based CA is the decoupling of communication from computation. This is achieved through a shared circular buffer that is accessed using only synchronization primitives; no additional data copying is needed. The circular buffer is shown in Figure \ref{fig:cbuffer} and the synchronization primitives are listed in Table \ref{tab:cheap}. The producer only needs \textit{claim space} to obtain empty buffer space and \textit{release space} to release the written buffer. Similarly, the consumer only needs \textit{claim data} and \textit{release data} to obtain a full data buffer and to release the read data buffer, respectively.

\begin{figure}[h]
\centering
\includegraphics[width=0.3\textwidth]{images/CCB.pdf}
\caption{C-HEAP Circular Buffer}
\label{fig:cbuffer}
\end{figure}
\vspace{-0.3cm}
\begin{table}[h]
\caption{C-HEAP CA Synchronization Primitives\label{tab:cheap}}{%
\resizebox{\columnwidth}{!} {\large
\begin{tabular}
{|l |l| } \hline
\textbf{Synchronization}  & \textbf{Description} \\ 
 \textbf{Primitives} &  \\ \hline
Claim Space & Producer claims output space by trying to move the write end pointer \\ \hline
Release Space & Producer releases output data by moving the write start pointer \\ \hline
Claim Data & Consumer claims input data by trying to move the read end pointer \\  \hline
Release Data & Consumer releases the input space by moving the read start pointer \\ \hline
\end{tabular}}
}
\end{table}%

Though the CA increases processor efficiency by off-loading the communication tasks of the PE, the main benefit of a CA is that it allows the independent modeling of the different heterogeneous components of the HMPSoC (see Section \ref{sec:communication} for details). Take, for example, an accelerator connected to an interconnect via a CA: the CA decouples the complex communication handshaking of the network interface of the interconnect from the accelerator computation. Additional benefits of the C-HEAP based CA include: (1) out-of-order data access for window-type kernels; (2) a standardized IP interface that is independent of the network interface; and (3) a simplified accelerator design, as the focus is on computation.
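As an illustration, the four primitives of Table \ref{tab:cheap} can be sketched in software for a single-producer/single-consumer circular buffer. This is a minimal, single-threaded sketch with hypothetical names, assuming one token per claim; the actual CA realizes these primitives in hardware and must additionally deal with memory-consistency issues that are ignored here.

```c
/* Illustrative C-HEAP style circular buffer (one token per claim). */
typedef struct {
    int size;   /* capacity in tokens                   */
    int count;  /* full tokens currently in the buffer  */
    int wr, rd; /* producer / consumer slot positions   */
} cheap_buf_t;

/* Producer: try to claim an empty slot; returns its index or -1. */
int claim_space(cheap_buf_t *b) {
    return (b->count < b->size) ? b->wr : -1;
}
/* Producer: release the written slot, making it visible as data. */
void release_space(cheap_buf_t *b) {
    b->wr = (b->wr + 1) % b->size;
    b->count++;
}
/* Consumer: try to claim a full slot; returns its index or -1. */
int claim_data(cheap_buf_t *b) {
    return (b->count > 0) ? b->rd : -1;
}
/* Consumer: release the read slot, returning it as free space. */
void release_data(cheap_buf_t *b) {
    b->rd = (b->rd + 1) % b->size;
    b->count--;
}
```

Note that the data itself stays in the shared buffer; the primitives only move the pointers, which is what makes the scheme zero-copy.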

\subsubsection*{Generic MAMPSx Heterogeneous Architecture Template}
The second input to the design framework is the architecture model derived from an architecture template (see Figure \ref{fig:design_flow}). This template (Figure \ref{fig:template}) describes the processing elements of the architecture available in the hardware platform (\textit{Tiles}) and how these components are connected (\textit{Interconnect}).  

\begin{figure}[h]
\centering
\includegraphics[width=1.0\columnwidth]{images/template.pdf}
\caption{MAMPSx Template}
\label{fig:template}
\end{figure}

As an example, the architecture platform for a vision system is shown in Figure \ref{fig:template}. Tiles 1 and 6 show a source interface tile (for camera inputs, e.g., Cameralink) and a sink interface tile (for display outputs), which are commonly used in vision systems. The use of a CA allows the interface to be decoupled from the rest of the system for predictability. Tile 2 shows a simple tile with a processing element (PE) that is connected to the network interface (NI), a local memory and some optional peripherals. Tile 3 shows a similar tile that has been extended with a CA from the PE. Tile 4 shows an example of hardware accelerators, which are an integral part of our template. Tile 5 is the memory tile, an option for vision applications that require large frame buffers for processing. This can be either an SRAM or BRAM based memory tile with a CA, or a DDR tile with a predictable memory controller\cite{Akesson:2007:PPS:1289816.1289877}. Finally, different types of interconnects (e.g., FIFO links, buses and networks-on-chip) can be seamlessly integrated, as they only have to support the NI interface.

\subsubsection*{Architecture Model}
From this generic architecture template, a specific architecture platform can be modeled through a platform graph.
\begin{definition}[Platform Graph (PG)]A \textit{PG} is represented as (\textit{T, C}), which contains a set \textit{T} of tiles and a set \textit{C} of connections. A tile \textit{t} $\in$ \textit{T} is a 9-tuple (\textit{$pe_{type}$, \textit{$\nu$}, m, \textit{$ca_{type}$}, \textit{$ni_{type}$}, ci, co, i$\omega$, o$\omega$}), where $pe_{type}$ $\in$ \textit{PET} (\textit{PET} is the set of processing element types), \textit{$\nu$} is the frequency (in MHz), \textit{m} is the memory size (in bits), $ca_{type}$ $\in$ \textit{$CA_T$} (\textit{$CA_T$} is the set of CA types), $ni_{type}$ $\in$ \textit{$NI_T$} (\textit{$NI_T$} is the set of network interface types), \textit{ci} $\&$ \textit{co} are the maximum numbers of input and output connections supported by the NI, and \textit{i$\omega$} $\&$ \textit{o$\omega$} are the maximum incoming and outgoing bandwidths (in bits/time-unit). A connection \textit{c} $\in$ \textit{C} is a 4-tuple (\textit{$c_{type}$}, \textit{L}, \textit{d}, \textit{N}), where $c_{type}$ $\in$ \textit{CT} (\textit{CT} is the set of interconnect types), \textit{L} is the latency (in time-units), and \textit{d} $\&$ \textit{N} are the depth (in bits) and width (in bits) of the interconnect, respectively.
\end{definition}
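For illustration, the tile and connection tuples of the definition above could be captured as plain data structures. The names below are hypothetical and are not taken from the MAMPSx sources; the CA and NI type fields are reduced to enumerations for brevity.

```c
/* A tile t = (pe_type, nu, m, ca_type, ni_type, ci, co, iw, ow). */
typedef enum { PE_GPP, PE_ACCEL } pe_type_t;
typedef enum { CA_HW } ca_type_t;
typedef enum { NI_AXI, NI_FSL } ni_type_t;

typedef struct {
    pe_type_t pe_type;
    int freq_mhz;     /* nu: clock frequency (MHz)               */
    int mem_bits;     /* m: local memory size (bits)             */
    ca_type_t ca_type;
    ni_type_t ni_type;
    int ci, co;       /* max input / output connections of NI    */
    int iw, ow;       /* max in/out bandwidth (bits/time-unit)   */
} tile_t;

/* A connection c = (c_type, L, d, N). */
typedef enum { CT_FIFO, CT_BUS, CT_NOC } c_type_t;

typedef struct {
    c_type_t c_type;
    int latency;      /* L: latency (time-units) */
    int depth_bits;   /* d: depth (bits)         */
    int width_bits;   /* N: width (bits)         */
} conn_t;
```

Instantiating these structures with the values of Table \ref{tab:arch_graph_prop} yields the two tiles and two FIFO connections of the running example.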

\begin{table}[h]%
\centering
\caption{Properties of the Example Platform\label{tab:arch_graph_prop}}
{\resizebox{\columnwidth}{!}{
\begin{tabular}
{c c c c c c c c c c} \hline
$\boldsymbol{tile}$  & \textit{$\boldsymbol{pe_{type}}$} & \textit{$\boldsymbol{\nu}$} & \textit{\textbf{m}} & \textit{$\boldsymbol{ca_{type}}$} & $\boldsymbol{ni_{type}}$ & $\boldsymbol{ci}$ & $\boldsymbol{co}$ & $\boldsymbol{i\omega}$ & $\boldsymbol{o\omega}$ \\ \hline
$tile_0$  & GPP & 667 & 4096 & HW & AXI & 8 & 8 & 12 & 12 \\
$tile_1$  & Accel & 100 & 800 & HW & AXI & 8 & 8 & 12 & 12 \\
\hline\\
\end{tabular}}

\begin{tabular} 
{ c c c c c} \hline
$\boldsymbol{connection}$ & \textit{$\boldsymbol{c_{type}}$} & $\boldsymbol{L}$ & $\boldsymbol{d}$ & $\boldsymbol{N}$\\ \hline
 $interconnect_0$ & FIFO & 3 & 1 & 32\\
 $interconnect_1$ & FIFO & 6 & 1 & 32\\
\hline
\end{tabular}}
\end{table}%

Table \ref{tab:arch_graph_prop} shows the values of \textit{T} and \textit{C} for the tiles and connections of the architecture model of the running example.

\section{Design Framework}
\label{sec:DesignFlow}
In this section, we present the details of the proposed design framework. As Figure \ref{fig:design_flow} shows, it consists of two main blocks. The \textit{Analysis and Exploration} block finds a mapping of the application onto the architecture that is capable of achieving the required application throughput. This mapping is input, together with the original application and architecture specifications, to the \textit{HMPSoC Platform Generation} block, which generates an entire HMPSoC, with the corresponding software and hardware modules, for automated synthesis to an FPGA prototype using out-of-the-box FPGA development software. As the focus of this paper is on system-level synthesis, we do not discuss how the accelerators are generated. They can be generated from a manual RTL library, through High-Level Synthesis (HLS)\cite{coussy2008high}, from C-based HLS libraries (e.g., the Vivado HLS OpenCV library\cite{xilinx_website} or the Open-Source Accelerator Store\cite{ucla_hls_store}), or through our skeleton-based accelerator generation method\cite{johanthesis}.

\subsection{Analysis and Exploration}
\label{sec:sec:sdf3}
This stage utilizes the SDF3\cite{stuijk2007predictable} tool set, which consists of several tools that allow automatic mapping of an application described as an SDF graph onto a given platform. Buffer distributions, task mappings and static-order schedules are determined and gathered in the mapping output of SDF3. The tool set also provides a worst-case bound on the throughput of the application for the given mapping. The MAMPSx virtual platform of the SDF3 tool set was modified with the new communication model described in Section \ref{sec:communication}.

\begin{figure}[h]
\centering
\includegraphics[width=0.25\textwidth]{images/Example_SDF_Mapped.pdf}
\caption{Mapping Output of the Running Example. Omitted Port Rates are to be Interpreted as 1.}
\label{fig:mapping}
\end{figure}

Figure \ref{fig:mapping} shows the mapping output for the running example. Actors $A$ and $C$ are mapped to $tile_0$ and actor $B$ is mapped to $tile_1$. The generated static-order schedule on $tile_0$ is $(A, C)$, as depicted in red. The calculated buffer size for each channel (one token in this example) is shown in blue.

%\vspace{-0.1cm}

\subsection{HMPSoC Platform Generation}
\label{sec:sec:MAMPSx}
In this stage, the application model and architecture template, together with the mapping output from SDF3, are used to generate the complete HMPSoC platform. The generated platform for the running example is shown in Figure \ref{fig:generated}. Firstly, the tile (e.g., ARM and accelerator) and interconnect (e.g., FIFO) components are instantiated from the specified mapping output with the required buffer sizes. C-HEAP CA components are also instantiated from the template library. Secondly, software projects are generated for each tile type. These include the actor wrapper code with C-HEAP primitives and the scheduler that implements the static-order schedule from the mapping output. This is combined with a template project that already includes an implementation of the scheduling and communication libraries for each PE type. Additional peripheral driver libraries (e.g., timer, storage) are also added for execution time measurement and automated data collection. As shown, in the case of the ARM PE tile, the C-HEAP CA circular buffers can be implemented either in the scratch-pad memory or in the DDR memory. Caching is disabled on the ARM PE, while program code is placed in the DDR memory. The ARM PE CA was implemented using the AXI DMA IP\cite{xilinx_website}, while the accelerator CA implementation is from our previous work\cite{he2013}.
\vspace{-0.1cm}
\begin{figure}[h]
\centering
\includegraphics[width=0.45\textwidth]{images/GeneratedPlatform.pdf}
\caption{Generated Platform for the Running Example}
\label{fig:generated}
\end{figure}

This design framework is fully automated and currently targets the Xilinx Zynq ZEDboard. It can easily be ported to other FPGA devices and boards.

\section{MAMPSx Communication Model}
\label{sec:communication}
Here we describe the communication models used for our HMPSoC. Figure~\ref{fig:Communication} shows an example parameterized dataflow model of the communication over channel D1 from $tile_0$ to $tile_1$. It consists of three blocks: the ARM PE and CA communication model for $tile_0$, the AXI-Streaming FIFO model for $interconnect_0$, and the CA and accelerator communication model for $tile_1$.

%All port-rates are $1$, unless otherwise indicated. 

\begin{figure}[h!]
\centering
\includegraphics[width=0.5\textwidth,height=3cm,keepaspectratio]{images/model.pdf}
\caption{Parameterized Communication Model of the Channel D1 in the Running Example. Omitted Port Rates are to be Interpreted as 1.}
\label{fig:Communication}
\vspace{-0.4cm}
\end{figure}

The first block, which contains actors $s1 - s5$, models the delay of sending the data via the CA of the ARM tile. All actors in the block, except $s3$, have zero execution time. Actor $s1$ isolates the computation of actor $A$ from the CA communication, i.e., once actor $A$ completes a firing, it can carry on with the next firing while the CA takes care of the output data transmission. Actor $s2$ performs token serialization, i.e., it splits each input token into $f$ $32$-bit words. Actor $s3$ models the CA delay associated with sending a word, which is $4008$\,ns. Actors $s4$ and $s5$ are acknowledgement actors that block actors $s2$ and $s1$ from firing before the previous token fragmentation and output data transmission, respectively, are completed.

The second block (actors $r$ and $l$) is a latency-rate model\cite{Wiggers2007} of the AXI-Streaming FIFO for transferring a $32$-bit word. The third block (i.e., actors $wr$, $rd$ and $d_1$) models the CA communication of the accelerator tile. %from our previous work\cite{he2013}. 
Actors $wr$ and $rd$ model the write and read latencies of a word, and actor $d_1$ performs the de-serialization of words that belong to the same token\cite{he2013}.

\vspace{-0.4cm}
\section{Experiments}
\label{sec:Experiments}
In this section we present results obtained by using our design framework to implement a real application. We use an industrial vision application, called Fast Focus on Structures (FFoS), that is used for ink-jet printing on OLED structures as our driving case study. We first verify our framework by comparing the performance of the generated platforms with their analyzed performance. We then present a case study using the FFoS application to show how our tool can be used to efficiently explore the design space and how it can reduce the design time. Our implementation platform is the Xilinx Zynq Evaluation Development Board (ZEDboard) with an XC7Z020 FPGA on board. Xilinx ISE 14.4 was used for synthesis and implementation.


\begin{figure}[h]
\centering
\includegraphics[width=0.36\textwidth]{images/ffosapplication.pdf}
\caption{FFoS Application}
\label{fig:ffos_app}
\end{figure}

\subsection{FFoS Application}
\label{sec:sec:ffos}
FFoS is an application used in OLED manufacturing to accurately detect the centers of organic materials for ink-jet printing. As shown in Figure \ref{fig:ffos_app}, it consists of four main processing blocks\cite{he2011acivs}. The input image of an OLED wafer section first goes through Otsu thresholding (histogram-based image threshold selection) and binarization, to differentiate the OLED segment from the background. The image is then eroded to remove noise. Finally, it is projected onto a horizontal and a vertical vector to find the centers.




The SDF model for this application is shown in Figure \ref{fig:ffos_model} and the corresponding actor implementation alternatives are listed in Table \ref{tab:ffos_graph_prop}. Typically for this application, the values of $W$ and $H$ are 120 and 45, respectively. All actors have software implementations for the ARM PE type. Additionally, actors $Proj$, $Eros$ and $Bin$ also have hardware accelerator implementations\cite{he2013}.
\vspace{-0.2cm}
\begin{figure}[h]
\centering
\includegraphics[width=0.4\textwidth]{images/ffos_model.pdf}
\caption{SDF Model of the FFoS Application}
\label{fig:ffos_model}
\end{figure}

\vspace{-0.4cm}

\begin{table}[h]
\centering
\caption{Implementation Alternatives of Actors and Resource Requirements of the Edges of FFoS Application\label{tab:ffos_graph_prop}}{%
\resizebox{8cm}{!} {\normalsize
\begin{tabular}
{c c c c| c c} \hline
$\boldsymbol{Actors}$ & $\boldsymbol{p_{types}}$ & $\boldsymbol{GPP~ET~(\mu s)}$ & $\boldsymbol{Accel.~ET~(\mu s)}$ & $\boldsymbol{Edges}$ & $\boldsymbol{sz~(bits)}$ \\ \hline
$Src$ & ARM & 14,698.32 & - & D1 & 32 \\
$Otsu$ & ARM & 10,018.94 & - & D2 & 32 \\
$Bin$ & ARM, Accel. & 14,284.09 & 54.01 & D3 & 32 \\
$Eros$ & ARM, Accel. & 41,611.63 & 0.97 & D4 & 1\\
$Proj$ & ARM, Accel. & 8,992.43 & 1.60 & D5 & 1 \\
$Sink$ & ARM & 24.15 & - & D6 & 16 \\
\hline
\end{tabular}}
}
\end{table}%

\vspace{-0.4cm}

\begin{table}[h]
  \centering
  \caption{Comparison of Analyzed Throughput with Throughput Obtained on FPGA}
  \resizebox{8cm}{!}{
    \begin{tabular}{|c|>{\centering\arraybackslash}p{2cm}|>{\centering\arraybackslash}p{2.3cm}|>{\centering\arraybackslash}p{1.5cm}|}
    \hline
       & \multicolumn{3}{c|}{\textbf{FFoS Application}} \\
    \hline
    \textbf{Configuration} & \textbf{Analyzed}  & \textbf{FPGA}  & \textbf{Var} \% \bigstrut\\
     &  \textbf{Throughput}  &  \textbf{Throughput}  &  \\    
    \hline
   (0,0,0)  & 11.16	& 11.64	& 4.37 \bigstrut\\     \hline
   (0,0,1)  & 11.48	& 11.85	& 3.16 \bigstrut\\    \hline
   (0,1,0)  & 20.51	& 22.63	& 10.34 \bigstrut\\     \hline
   (0,1,1)  & 21.99	& 23.37	& 6.31 \bigstrut\\     \hline
   (1,0,0)  & 12.29	& 12.91	& 5.06 \bigstrut\\      \hline
   (1,0,1)  & 12.69	& 12.95	& 2.10 \bigstrut\\     \hline
   (1,1,0)  & 25.15	& 28.00	& 11.31 \bigstrut\\     \hline
   (1,1,1)  & 27.40	& 29.34	& 7.07 \bigstrut\\     \hline
     \end{tabular}}
  \label{tab:verification}%
\end{table}%

%\begin{table}[h]
%  \centering
%  \caption{Comparison of throughput for FFoS application obtained on FPGA with analysis}
%  \resizebox{8cm}{!}{
%    \begin{tabular}{|c|>{\centering\arraybackslash}p{2.3cm}|>{\centering\arraybackslash}p{2.3cm}|>{\centering\arraybackslash}p{1.5cm}|}
%    \hline
%       & \multicolumn{3}{c|}{FFoS Application} \\
%    \hline
%    Configuration & Analyzed  & FPGA  & Var \% \bigstrut\\
%     &  Performance (ms) &  Performance (ms) &  \\    
%    \hline
%   (0,0,0)  & 89.63	& 85.88	& 4.37 \bigstrut\\     \hline
%   (0,0,1)  & 87.09	& 84.42	& 3.16 \bigstrut\\    \hline
%   (0,1,0)  & 48.75	& 44.18	& 10.34 \bigstrut\\     \hline
%   (0,1,1)  & 45.48	& 42.78	& 6.31 \bigstrut\\     \hline
%   (1,0,0)  & 81.37	& 77.45	& 5.06 \bigstrut\\      \hline
%   (1,0,1)  & 78.82	& 77.20	& 2.10 \bigstrut\\     \hline
%   (1,1,0)  & 39.76	& 35.72	& 11.31 \bigstrut\\     \hline
%   (1,1,1)  & 36.49	& 34.08	& 7.07 \bigstrut\\     \hline
%     \end{tabular}}
%  \label{tab:verification}%
%\end{table}%

\subsection{Verifying the Framework}
\label{sec:sec:verify}
In order to verify our design framework, we automatically and exhaustively generate all eight possible configurations from the available implementation alternatives. A configuration is defined as a 3-tuple (\textit{$Proj_{type}$, $Eros_{type}$, $Bin_{type}$}), where each $actor_{type}$ has the value $0$ or $1$ for an implementation on the ARM PE or as an accelerator, respectively. These configurations are listed in Table \ref{tab:verification}, each with the analyzed throughput, the FPGA prototype throughput and the variation between them. The measured throughput is never lower than the analyzed throughput, and the maximum variation is 11.31\%; our framework thus provides a conservative worst-case bound on the throughput of the generated HMPSoCs. Our investigations showed that this variation was due to memory access time variations in the DDR RAM.


\subsection{DSE Case Study}
\label{sec:sec:dse}
Here we present a case study of using our design framework to explore the design space by computing the trade-offs between execution time, area, and power. Figure \ref{fig:area} (top) shows the Pareto-optimal front (blue line) of the execution time - area trade-offs for all eight configurations. The three dominated configurations ($001$, $101$ and $011$) all have the binarization actor implemented as an accelerator. In our implementation, binarization requires 5400 ($120\times45$) words to be transferred to the accelerator, while the other channels require 180 ($4\times45$) words or fewer (see Table \ref{tab:ffos_graph_prop}). The communication overhead of binarization overshadows the gain of accelerating the actor, except in the case of configuration $111$. It is therefore critical to model and predict the communication overheads in HMPSoCs, such that uninteresting design points can be pruned away at an early stage of the flow.

%Here we present a case study of doing a design space exploration to compute execution time, area and power trade-offs by using our design framework. 

\vspace{-0.5cm}

\begin{figure}[h]
\centering
\includegraphics[width=0.5\textwidth]{images/area.pdf}
\caption{Execution Time vs. Area Trade-offs and Execution Time vs. Power Trade-offs}
\label{fig:area}

\end{figure}
%

Figure \ref{fig:area} (bottom) shows the execution time - power trade-offs. The power consumption was calculated using the XPower tool from Xilinx\cite{xilinx_website}, which estimates the FPGA power consumption as a function of the area and the ARM PE power consumption as a function of the processor load. We define the processor load as the sum of the execution times of the actors on the ARM divided by the total execution time of the application. It is interesting to note that configuration $110$, though a Pareto point in the execution time - area trade-off graph, is dominated here. Likewise, configuration $011$ is a Pareto point here, but is dominated in the execution time - area trade-off graph.

%\todo{analysis paragraph}
%\begin{figure}[h]
%\centering
%\includegraphics[width=0.4\textwidth]{images/power.pdf}
%\caption{Execution Time vs. Power Trade-off}
%\label{fig:power}
%\end{figure}
\vspace{-0.1cm}
\subsection{Design Time}
\label{sec:sec:designtime}
The time spent on design space exploration is an important aspect when designing HMPSoCs. Table \ref{tab:dtime} lists the design times required by the various parts of the framework. Whereas the manual design of a single configuration took 5 days to complete, we were able to complete the entire design space exploration of all eight configurations in 8 hours, including the synthesis and implementation time. Both figures assume a working knowledge of the application and experience with the design framework.
\vspace{-0.3cm}
% Table generated by Excel2LaTeX from sheet 'Sheet1'
\begin{table}[h]
  \centering
\caption{Design Time\label{tab:dtime}}
\resizebox{\columnwidth}{!}{\normalsize
    \begin{tabular}{|l|>{\centering\arraybackslash}p{1.4cm}|>{\centering\arraybackslash}p{2.1cm}|>{\centering\arraybackslash}p{1.7cm}|}
    \hline
       & \textbf{Manual}  & \textbf{Generating}  & \textbf{Complete}  \\
          &  \textbf{Design} &  \textbf{Single Design} &  \textbf{DSE} \bigstrut\\
    \hline
    Gathering required actor metrics & -  & 4 hours  & 4 hours \bigstrut\\
    \hline    
    Creating Application Model & -  & 1 hour  & 1 hour \bigstrut\\
    \hline
    Generating Architecture Model & -  & 1s  & 8s \bigstrut\\
    \hline
    Generating Mapping & -  & 1s  & 8s \bigstrut\\
    \hline
    Platform Generation & 5 days  & 30s  & 240s \bigstrut\\
    \hline
    FPGA Synthesis & 20 mins  & 20 mins  & 160 mins \bigstrut\\
    \hline
    Total Time & $\sim$ 5 days  & $\sim$ 5 hours  &  $\sim$ 8 hours \bigstrut\\
    \hline

    \end{tabular}%
  }
\end{table}%
\vspace{-0.1cm}
\section{Conclusions}
\label{sec:Conclusion}
In this paper we proposed a design framework to generate predictable HMPSoC designs. Our approach takes the description of the application and the architecture model and produces the corresponding HMPSoC platform, which meets the throughput constraint. The design framework is ported to target the Xilinx Zynq ZEDboard. A case study of an industrial vision application is presented to find the trade-offs between area, performance and power, and to show that our design framework allows for fast and automated design space exploration of predictable HMPSoCs.

%The current available features of this design flow and tools are as follows: 
%\begin{itemize}
%\item Three inbuilt applications (JPEG Decoder, Fast focus on Structures, Producer and Consumer)
%\item Support for several processing tile types including ARM, Microblaze and Hardware Accelerators.
%\item Support for several interconnect standards including Point to Point Network(FSL and AXI Streaming),Bus and Network-on-chip.
%\item Support for C-HEAP\cite{nieuwland2002c} based communication interface standard.
%\item Supported target boards includes Xilinx Zynq~\cite{xilinx_website} and ML605.
%\end{itemize}

%However, next we would like to integrate different accelerator generation methods like HLS, our skeleton method\cite{johanthesis} and compare them. 

Currently, the ARM tile with DDR RAM is not completely predictable; in the future we will use FPGA BRAM memory and a predictable memory controller\cite{Akesson:2007:PPS:1289816.1289877} with the DDR RAM to make it fully predictable.

We would also like to develop and automate more ways of exploring the design space of the accelerators in the \textit{Analysis and Exploration} stage, for example: (1) finding which accelerator actor has the better performance/area efficiency; and (2) exploring different degrees of parallelism for the accelerator actors.

%\vspace{-0.1cm}
% use section* for acknowledgement
\section*{Acknowledgement}
\vspace{-0.05cm}
We appreciate the support of the Dutch Ministry of Economic Affairs (Pieken in de Delta) for this research work within the Embedded Vision Architecture project.
\vspace{-0.05cm}
%This research was supported by the Embedded Vision Architecture (EVA) project and was funded by the Dutch Ministry of Economic Affairs (Pieken in de Delta). We appreciate their support.

% trigger a \newpage just before the given reference
% number - used to balance the columns on the last page
% adjust value as needed - may need to be readjusted if
% the document is modified later
%\IEEEtriggeratref{8}
% The "triggered" command can be changed if desired:
%\IEEEtriggercmd{\enlargethispage{-5in}}

% references section

% can use a bibliography generated by BibTeX as a .bbl file
% BibTeX documentation can be easily obtained at:
% http://www.ctan.org/tex-archive/biblio/bibtex/contrib/doc/
% The IEEEtran BibTeX style support page is at:
% http://www.michaelshell.org/tex/ieeetran/bibtex/
%\bibliographystyle{IEEEtran}
% argument is your BibTeX string definitions and bibliography database(s)
%\bibliography{IEEEabrv,../bib/paper}
%
% <OR> manually copy in the resultant .bbl file
% set second argument of \begin to the number of references
% (used to reserve space for the reference number labels box)



\bibliographystyle{IEEEtran}
\bibliography{rsp2013}

% that's all folks
\end{document}


