\documentclass[conference,a4paper]{IEEEtran}
\usepackage{amsmath}
\usepackage[utf8]{inputenc}
\usepackage[english]{babel}
\usepackage{amsfonts}
\usepackage[T1]{fontenc}
\usepackage{epsfig}
\usepackage{graphicx}
\usepackage{fullpage}
\usepackage{enumerate}
\usepackage{moreverb}
\usepackage{hyperref}

\title{Running an Instagram-like filter on a remote embedded platform} 
\author{Group~5\\Abhimanyu~Selvan,
Bjarni~Thor~Arnason,
Mario~Voorsluys,
Quintijn~Hendrickx}

\begin{document}

\maketitle

\section*{Introduction}
The lab project for the course Embedded Computer Architecture this year was to parallelize, map and optimize
an Instagram-like filter application on an embedded platform called ECA-CompSoC. The platform has three
processor tiles, each equipped with a MicroBlaze core. The first part of the project was to parallelize
the given filter function on a desktop computer so that the image is processed by multiple threads.
The next step was to port this application to the embedded platform and execute it sequentially on tile 2.
The task after that was to parallelize it to execute on at least two of the tiles on the platform. Finally, the
parallel version was to be optimized in terms of both performance and energy, and the
energy-performance trade-off was to be explored. This document is the final report on the project
and contains an overview of the solutions and the results obtained.

\section{Parallelizing the code on a desktop}
The first step of the task is to get familiar with the filter itself, and parallelize it on a PC platform. This was done by using \emph{pthread}, a multi-threading library for C applications.
 
\subsection{Approach}
The parallel version of the given \emph{filter} program first splits the image up depending on the number of threads specified: if the number of threads is $X$, the image is split horizontally into $X$ parts (only the rows are divided; the columns remain whole). $X$ threads are then started and each is assigned one section of the image. If the number of rows cannot be divided equally among the threads, the last thread processes the extra rows.
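A minimal sketch of this row split with \emph{pthread} is shown below. The image dimensions, thread count, and the per-pixel operation are illustrative stand-ins, not the actual filter code; the real sizes come from the input image.

```c
#include <pthread.h>
#include <stddef.h>

/* Hypothetical dimensions and thread count for illustration only. */
#define ROWS 10
#define COLS 8
#define NTHREADS 3

static int image[ROWS][COLS];

struct job { int first_row, num_rows; };

static void *filter_rows(void *arg)
{
    struct job *j = arg;
    for (int r = j->first_row; r < j->first_row + j->num_rows; r++)
        for (int c = 0; c < COLS; c++)
            image[r][c] += 1;          /* stand-in for the real filter */
    return NULL;
}

void run_parallel_filter(void)
{
    pthread_t tid[NTHREADS];
    struct job jobs[NTHREADS];
    int per = ROWS / NTHREADS;         /* rows per thread (rounded down) */

    for (int t = 0; t < NTHREADS; t++) {
        jobs[t].first_row = t * per;
        /* the last thread picks up the leftover rows */
        jobs[t].num_rows  = (t == NTHREADS - 1) ? ROWS - t * per : per;
        pthread_create(&tid[t], NULL, filter_rows, &jobs[t]);
    }
    for (int t = 0; t < NTHREADS; t++)
        pthread_join(tid[t], NULL);
}
```

Because the sections are disjoint row ranges, the threads never write to the same pixels and no locking is needed.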


\subsection{Problems}
The output of the parallel version of the \emph{filter} should be compared to the output of the original filter. However, when comparing the outputs of the original \emph{filter} running on different platforms (32-bit and 64-bit), the results already differed. For this reason, the comparisons between pictures in the remainder of this report are not bitwise comparisons: small differences are allowed between the reference image (created with the original \emph{filter}) and the output of the application being described.
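Such a tolerant comparison can be sketched as follows; the per-channel tolerance value is a hypothetical choice, not one prescribed by the assignment.

```c
#include <stdlib.h>

/* Count channel values whose difference exceeds `tol`.  A tolerance
 * of 0 would be a bitwise comparison; a small positive tolerance
 * (hypothetical value) absorbs the 32/64-bit rounding differences
 * described above. */
int count_differing_pixels(const unsigned char *ref,
                           const unsigned char *out,
                           int n, int tol)
{
    int diff = 0;
    for (int i = 0; i < n; i++)
        if (abs((int)ref[i] - (int)out[i]) > tol)
            diff++;
    return diff;
}
```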

\section{Porting the code to the ECA-CompSoC platform}
The second step was to get the \emph{filter} application running on the remote platform. A big challenge on an embedded platform is that memory restrictions are much tighter than on a PC. In this assignment that limits the maximum size of the program, but it also affects how the image is handled: it is not possible to keep the original picture in memory, so it has to be replaced by the filtered one.

\subsection{Reducing code size}
The first version of the program did not fit in the instruction memory. This was mainly because the compiler added 64-bit emulation code, since the MicroBlaze processor only has a 32-bit FPU.

The solution to this problem was to convert the floating-point operations in the code to single precision (32-bit, \texttt{float} in C) instead of double precision (64-bit, \texttt{double} in C).

\subsection{Splitting the image}
The processor does not have direct access to the whole picture: it has to fetch the data from the frame buffer and process it. Since the DMA memory is limited to 4\,kB, which for the \emph{filter} application corresponds to a bit more than 5 rows, only five rows can be read from the frame buffer at a time.

To filter the image, five rows are first read from the frame buffer using RDMA\_0 on tile 2, and the data is copied to D\_MEM. The rows are processed and the results stored in RDMA\_0. Once the rows are processed, the data is transferred back to the frame buffer. The last two rows are kept in D\_MEM and used for the next iteration.

This solution filters 5 rows per iteration, except for the first iteration, in which only 3 rows can be processed.
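The resulting block schedule can be sketched as below. Only the bookkeeping is shown (how many output rows each iteration produces); the image height is a hypothetical value, and the real code moves the pixel data through RDMA\_0 and D\_MEM as in the figure.

```c
#define IMG_ROWS 480   /* hypothetical image height */
#define DMA_ROWS 5     /* ~4 kB DMA buffer holds 5 rows */
#define HALO     2     /* rows carried over between blocks */

/* Count how many DMA iterations are needed to filter the whole image:
 * the first iteration only produces DMA_ROWS - HALO output rows, every
 * later one produces DMA_ROWS (clamped at the bottom of the image). */
int schedule_blocks(int *rows_done)
{
    int iterations = 0;
    *rows_done = 0;
    while (*rows_done < IMG_ROWS) {
        int produced = (iterations == 0) ? DMA_ROWS - HALO : DMA_ROWS;
        if (*rows_done + produced > IMG_ROWS)
            produced = IMG_ROWS - *rows_done;
        *rows_done += produced;
        iterations++;
    }
    return iterations;
}
```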

\begin{figure*}
\centering
\includegraphics[scale=0.9]{FilteringAlgorithm.pdf}
\centering
\caption{Data flow for each filtered block: The original data is kept in D\_MEM, and the filtered data is written to the RDMA. After that the last two rows in D\_MEM are copied to the first two rows, and 5 new rows are read from the frame buffer through the RDMA. Those five rows are then copied to D\_MEM.}
\label{fig:mem_struct}
\end{figure*}


\subsection{Delay in memory operations}
After sending the last part of the filtered picture to the frame buffer, the processor called the function \emph{print\_framebuffer} to indicate to the server that it was ready. However, not all the data had been written to the frame buffer yet, so the server read a wrong image.

This problem was solved by reading data back from the frame buffer: since memory operations are sequential, the read only executes after the write has finished.
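The idea can be sketched as a read-back barrier, assuming (as on the platform's interconnect) that memory operations complete in program order. Here an ordinary array stands in for the real memory-mapped frame buffer.

```c
/* Stand-in for the memory-mapped frame buffer; `volatile` prevents
 * the compiler from optimizing the dummy read away. */
static volatile unsigned char frame_buffer[64];

/* Issue the final write, then read it back: because memory operations
 * are sequential, the read only returns once the write has retired,
 * so the buffer is guaranteed to be complete afterwards. */
unsigned char flush_last_write(int last_index, unsigned char value)
{
    frame_buffer[last_index] = value;
    return frame_buffer[last_index];   /* dummy read acts as a barrier */
}
```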

\section{Parallelizing the code on the ECA-CompSoC platform}
To make better use of the platform, the next step was to make the other two tiles also perform some of the calculations. The chosen approach was to let tile 0 and tile 1 perform the actual filtering while tile 2 only managed the data transfers. This choice was mainly based on the fact that tile 2 is the only one with access to the frame buffer where the picture is stored. The tiles communicate using the Global Shared Memory (GSM) and the synchronization registers.

This approach requires communication between the tiles, so that tiles 0 and 1 can tell tile 2 when they have finished processing an image block. Another problem that had to be solved was how to handle the border situations: the first and last blocks processed by each tile.

\subsection{Inter-tile communication}
\autoref{fig:intertile_com} shows the communication between the tiles. The GSM is divided into two queues, one for each tile. Each queue contains four slots, and each slot holds five rows of the image. Information about those slots is kept in the synchronization registers (SHsr): the state of the slot, the first row number of the slot, and the number of rows in the slot.

Tiles 0 and 1 read their synchronization registers to find the slot that contains the line number they have to filter next and whose state is \emph{Unprocessed}. When the slot is found, they filter it, send the data back, and write to the synchronization register of tile 2 that the slot is processed. Tile 2 then copies the data from the GSM to the frame buffer and replaces the slot data with new data, writing to the synchronization registers of tiles 0 and 1 that the slot contains unprocessed data, together with the starting row number and the number of rows.

Two special slot states were created for the first and last image block to be processed by a tile. Those are the border cases.
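The worker-tile side of this protocol can be sketched as the lookup below. The structure layout, state names, and queue size match the description above, but the identifiers are ours, not the platform's.

```c
/* Possible slot states; the two border states correspond to the first
 * and last block handled by a tile (names are hypothetical). */
enum slot_state { SLOT_EMPTY, SLOT_UNPROCESSED, SLOT_PROCESSED,
                  SLOT_FIRST_BLOCK, SLOT_LAST_BLOCK };

/* One entry of a tile's synchronization registers: slot state, first
 * row number, and row count. */
struct slot_info {
    volatile int state;   /* written by the other tile */
    int first_row;
    int num_rows;
};

#define QUEUE_SLOTS 4

/* Worker tile: find the slot holding row `next_row` that is marked
 * unprocessed; return its index, or -1 if none is ready yet (the
 * real code would poll the registers again). */
int find_ready_slot(struct slot_info q[QUEUE_SLOTS], int next_row)
{
    for (int s = 0; s < QUEUE_SLOTS; s++)
        if (q[s].state == SLOT_UNPROCESSED && q[s].first_row == next_row)
            return s;
    return -1;
}
```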

\begin{figure*}
\centering
\includegraphics[scale=0.9]{MemoryManagement.pdf}
\centering
\caption{Memory structures and inter-tile communication: The GSM contains job queues for tiles 0 and 1. Each queue has 4 jobs (slots). The synchronization registers contain synchronization data about those slots (slot state, first line number and number of rows).}
\label{fig:intertile_com}
\end{figure*}


\section{Optimizing performance and energy consumption}

\subsection*{Optimizations}
\begin{itemize}

\item Because of the good results achieved by parallelizing the program onto tiles 0 and 1, we extended our implementation with filtering on tile 2 as well. Since tile 2 also handles the communication with tiles 0 and 1, we made sure that tile 2 only filters when all job queues for tiles 0 and 1 are full.

\item The reference implementation split the filtering process into three stages (edge filter, blur filter and scaling/clamping). We merged all of these stages into a single loop. After this, it was no longer necessary to store the result of the edge detection; removing this code reduced the number of memory accesses.

\item The filter code used two separate filters (edge and blur) that both use a $3\times3$ coefficient kernel. We unrolled the loops that evaluate these kernels and removed all multiplications by zero. Additionally, we noticed that the edge filter could easily be changed to use integer operations instead of floating-point operations. These changes improved performance considerably and also improved the correctness of the result (fewer differing pixels).

\item Some operations used in the reference code were unnecessary and have been removed in the optimized version (e.g. clamping result values to positive numbers).

\item Because division instructions typically take more time to execute than multiplications, we replaced all divisions by a constant with multiplications by the reciprocal.
\end{itemize}
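Two of these optimizations can be illustrated as follows, with a hypothetical $3\times3$ edge kernel ($0\;{-1}\;0$ / ${-1}\;4\;{-1}$ / $0\;{-1}\;0$); the actual kernel coefficients in the filter may differ.

```c
/* Unrolled, integer-only evaluation of a 3x3 edge kernel
 * (hypothetical coefficients 0 -1 0 / -1 4 -1 / 0 -1 0): the loop
 * over the kernel is gone and the four zero coefficients are dropped,
 * leaving only five taps. */
int edge_at(const unsigned char *img, int width, int x, int y)
{
    const unsigned char *p = img + y * width + x;
    return 4 * p[0] - p[-1] - p[1] - p[-width] - p[width];
}

/* Division by a constant replaced by multiplication with the
 * precomputed reciprocal (here for a 3x3 box-blur sum). */
static const float INV9 = 1.0f / 9.0f;

float blur_scale(float sum)
{
    return sum * INV9;    /* instead of sum / 9.0f */
}
```

A multiply typically completes in far fewer cycles than a divide, so this trade-off pays off whenever the small reciprocal rounding error is acceptable.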

\section{Results}
The following table shows how the execution time was lowered throughout the project. The worst cases are
shown, i.e.\ the image that took the longest to process. The highest frequency is used (level 0).

\newpage
\begin{table}[htb]
\centering
\begin{tabular}{|p{0.2\columnwidth}|p{0.30\columnwidth}|}
\hline 
Assignment & Exec. Time (cycles)\\ 
\hline 
2 & 317.03 million\\ 
\hline 
3 & 160.84 million\\ 
\hline 
4 & 15.41 million\\ 
\hline 
\end{tabular} 
\centering
\end{table}

So the total speedup was around 20x from the naive single-core version from part 2 to the final parallel and 
optimized version.

The minimum execution times achieved for each image are shown in the following table. In addition, the 
corresponding energy consumption and the number of pixels that differed are included.

\begin{table}[htb]
\centering
\begin{tabular}{|p{0.10\columnwidth}|p{0.30\columnwidth}|p{0.15\columnwidth}|}
\hline 
Image & Exec. Time (cycles) & Pixel diff. \\ 
\hline 
1 & 14.56 mil. & 3 \\ 
\hline 
2 & 15.38 mil. & 5 \\ 
\hline 
3 & 15.41 mil. & 8 \\ 
\hline 
\end{tabular} 
\centering
\end{table}

The next table shows how the minimum energy consumption was reduced throughout the project. The results from
parts 2 and 3 are shown without any downclocking or gating, but the final lowest one (part 4) is the consumption
achieved with all cores running at frequency level 10.

\begin{table}[htb]
\centering
\begin{tabular}{|p{0.2\columnwidth}|p{0.30\columnwidth}|}
\hline 
Assignment & Energy consumption\\ 
\hline 
2 & 9.83 J\\ 
\hline 
3 & 4.99 J\\ 
\hline 
4 & 1.22 J\\ 
\hline 
\end{tabular} 
\centering
\end{table}

\autoref{fig:results_cycles} and \autoref{fig:results_energy} contain graphs that show how the number of cycles
and the energy consumption change as the deadline is gradually relaxed. The green columns represent the frequency
levels that keep the execution time beneath the maximum deadline of $2 \cdot \text{MaxET}$ (around 30 million
cycles), while the red ones result in higher execution times.


\begin{figure}[htp!]
\centering
\includegraphics[trim=50 250 50 200, clip=true, width=0.9\columnwidth]{CyclesResults.pdf}
\centering
\caption{Number of cycles for a varied execution time deadline.}
\label{fig:results_cycles}
\end{figure}

\begin{figure}[htp!]
\centering
\includegraphics[trim=50 230 50 200, clip=true, width=0.9\columnwidth]{EnergyResults.pdf}
\caption{Energy consumption for a varied execution time deadline.}
\centering
\label{fig:results_energy}
\end{figure}

So, for example, if the customer wants the system to filter an image in less than 30 million cycles while being as energy efficient as possible, the designer should choose frequency level 8. However, if energy consumption is not a big issue and the customer wants a system that filters an image as quickly as possible, the designer would of course choose the maximum frequency.

The final graph, \autoref{fig:results_product}, shows the energy-performance product as the deadline is relaxed.

\begin{figure}[htp!]
\centering
\includegraphics[trim=50 250 50 200, clip=true, width=0.9\columnwidth]{EnergyPerformanceProductResults.pdf}
\centering
\caption{Energy-performance product for a varied execution time deadline.}
\label{fig:results_product}
\end{figure}

The lowest product is obtained at the highest frequency. The effect of changing the frequency level is thus proportionally greater on the execution time: it doubles when downclocking to level 8, while the energy consumption only decreases by around 11\%.

\section*{Conclusions}
The group got hands-on experience with designing embedded software. During the design and implementation of the Instagram-like filter application many of the classic embedded systems challenges were addressed. 

The CompSoC platform provided very limited debugging support, which made finding and fixing bugs significantly harder than when developing on desktop platforms. Because hardware resources such as instruction and data memory were limited, we had to take these constraints into account in our design. Unlike in desktop programming, providing a correctly working solution was only the start of the project; it was important that the solution was also optimized for performance and energy consumption.

The solution proposed in this report takes these considerations into account and combines high performance with low energy consumption in an implementation that provides correct results while operating under the constraints imposed by the hardware of the CompSoC platform.

\end{document}
