
% \begin{figure*}[t]
% 	\centering
% 	\subfigure[Increase in WCET for different arbitration mechanisms.]{%
% 	\includegraphics[width=0.3\textwidth]{figures/wcetarbiters-crop.pdf}
% 	\label{fig:arbiters}}
% 	\subfigure[Comparison with computed WCET bounds for Priority-based arbiter]{%
% 	\includegraphics[width=0.3\textwidth]{figures/priowork-crop.pdf}
% 	\label{fig:nvprio}}
% 	\subfigure[Comparison with computed WCET bounds for work conserving arbiter]{%
% 	\includegraphics[width=0.3\textwidth]{figures/nv_work-crop.pdf}
% 	\label{fig:nvwork}}
% \caption{Simulation results}
% \label{fig:figure}
% \vspace{-20pt}
% \end{figure*}







% \begin{figure}[t!]
% \centering
% \includegraphics[scale=.25]{figures/ncores-crop.pdf}
% \caption{Priority-based arbiter: Tasks assigned across 2, 4 and 8 cores } 
% \label{fig:varyncores}
% \vspace{-10pt}
% \end{figure}


This section experimentally evaluates the proposed framework by
simulating a multi-core system running real application traces.
First, the experimental setup is explained, followed by an experiment
that demonstrates the generality of our approach by executing the
applications with three different arbiters and evaluating the accuracy
and run-time of the proposed analysis. Lastly, we experiment with
different region sizes and show how finer-grained task request-profiles
improve accuracy and reduce the run-time of the analysis.

\subsection{Experimental Setup}

The hardware platform in our experiments is based on the SimpleScalar
3.0 processor simulator~\cite{austin2002simplescalar} with separate
data and instruction caches, each with a size of 16~KB. The L2 cache
is a private unified 128~KB cache with 128~B cache lines and
an associativity of 4. The processor core is assumed to run at a frequency
of 1.6~GHz. The memory device corresponds to a 64-bit
DDR3-1600 DIMM~\cite{DDR3SPECf} running at a frequency of 800~MHz,
meaning that one memory cycle equals two processor cycles. The memory
access time is $TR=80$ processor cycles
for a request of 128~B, corresponding to an in-order DRAM
scheduler with limited pipelining of requests. This setup is similar
to contemporary COTS platforms, such as the Freescale~P4080.
The experiments consider a platform instance with 4 cores, each core running an
application from the WCET test suite~\cite{WCET2010}
as a single independent task. For each application in the benchmark,
memory-trace files were generated by running it on the experimental
platform. The traces were finally post-processed according to the
sample regions used in the experiments to deliver the task
request-profile previously presented in
Section~\ref{ssec:task_request_profiles}.
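As an illustration of this post-processing step, the following minimal sketch bins a trace of request timestamps into fixed-size regions to obtain a per-region request count. The function name, the input format (a list of timestamps in processor cycles), and the 2000-cycle region size are illustrative assumptions, not the exact trace format used in our toolchain.

```python
# Sketch: bin a memory-request trace into fixed-size regions to build a
# task request-profile (number of requests issued per region).
# Assumes the trace is a list of request timestamps in processor cycles;
# names and the default region size are illustrative.

def build_request_profile(trace_cycles, region_size=2000):
    """Return a list where entry r counts the requests issued in region r."""
    if not trace_cycles:
        return []
    n_regions = max(trace_cycles) // region_size + 1
    profile = [0] * n_regions
    for t in trace_cycles:
        profile[t // region_size] += 1
    return profile

# Example: three requests fall in region 0, one in region 2.
print(build_request_profile([120, 950, 1800, 4100]))  # [3, 0, 1]
```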
%
%We categorized the tasks as light, heavy and very heavy based on their request densities ($C_i/\NbReqPerTask{i}$).



\subsection{Application to Different Arbitration Mechanisms}

The objective of this experiment is to demonstrate the generality of our approach by applying it to three commonly used arbiters: fixed-priority, an unspecified work-conserving arbiter, and TDM.
For each task, we determine the interference from other tasks and compute the increase in WCET for each of the three arbiters using
a region size of 2000 cycles. 
We also examine the run-time of the proposed analysis for the different arbiters.
To get a representative sample of applications from the WCET benchmark suite, we chose the two most memory-intensive (\emph{minmax} and \emph{lcdnum}) and the two least memory-intensive (\emph{lms} and \emph{adpcm}) applications.
The results of the experiment are shown in Figure~\ref{fig:arbiters}, where tasks are arranged in descending 
order of priorities (\emph{minmax} has the highest priority) for the case of fixed-priority arbitration. 
As expected, the task with the highest priority experiences minimal interference (an increase factor of 1x) from the other tasks. 
For the lower priority tasks, the interference per memory access increases while the memory intensity decreases, two counteracting
effects whose balance causes
\emph{lcdnum} (priority 2) to experience the largest increase in WCET with fixed-priority arbitration.

\begin{figure}[htb]
\centering
\includegraphics[height=3.5cm,width=\columnwidth]{figures/wcetarbiters-crop.pdf}
\caption{Increase in WCET for different arbitration mechanisms.} 
\label{fig:arbiters}
\vspace{-10pt}
\end{figure}

%
For the unspecified work-conserving arbiter, the requests of a given task may be blocked by all requests from all concurrently executing tasks. Such a mechanism hence leads to a very pessimistic WCET, in this case an average increase in WCET of approximately 9 times for each task. Note that this arbitration mechanism is equivalent to fixed-priority arbitration where every task is assumed to have the lowest priority. This can be seen in Figure~\ref{fig:arbiters}, where the lowest priority task, \emph{adpcm}, has the same WCET with fixed-priority arbitration and the unspecified work-conserving arbiter.
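This equivalence can be made concrete with a small sketch of the interfering task sets under the two arbiters. The task names and priority order are those used in this experiment; the helper functions are purely illustrative.

```python
# Sketch: interfering task sets under the two arbiters.
# Tasks are listed in descending priority order, as in the experiment.
tasks = ["minmax", "lcdnum", "lms", "adpcm"]

def interferers_fixed_priority(task):
    """Under fixed-priority arbitration, only higher-priority tasks interfere."""
    return set(tasks[:tasks.index(task)])

def interferers_work_conserving(task):
    """The unspecified work-conserving arbiter pessimistically assumes
    interference from every other task."""
    return set(tasks) - {task}

# The lowest-priority task has the same interfering set under both arbiters,
# which is why adpcm's WCET coincides for the two in Figure 2.
print(interferers_fixed_priority("adpcm") == interferers_work_conserving("adpcm"))
print(interferers_fixed_priority("minmax"))  # set(): highest priority, no interference
```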

Unlike the previous two arbiters, TDM is neither priority-based nor work-conserving. Here, it is configured with a frame size of 4 and each core is allocated one slot. This basic fair configuration statically ensures periodic access to the memory, but its non-work-conserving nature leads to 
poor performance, as allocated slots may be left unused despite pending requests from other tasks. Since this arbiter statically offers equal shares of the memory bandwidth, we see a direct relation between the memory intensity of a task and the increase in WCET.

Considering the run-time of the analysis, fixed-priority arbitration took 12 minutes to complete for all
tasks. The analysis of higher-priority tasks completes faster than that of lower-priority ones, since they are less impacted by interference, resulting in fewer possible slot assignments. This is reflected in the analysis of the unspecified work-conserving arbiter, where all tasks can suffer interference from all other tasks, increasing the analysis time to approximately 35 minutes.
In contrast, the TDM arbiter is non-work-conserving and thereby completely independent of other tasks, enabling
the computation of $\Tmin{i}{.}$ and $\Tmax{i}{.}$ in constant time. Furthermore, small TDM frame sizes provide
relatively few possible slot assignments, reducing the total
analysis time to less than 5 minutes.
%
While running the analysis, 
we furthermore instrumented the algorithm to evaluate the benefits of the optimization proposed in Section~\ref{sec:wc_assignment} (List reduction).
The result of this evaluation showed that the hit-ratio ranged from 20\% to 40\% (with an average of 30\%), 
which considerably reduces the run-time for cases where the number of candidate slots is very high.
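The constant-time computation of the TDM bounds mentioned above can be illustrated with a minimal sketch. This is a deliberately simplified model, assuming a frame of $F$ equal slots of $TR$ cycles each in which a request can at worst just miss its own slot and at best arrive exactly at it; the exact definitions of $\Tmin{i}{.}$ and $\Tmax{i}{.}$ used by our analysis may differ.

```python
# Sketch: constant-time delay bounds for n back-to-back requests under TDM.
# Simplified model: a frame of F equal slots of TR cycles, one slot per core,
# with consecutive requests of a task served in consecutive frames.
TR = 80  # service time of one 128 B request, in processor cycles
F = 4    # TDM frame size (slots), one slot per core

def tmax_tdm(n, tr=TR, f=F):
    # Worst case: each request just misses its slot and waits a full frame.
    return n * f * tr

def tmin_tdm(n, tr=TR, f=F):
    # Best case: the first request hits its slot exactly; each subsequent
    # back-to-back request waits one full frame for its next slot.
    return tr + (n - 1) * f * tr

# No other task influences these bounds, hence the constant-time evaluation.
print(tmax_tdm(1), tmin_tdm(1))  # 320 80
```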


\subsection{Impact of Region Size}

We conclude by experimentally evaluating the impact of the region size.  
To this end, we
rerun the previous experiment with the fixed-priority arbiter using
both smaller and larger region sizes. Four different sizes are used:
1000, 2000, 3000 and 5000 cycles, where larger region sizes imply
fewer regions and coarser-grained task-request profiles.
The results of the experiment are shown in Figure~\ref{fig:regions}.
Note that the highest priority task, \emph{minmax}, is not shown in
the figure, as it suffers the same negligible interference across all
region sizes.  For the other tasks, the results confirm the intuition
that smaller regions result in tighter WCET bounds, since finer-grained
task-request profiles eliminate much of the uncertainty.  In terms of
run-time of the analysis, the results reflect that smaller region
sizes imply fewer candidate slots, reducing run-time. To quantify
this claim, the total analysis time was 4, 12, 34 and 125
minutes for region sizes of 1k, 2k, 3k and 5k cycles, respectively.

\begin{figure}[htb]
\centering
\includegraphics[height=4cm,width=\columnwidth]{figures/rbcrop.pdf}
\caption{Increase in WCET for different region sizes (in cycles).} 
\label{fig:regions}
\vspace{-10pt}
\end{figure}


% 
% We conclude by demonstrating the accuracy of our analysis by comparing it to the naive approach of computing the bound by 
% multiplying the number of requests by the maximum delay for one request, $\Tmax{i}{1}$. Figure~\ref{fig:naivecomp} shows the tightness gained by using the proposed approach, which methodically assigns different delays to each request and provides tighter bounds for both the arbitration mechanisms. The tightness is visibly reflected for the unspecified work-conserving arbiter, for which the computed $\Tmax{i}{1}$ is very high for all tasks. Similar results were also observed for the TDM arbiter.
% \begin{center}
% \begin{footnotesize}
% \begin{table}[h!]
% \caption{Comparison against naive approach for fixed-priority and the unspecified work-conserving arbiter} 
% \begin{tabular}{|c|c|c|c|c|c|l}
%  \hline
% \emph{Benchmark}  &  \emph{FP proposed}  & \emph{FP naive} & \emph{WorkCon proposed}  & \emph{WorkCon naive} \\  \hline
% edn  (1)   &      1.003   &  1.19  & 4.38 &   120.92  \\  \hline
% fft (2)    &      5.55       &  15.96  &10.30 & 284.8 \\  \hline
% fir (3)    &      8.05&150.04&12.41& 287.84   \\  \hline
% matmult (4)&      6.8& 76.24 & 6.8& 76.24  \\  \hline
% \end{tabular}
% \label{tab:naivecomp} 
% \end{table}
% \end{footnotesize}
% \end{center}
% %\vspace{-20pt}
% 
% \subsection{Comparison with Naive Approach}
% 
% The objective of this experiment was to evaluate the value of WCET computed by our algorithm versus the naive approach.
% As described earlier, the naive approach computes the upper bound on the delay by multiplying the number of requests by the maximum delay for one request, which is $\Tmax{i}{1}$. 
% Table~\ref{tab:naivecomp} compares the increase (ratio) by using our approach, which methodically assigns different delays to each request, against the naive approach. 
% Each value in the table represents the increase factor in WCET by employing both the methods. The tightness is visibly reflected for the unspecified work-conserving arbiter, for which the computed $\Tmax{i}{1}$ is very high for all tasks. Similar results were also observed for the TDM based arbiter. 


 
