\newcommand{\wmax}{w_{\max}}

%\subsection{General Framework}

The memory bus is a shared resource, which means that any access to it by a given task may be deferred because of concurrent accesses from other tasks.
To estimate the overall delay that a task can incur due to contention for the shared resource, a standard approach is to derive an upper bound on the delay that a \emph{single} access may suffer. This upper bound is computed by constructing a worst-case scenario in which every competing task gathers all its accesses to the resource within the shortest possible time window, creating a burst of accesses concentrated in time and occurring exactly when the access from the analyzed task occurs, thereby inducing the maximum delay. The overall delay that a \emph{sequence} of accesses may suffer is then computed by assuming that each access to the resource incurs this precomputed maximum delay. This assumption is clearly not valid, since the other tasks keep progressing in their execution, alternating between computation and memory-fetch phases, and do not congest the memory bus at all times.
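To make the pessimism concrete, consider a toy calculation with purely hypothetical numbers (not taken from any real platform): charging every request the single-access worst case inflates the estimated delay regardless of the actual bus load.

```python
# Hypothetical numbers, purely illustrative of the standard approach.
n_requests = 100   # memory requests issued by the analyzed task
w_max = 10         # worst-case delay of a single request (cycles)

# The standard approach charges every request the single-access worst case:
extra_delay = n_requests * w_max
print(extra_delay)  # → 1000 cycles, even if the bus is mostly idle
```

In practice the interfering cores alternate between computation and memory phases, so most of these 1000 cycles of charged delay never materialize.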


In this paper, we propose an alternative approach based on a new modeling framework. Instead of computing a worst-case scenario for a single access to the resource and then applying that scenario to all the requests of the analyzed task, we model the overall availability of the resource to the analyzed task. We then leverage the expressiveness of this new model to derive an upper bound on the \emph{cumulative} delay that a \emph{sequence} of requests may incur.

Our model captures the best-case and worst-case availability of a shared resource. It is based on the arbiter configuration and the coarse-grained memory-access information provided by the task request-profiles. Specifically, for a given task $\tau_i$ under analysis and any positive integer $j$, we compute two functions $\Tmin{i}{j}$ and $\Tmax{i}{j}$ that give the \emph{earliest} and \emph{latest} instants at which the bus can be available to $\tau_i$ for the $j$'th time, i.e., the \emph{earliest} and \emph{latest} instants of the $j$'th bus slot available to $\tau_i$. If $\tau_i$ runs in isolation, there are no competing requests for the bus, which implies that the bus is always available to $\tau_i$ and we have $\Tmin{i}{j} = \Tmax{i}{j} = j$. Otherwise, we have $\Tmin{i}{j} < \Tmax{i}{j}$. These two functions form what we call the \emph{bus availability model} $\BusAvailability{i} = \left\langle \Tmin{i}{}, \Tmax{i}{} \right\rangle$ of task $\tau_i$. This model can be computed for any predictable resource and a wide range of arbitration policies. Next, we demonstrate the computation for two extreme cases: a non-work-conserving TDM arbiter and a work-conserving fixed-priority arbiter.




\subsection{Non-Work-Conserving TDM Arbitration}

A TDM arbiter works by periodically repeating a schedule, or frame,
of fixed size $\frameSize$. Each core $\pi_{i}$ is allocated a
number of slots $\slots_{i}$ in the frame at design time, such that
$\sum_{\pi_{i}} \slots_{i} \leq \frameSize$. There are different
policies for distributing the slots allocated to a core within the TDM
frame; for simplicity, we consider the case where the slots of a core
are assigned contiguously. An example of a TDM frame with a contiguous
allocation, together with the associated terminology, is illustrated in
Figure~\ref{fig:tdm}.

\begin{figure}[htb]
\centering
\includegraphics[width=0.55\columnwidth]{figures/tdm_centralized.pdf}
\caption{TDM frame with 7 slots using a contiguous slot allocation per core.}
\label{fig:tdm}
\end{figure} 

We consider a non-work-conserving instance of the TDM arbiter, which
means that requests from a core are only scheduled during slots
allocated to that core. Empty slots or slots allocated to other cores
without pending requests are hence not utilized. This type of policy
makes the timing behavior of memory requests of tasks scheduled on
different cores completely independent. As a result, only the
configuration of the arbiter has to be considered when determining
$\Tmin{}{}$ and $\Tmax{}{}$. For non-work-conserving TDM arbitration with a contiguous slot
allocation, $\Tmin{}{}$ and $\Tmax{}{}$ are derived according to
Equations~\eqref{eq:tmin} and \eqref{eq:tmax}, respectively. 

\begin{eqnarray}
\label{eq:tmin} \Tmin{i}{k} & = & \left\lfloor \frac{k - 1}{\slots_{i}} \right\rfloor \times \frameSize + ((k-1)\mod \slots_{i}) \\
\label{eq:tmax} \Tmax{i}{k} & = & \Tmin{i}{k} + \frameSize - \slots_{i} + 1
\end{eqnarray}

The first
term in the computation of $\Tmin{}{}$ in Equation~\eqref{eq:tmin}
corresponds to the number of full iterations of the TDM frame required
to serve the requests preceding the $k$'th, and the second term gives
the number of remaining slots required after these iterations.
The computation of $\Tmax{}{}$
is similar, except that it adds $\frameSize - \slots_{i} + 1$
additional slots to account for a release with maximum misalignment
with respect to the set of contiguous slots allocated to the core in
the frame.
Note that these equations also cover non-work-conserving round-robin
arbitration, which is a special case of TDM where $\frameSize$ equals
the number of cores sharing the bus, $m$, and $\forall \tau_{i} \;
\slots_{i} = 1$.  Work-conserving versions of both these arbitration
policies can be derived by additionally considering the task
request-profiles, although this is omitted for brevity.
Figure~\ref{fig:tdm} graphically illustrates the arrival times and
waiting times corresponding to $\Tmin{1}{1}$ and $\Tmax{1}{1}$.
As seen in the figure, $\Tmin{1}{1}$ is achieved for a request
that arrives exactly at the beginning of either of the two slots
allocated to its core, and $\Tmax{1}{1}$ for a request arriving just
after the last slot allocated to its core has been left idle.
For this particular arbitration policy, the best-case and worst-case
arrivals with respect to the TDM frame are the same for any value
of $k$, although this does not hold in general.
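As a sanity check, Equations~\eqref{eq:tmin} and \eqref{eq:tmax} can be evaluated directly. The following minimal Python sketch is purely illustrative, assuming the configuration of Figure~\ref{fig:tdm}: a frame of $\frameSize = 7$ slots with $\slots_{i} = 2$ contiguous slots for the analyzed core.

```python
def tmin_tdm(k, frame_size, slots):
    # Eq. (tmin): full TDM-frame iterations before the k'th allocated
    # slot, plus the offset within the contiguous allocation.
    return ((k - 1) // slots) * frame_size + ((k - 1) % slots)

def tmax_tdm(k, frame_size, slots):
    # Eq. (tmax): add the maximum misalignment of the release with
    # respect to the contiguous slots of the core.
    return tmin_tdm(k, frame_size, slots) + frame_size - slots + 1

# Frame of 7 slots, 2 contiguous slots for the analyzed core (Figure tdm).
print([tmin_tdm(k, 7, 2) for k in range(1, 5)])  # → [0, 1, 7, 8]
print([tmax_tdm(k, 7, 2) for k in range(1, 5)])  # → [6, 7, 13, 14]
```

Note how the gap $\Tmax{}{} - \Tmin{}{} = \frameSize - \slots_{i} + 1$ is constant for every $k$, reflecting the observation above that the best-case and worst-case arrivals are the same for any value of $k$ under this policy.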


\subsection{Work-Conserving Fixed-Priority Arbitration}

In the context of bus arbitration policies, one of the challenges with existing COTS-based multi-core systems is that the front-side bus neither recognizes nor respects task priorities. This is because the bus is generally designed to enhance average-case performance and is not tailored for real-time systems. This can lead to a scenario similar to priority inversion, in which requests from higher-priority tasks are delayed by requests from lower-priority tasks on the bus.
Although the scheduler enforces these priorities when allocating the processing element (CPU) to tasks, the priorities are not passed on to shared hardware resources such as the bus and the memory controllers, which have their own scheduling policies.
This problem has been addressed in research by making the priorities of priority-driven arbiters software-programmable, either directly~\cite{Akesson09dsd} or indirectly by tagging each request with its priority~\cite{miao}.
Assuming that the bus is designed according to one of these strategies, we now derive a bus-availability model for a fixed-priority arbiter.

\subsubsection{$\PCRE^{\min}$ and $\PCRE^{\max}$}

Despite the uncertainty in the arrival patterns of the requests, we can determine lower and upper bounds on the cumulative number of requests that the tasks of higher priority than $\tau_i$ scheduled on a given \emph{core} $\pi_q$ may inject into the bus. These bounds are given by the functions $\PCRE^{\min}_q(i,t)$ and $\PCRE^{\max}_q(i,t)$, and are computed using the methods in~\cite{Icess12}. The rationale behind these methods is to pack the tasks of higher priority than $\tau_i$, ordered by request density (number of requests per time unit), into an interval of duration $t$ in such a manner that the minimum (respectively, maximum) number of requests is obtained, while conforming to the task arrival rates.
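While the exact density-packing computation of~\cite{Icess12} is beyond the scope of this section, a coarser bound in the same spirit can be sketched with the classic request-bound function. The sketch below is an assumption-laden illustration, not the cited method: it hypothetically models each higher-priority task on core $\pi_q$ as sporadic with a minimum inter-arrival time $T_j$ and at most $R_j$ requests per job.

```python
import math

def pcre_max_coarse(hp_tasks, t):
    # hp_tasks: list of (min_inter_arrival, max_requests_per_job) pairs
    # for the tasks of higher priority than tau_i on core pi_q
    # (hypothetical parameters, not the task model of the cited work).
    # Classic request-bound function: at most ceil(t / T_j) jobs of a
    # sporadic task can overlap an interval of length t, each issuing
    # at most R_j requests.
    if t <= 0:
        return 0
    return sum(math.ceil(t / T) * R for (T, R) in hp_tasks)

# Two hypothetical higher-priority tasks: (T=10, R=3) and (T=5, R=1).
print(pcre_max_coarse([(10, 3), (5, 1)], 10))  # → 1*3 + 2*1 = 5
```

The density-packing approach of~\cite{Icess12} yields tighter bounds than this coarse sketch, since it also exploits the ordering of tasks by request density.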

\subsubsection{Computation of $\Tmin{i}{k}$ and $\Tmax{i}{k}$}
%The $\Tmin{i}{k}$ and $\Tmax{i}{k}$ curves represent the earliest and the latest time at which a request can avail the kth free slot of the bus. 
When $\tau_i$ runs in isolation, in the absence of any competing requests for the bus, the bus is always available to $\tau_i$ and the $\Tmin{i}{k}$ and $\Tmax{i}{k}$ curves coincide.
In the presence of interfering requests, whose number can vary between $\PCRE^{\min}_q(i,t)$ and $\PCRE^{\max}_q(i,t)$ in a time interval of duration $t$, we derive the corresponding earliest and latest times at which requests from task $\tau_i$ can be granted the bus. We let $\TR$ be an upper bound on the time to access the memory over the shared memory bus; correspondingly, each bus slot has duration $\TR$.
Then, we compute $\Tmin{i}{k}$ and $\Tmax{i}{k}$ as:

\vspace{-15pt}
\begin{footnotesize}
\begin{eqnarray}
\label{eq:TminGen}\Tmin{i}{k} & = & \min_{t \ge 0} \{ t \mid t - (\sum_{\pi_q \neq \pi_p} \PCRE_q^{\min}(i,t) \times \TR) = k \times \TR \} \\
\label{eq:TmaxGen}\Tmax{i}{k} & = & \min_{t \ge 0} \{ t \mid t - (\sum_{\pi_q \neq \pi_p} \PCRE_q^{\max}(i,t) \times \TR) = k \times \TR \}
\end{eqnarray}
\end{footnotesize}
From the perspective of the analyzed task $\tau_i$ executing on core $\pi_p$, the bus can be viewed as a resource with two alternating phases: a busy phase, in which it serves the requests from the other cores, and an idle phase, which task $\tau_i$ may use. Equation~\eqref{eq:TmaxGen} can be interpreted as follows: scan the timeline to identify the earliest time instant at which the (continuous stream of) requests from the other cores ($\neq \pi_p$) have been served and $k$ free slots have been detected. When the $k$'th slot is free, the time $t$ exceeds the time spent servicing the interfering requests (given by the summation term) by exactly $k \times \TR$.
\paragraph*{Example}
For simplicity, assume $\TR = 1$ and visualize the bus slots as (R,\,R,\,R,\,R,\,Z,\,R,\,R,\,Z,\,Z,\,R), where R denotes a slot serving a request issued by another core and Z denotes an idle slot. Writing the bus state as tuples $(t, \sum_{\pi_q \neq \pi_p} \PCRE_q^{\max}(i,t) \times \TR)$, the above sequence corresponds to
$\{(1,1),(2,2),(3,3),(4,4),(5,4),(6,5),(7,6),(8,6),(9,6),(10,7)\}$. To find the second free slot ($k=2$), we scan this timeline for the first entry in which the difference between the two components equals $2$. This occurs in the tuple $(8,6)$, which corresponds to time $8$, and thus $\Tmax{i}{2}=8$.
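The timeline scan behind Equation~\eqref{eq:TmaxGen} can be sketched in a few lines of Python. The busy/idle slot sequence below is illustrative only (it is not derived from a real request profile), and $\TR = 1$ is assumed for simplicity.

```python
def tmax_scan(busy, k, tr=1):
    # Scan the timeline (cf. Eq. TmaxGen with TR = tr): return the
    # earliest slot t at which t minus the cumulative busy time equals
    # k * tr, i.e. the time of the k'th free slot under interference.
    served = 0
    for t, slot_busy in enumerate(busy, start=1):
        if slot_busy:
            served += tr
        if t - served == k * tr:
            return t
    return None  # fewer than k free slots in the modeled horizon

# Illustrative slot sequence: 1 = busy serving another core, 0 = idle.
slots = [1, 1, 1, 1, 0, 1, 1, 0, 0, 1]
print(tmax_scan(slots, 2))  # → 8
```

For $k=2$ the scan returns time $8$: the first two idle slots occur at times $5$ and $8$, and at $t=8$ the cumulative busy time is $6$, so $t - 6 = 2 = k \times \TR$.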

As shown in this section, the $\Tmin{i}{k}$ and $\Tmax{i}{k}$ functions are arbitration-dependent and can be computed for different arbiters (TDM, round-robin, fixed-priority, and FIFO, the last of which is omitted here).
They serve as input to the subsequent blocks of the proposed framework, which compute the increased execution time based on the model. In contrast, the methods described in the following sections are independent of the arbitration mechanism.
