This section introduces the system model used in this paper. 
First, we present the platform model, followed by a characterization
of tasks and their corresponding memory profiles. We then explain
the assumptions on the task scheduler, before arriving at the problem
statement of this paper.

\subsection{Platform Model}
\label{ssec:platform_model}

The approach proposed in this work assumes a general multi-core platform.
%, illustrated in Figure~\ref{fig:platform_model}. 
The platform $\pi$
contains $m$ cores denoted by $\pi_1, \pi_2, \ldots, \pi_m$. It is
assumed that there is no cache memory shared between them 
%(as in the MPC8641D processor from Freescale~\cite{MPC8641}) 
or all levels of
shared cache, if present, are disabled or partitioned. This assumption of a private or partitioned cache aligns with the certification recommendations for hard real-time systems~\cite{IEC61508}. All cores communicate with memory through the same shared bus, which we refer to as the memory bus (MB). Contention between the cores is resolved by the arbitration policy of the memory bus, which depends on the particular platform: fixed-priority arbitration may be used in platforms with diverse response-time requirements, TDM in platforms that require robust partitioning between applications, and round-robin when a simple notion of fairness between cores suffices.
%ISO26262-4

Bounds on the number of memory accesses are obtained either by static analysis or by measurements using performance counters~\cite{Sprunt} that
sample the number of last-level cache misses (i.e., bus requests) issued by the task at regular intervals.
% To focus on requests that are
% generated by cache misses only, we assume that any hardware
% prefetching mechanism is disabled in the processor\footnote{Earlier works in WCET analysis have overlooked mentioning this assumption but since most multi-core processors feature this, it \emph{must} be highlighted}. 
% Turning off this mechanism reduces the unpredictability introduced by 
% speculative prefetches, as such prefetches generate additional memory requests over
% the bus at arbitrary times (beyond programmer control): these extra requests consume bandwidth and contribute to the interfere with the other tasks. 
Finally, we assume that a core stalls after issuing a request until that request is served. Consequently,
a given core can have at most one outstanding request at any point in time.
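For intuition, the interplay between stall-on-miss cores and round-robin bus arbitration can be sketched as a small discrete-time simulation. This is an illustrative sketch only: the function name, the fixed one-slot service latency per request, and the representation of each core's demand as a simple request count are our assumptions, not part of the model.

```python
# Illustrative sketch: m cores share one memory bus under round-robin
# arbitration; a core that issues a request stalls until it is served.
# Assumption (not from the model): each request occupies the bus for
# exactly one time slot; pending[p] is the number of requests core p issues.

def simulate_round_robin(pending):
    """Return, per core, the time slot at which its last request is served."""
    m = len(pending)
    finish = [0] * m
    t = 0          # current bus time slot
    turn = 0       # index of the core whose turn it is
    remaining = list(pending)
    while any(r > 0 for r in remaining):
        # A core with no pending request forfeits its slot immediately,
        # so the pointer advances to the next core that has a request.
        while remaining[turn] == 0:
            turn = (turn + 1) % m
        remaining[turn] -= 1          # serve one request (one bus slot)
        t += 1
        finish[turn] = t
        turn = (turn + 1) % m
    return finish
```

With `pending = [2, 1]`, core 0 is served in slots 1 and 3 and core 1 in slot 2, reflecting the interleaving that the arbitration imposes while each stalled core waits its turn.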
%Having temporal and spatial isolation between subsystems is a key requirement for real-time embedded systems to ensure composability and timing predictability and we believe that the above assumptions are not restrictive since they are motivated by the problem domain itself. 
 
%  \begin{figure}[t]
%  \centering
%  \includegraphics[scale=0.35]{figures/platform_model.jpg}
%  \caption{Illustration of the platform model.}
%  \label{fig:platform_model}
%  \vspace{-10pt}
%  \end{figure}

\subsection{Task Model} 
\label{ssec:task_model}
The workload is modeled by a set of periodic, constrained-deadline tasks, where each task $\tau_i$ is characterized by three parameters: $C_i$, $T_i$, and $D_i \leq T_i$. The parameter $C_i$ denotes an upper bound on the execution time of task $\tau_i$ when it executes uninterrupted in \emph{isolation}, i.e., with no contention on the memory bus. $T_i$ denotes the exact interval of time (called the period) between two consecutive activations of $\tau_i$, and $D_i$ is the deadline of the task.
% Formally, each task $\tau_i$ releases a (potentially infinite) sequence of \emph{jobs}, with the first job released at time $0$ and subsequent jobs released exactly $T_i$ time units apart. Each job released by a task $\tau_i$ has to execute for at most $C_i$ time units within $D_i$ time units from its release in order to meet its deadline. 
The parameter $C_i$ can be computed by well-known techniques in WCET analysis~\cite{wcet-summary}. This work focuses on computing $C_i'$, which denotes an upper bound on the execution time when $\tau_i$ executes \emph{with} contention on the memory bus, i.e., when the co-scheduled tasks are running on the other cores. Clearly, the value of $C_i'$ is not an inherent property of $\tau_i$ but depends on the arbitration policy on the MB and on the memory request pattern of the tasks executing concurrently on the other cores during its execution. 

\subsection{Task Request-Profiles}
\label{ssec:task_request_profiles}

Given the complexity of the tasks' code, it may not be practically feasible to determine, before run-time, the exact time instants at which tasks issue requests. However, there exist tools to compute the maximum number of requests that a task can issue in a given period of time when the task runs in isolation. These tools are based on measurements~\cite{Jian,Icess11} or static analysis techniques. The method proposed in~\cite{Icess11} takes an input parameter $\SamplingRegionSize{i}$ and divides the execution of each task $\tau_i$ into $x_i = \frac{C_i}{\SamplingRegionSize{i}}$ sequential logical sampling regions, where $\SamplingRegionSize{i}$ is the length of each region. The maximum number of memory requests issued by the end of each region of task $\tau_i$ is recorded after running the task a significant number of times over different inputs. The method returns a set $\Regions{i} = \{\NbReqPerRegion{i}{1}, \NbReqPerRegion{i}{2}, \ldots, \NbReqPerRegion{i}{x_i}\}$, where each $\NbReqPerRegion{i}{k}$ is an upper bound on the number of requests that task $\tau_i$ can generate in its $k$'th logical region. Note that $\sum_{k=1}^{x_i} \NbReqPerRegion{i}{k}$ denotes the maximum number of requests that task $\tau_i$ can generate during the entire execution of one of its jobs; for simplicity, we sometimes use the notation $\NbReqPerTask{i}$ to denote this value.
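The measurement-based construction of the per-region bounds can be mimicked as follows: for each logical region, take the maximum miss count observed across all measurement runs. This is a sketch under our own naming; `runs` is a hypothetical list of per-region counter samples, one inner list per run, and the exact procedure of~\cite{Icess11} may differ in detail.

```python
def request_profile(runs):
    """Per-region upper bounds: element-wise maximum over measurement runs.

    runs -- list of measurement runs, each a list of x_i per-region request
            counts sampled from the last-level-cache-miss counter.
    Returns (regions, total) mirroring R_i and MR_i in the text.
    """
    assert runs and all(len(r) == len(runs[0]) for r in runs)
    regions = [max(col) for col in zip(*runs)]   # one bound per logical region
    total = sum(regions)                         # bound on requests per job
    return regions, total
```

Note that summing per-region maxima can over-approximate the total, since the worst case of every region need not occur in the same run; this is the price of a safe per-region bound.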

We denote by $\Requests{i} = \{\request{i}{1}, \request{i}{2}, \ldots, \request{i}{\NbReqPerTask{i}}\}$, the set of requests that $\tau_i$ can generate during its execution. Each request $\request{i}{k}$ is modeled by the tuple $\left\langle \reqrel{i}{k}, \reqserv{i}{k} \right\rangle$, where $\reqrel{i}{k}$ and $\reqserv{i}{k}$ denote the release and service time of request $\request{i}{k}$ during $\tau_i$'s execution, respectively. As mentioned above, these values cannot be determined at design time.

To summarize, each task $\tau_i$ has a request profile denoted $\Treqprof{i} = \{ \SamplingRegionSize{i}, \Regions{i}, \Requests{i}\}$, where $\SamplingRegionSize{i}$ and $\Regions{i}$ are assumed to be given at design time, and $\Requests{i}$ has to be computed by our approach in such a way that the cumulative waiting time $\sum_{k=1}^{\NbReqPerTask{i}} (\reqserv{i}{k} - \reqrel{i}{k})$ is maximized.
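The objective value follows directly from the request tuples $\left\langle \reqrel{i}{k}, \reqserv{i}{k} \right\rangle$. A minimal sketch, representing each request as a plain (release, service) pair:

```python
def cumulative_waiting_time(requests):
    """Sum of (service - release) over all requests <rel, serv> of a task.

    requests -- iterable of (release_time, service_time) pairs; a request
    served immediately upon release contributes zero waiting time.
    """
    return sum(serv - rel for rel, serv in requests)
```

For example, requests released at times 0, 5, and 7 and served at times 2, 5, and 10 wait 2, 0, and 3 time units respectively, for a cumulative waiting time of 5.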

\subsection{Scheduler Specification}
We consider a partitioned task-assignment scheme in which each task is assigned to a core at design time and is not allowed to migrate to another core at run time (a fully partitioned, non-migrative scheduling scheme).
%We denote by $\bar{\pi}(i)$ the set of $m - 1$ cores to which task $\tau_i$ is \textit{not} assigned (called the ``interfering cores'' of task $\tau_i$). 
We consider a non-preemptive scheduler and hence do not deal with cache-related preemption and task-switching overheads. We further assume a non-work-conserving scheduler: whenever a task completes earlier than its WCET on its assigned core $\pi_p$, the scheduler idles $\pi_p$ up to the theoretical WCET of the task. This assumption ensures that the number of bus requests within any time window computed at design time is not exceeded
at run time due to the early completion of a task and the resulting early execution of subsequent tasks.
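The non-work-conserving rule implies that, on a given core, the start time of each job depends only on the WCETs of its predecessors, never on their actual (possibly shorter) execution times. A sketch under our own simplifying assumptions: a single core running a non-preemptive FIFO sequence of jobs, each given by a release time and a WCET.

```python
def start_times(release_times, wcets):
    """Start time of each job on one core under the non-work-conserving rule.

    The scheduler holds the core until the theoretical WCET of the current
    job, even if the job finishes early; actual execution times therefore
    never appear in the computation, which is exactly why design-time
    bounds on requests per time window remain valid at run time.
    """
    starts, core_free = [], 0
    for rel, wcet in zip(release_times, wcets):
        start = max(rel, core_free)   # wait for release and for the core
        starts.append(start)
        core_free = start + wcet      # idle the core up to the WCET
    return starts
```

For instance, with releases at 0, 2, and 10 and WCETs 5, 3, and 1, the second job starts at time 5 even if the first one actually finished at time 1.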

%\emph{This important specification has been missing in corresponding research for timing analysis~\cite{Ernst} or existing non-static scheduler based approaches and ignoring it could result in unsafe and wrong WCET estimates}. 
%The effect of jitter which is inherent to any timing-based design is not the focus of this paper and thus will not be handled explicitly in the theory that follows. 

\subsection{Problem Statement}
The problem addressed in this paper is stated as follows: given a
multi-core platform conforming to the described model, a task $\tau_i \in \tau$ with WCET $C_i$, and the request profiles of all
tasks, compute the WCET $C_i'$ of $\tau_i$ when it executes concurrently with the other
tasks. In essence, the problem consists in finding a tight upper bound
on the \emph{cumulative delay} incurred by all the requests of
$\tau_i$ due to contention for the memory bus.
