
Memory contention analysis has received considerable attention in recent years.
These efforts can be classified into two categories: 1) approaches
that modify the hardware or software of the system to enable or improve analysis, and 2)
approaches that analyze a given system as-is. We discuss each of these
in turn.

On the hardware side, a number of memory controllers have been designed
specifically for real-time systems and proposed together with
corresponding analyses that bound the WCRT of memory
requests~\cite{Akesson11DATE,Reineke11,Paol,Shah12DATE,wu2013worst}. These analyses
benefit from full knowledge of the internals of the memory
controller, such as its page policy,
transaction scheduler, and DRAM command scheduler, and
exploit this information to produce tight bounds. On the software side,
servers with memory budgets, built into the operating system, have been proposed
to limit the memory interference from tasks executing on other
cores~\cite{yun12memory, behnam2012memory}, enabling the interference to be bounded by enforcement rather than characterization.
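As a simple illustration of such an enforcement-based bound (the symbols below are our own and not drawn from the cited works): if each remote core is allotted a budget of at most $Q$ memory accesses per replenishment period $P$, then in any time window of length $t$, which can overlap at most $\lfloor t/P \rfloor + 1$ periods, the number of interfering accesses from that core is bounded by
\[
N(t) \;\le\; \left( \left\lfloor \frac{t}{P} \right\rfloor + 1 \right) Q,
\]
independently of how the interfering tasks actually behave.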

Several approaches have been proposed for memory contention analysis
in a given multi-core platform. Similar to our work, most analyses consider
multi-core systems with a bus providing access to a shared memory with
a single port~\cite{Ernst,ErnstJournal,Jian,Icess11,Icess12}.
However, these works differ considerably in the considered
task models and in the scheduling policies for both the tasks themselves and their
memory requests.
%
Applications are typically modeled as independent periodic/sporadic task sets
or as acyclic task graphs~\cite{Rosen07_rtss07,Chatto}. Scheduling is often based on
fixed priorities~\cite{Icess11,ErnstJournal}, while tasks in task graphs
are statically scheduled using techniques that respect precedence constraints,
e.g., list scheduling.
The approaches also support different task preemption models, ranging from fully
preemptive~\cite{Ernst,ErnstJournal} to non-preemptive~\cite{Icess11,Icess12,Rosen07_rtss07,Chatto},
with limited preemption at the granularity of TDM time slots as a compromise between the two~\cite{Jian}.

The task model in~\cite{Jian} is based on superblocks, which are smaller pieces of
code with known BCET, WCET, and minimum and maximum number of memory accesses.
The concept of superblocks in~\cite{Jian} is related to
our regions in the sense that it provides finer-grained
information about when memory accesses occur during the execution of a
task, improving the accuracy of analysis. 
The main difference lies in the preemption model: tasks may be preempted at the
boundaries of superblocks, or of sets of superblocks in a TDM time slot, while preemption is not possible at region boundaries in our model.

A limitation of most of the previously mentioned analysis approaches is that they
only support a single memory arbiter, such as an unspecified
work-conserving arbiter~\cite{Icess11,Ernst}, fixed-priority arbitration, round-robin~\cite{Icess12},
TDM~\cite{Rosen07_rtss07, Chatto, Timon, Schra2010, Schra2011}, or
first-come first-served. This does not address the diversity of memory
arbiters in multi-cores, making these analyses point solutions tied to a
single platform rather than reusable frameworks that can be applied more
generally. This problem is partially mitigated by the analysis
in~\cite{Jian}, which supports three of these
arbitration mechanisms in a single unified framework, although that
work is limited to systems where periodic tasks are modeled as sets of superblocks
and scheduled using TDM. In contrast, our work is more general and applies to
any periodic deadline-constrained task set under any non-preemptive
task scheduler.
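To illustrate why such analyses are inherently arbiter-specific (the symbols below are our own and not taken from the cited works): under TDM with $m$ cores, each owning one slot of length $S$ per round, a request with service time $C \le S$ that arrives just too late to complete in the current slot waits up to $C$ for the remainder of its own slot and $(m-1)\,S$ for the slots of the other cores before being served, yielding the per-request bound
\[
\Delta \;\le\; (m-1)\,S + 2C.
\]
A work-conserving or priority-based arbiter admits no such fixed slot-based argument and requires a structurally different analysis, which is why single-arbiter analyses do not transfer between platforms.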
In a comparative analysis, Kelter et al.~\cite{Kelter13} compare different arbitration methods with respect to their average-case
performance, while the main objective of our framework is to derive worst-case estimates.
This work presents a scalable framework for memory contention analysis
of non-preemptive real-time systems that is general
with respect to both the supported task schedulers and memory arbiters.


