\subsection{On the synchronous release with the first execution region}
\label{sec:example1}

Exact worst-case response time analysis is based on the notion of \emph{critical instant}. The critical instant for a task $\tau_i$ is defined as an instant at which a request for that task has the largest response time. Since the response time of a task depends on the higher priority tasks, the critical instant for a task $\tau_i$ is generally characterized by the release pattern of the higher priority tasks.

In~\cite{Karthik:RTAS10}, Lakshmanan et~al.\ argue that the release pattern $\ssPhi$ is a critical instant for a self-suspending task $\sstask$, where $\ssPhi$ is defined as follows:
\begin{itemize}
	\item every higher priority non-self-suspending task $\tau_h \equals \left\langle \left(C_{h}\right), D_h, T_h\right\rangle$ is released simultaneously with $\sstask$;
	\item jobs of $\tau_h$ eligible to be released during any $j^\text{th}$ ($1 \leq j < m_i$) suspension region of $\sstask$ are delayed to be aligned with the release of the subsequent $(j+1)^\text{th}$ execution region of $\sstask$; and
	\item all remaining jobs of $\tau_h$ are released every $T_h$.
\end{itemize}
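As an illustration (outside the formal development), the construction of $\ssPhi$ for a higher priority task $\tau_h$ can be sketched in a few lines of code. The function and parameter names are ours, and the region arrival times and suspension windows of $\sstask$ are assumed to be already known from the schedule (the $j^\text{th}$ suspension window ends when the $(j+1)^\text{th}$ execution region arrives):

```python
def phi_releases(T_h, region_arrivals, suspension_windows, horizon):
    """Sketch of the release times of tau_h under Phi: tau_h is released at
    time 0 together with the ss task; a release falling inside a suspension
    window [s, e) is delayed to the arrival of the next execution region;
    every later release keeps the minimum inter-arrival time T_h."""
    releases, t = [], 0
    while t < horizon:
        # delay a release that falls inside a suspension window
        for (s, e), nxt in zip(suspension_windows, region_arrivals[1:]):
            if s <= t < e:
                t = nxt
        releases.append(t)
        t += T_h
    return releases
```

With the numbers of the counter-example below (region arrivals at $0$ and $5$, suspension window $[3,5)$), `phi_releases(4, [0, 5], [(3, 5)], 12)` produces the releases $0, 5, 9$ of $\tau_1$ used in Scenario~1.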

We prove with a counter-example that $\ssPhi$ is not a critical instant for a self-suspending task $\sstask$.

\begin{lemma}
The worst-case response time of task $\sstask$ is not given by $\ssPhi$.
\end{lemma}
\begin{proof}
Consider a task set $\tau = \left\{\tau_1, \tau_2, \sstask\right\}$ of three constrained-deadline sporadic tasks scheduled on a single processor. $\tau_1$ and $\tau_2$ are non-self-suspending tasks and $\sstask$ is a self-suspending task. Let the characteristics of these tasks be as follows: $\tau_1 \equals \left\langle \left(1\right), 4, 4\right\rangle$; $\tau_2 \equals \left\langle \left(1\right), 100, 100\right\rangle$ and $\sstask \equals \left\langle \left(1, 2, 3\right), 1000, 1000\right\rangle$. Let the priorities of the tasks be assigned using the RM policy (i.e., the smaller the period, the higher the priority); this implies that task $\tau_1$ has the highest priority and $\sstask$ the lowest.
Let us compute the response time of task $\sstask$ considering two different job release patterns: (i) a job release pattern compliant with the definition of $\ssPhi$ in \cite{Karthik:RTAS10} and (ii) a job release pattern different from $\ssPhi$. We show that there exists a job release pattern which \textit{is not} $\ssPhi$ and for which the response time of task $\sstask$ is larger than its response time under $\ssPhi$.

\textit{Scenario 1.} Let us consider the job release pattern $\ssPhi$ as shown in Fig.~\ref{fig:ex-phi}.
\begin{figure}
  \centering
  \subfloat[Scenario 1. Response-time analysis when the job release pattern \textit{is} $\ssPhi$.]{\label{fig:ex-phi} \includegraphics[width=0.85\linewidth]{ex-phi}} \\
  \subfloat[Scenario 2. Response-time analysis when the job release pattern \textit{is not} $\ssPhi$.]{\label{fig:ex-no-phi} \includegraphics[width=0.85\linewidth]{ex-no-phi}}
  \caption{Counter-example to $\ssPhi$ being the critical instant of $\sstask$.}
  \label{fig:hist-comp}
\end{figure}
Using the standard response-time equation, we obtain $\ssresponse{1}=3$ for the execution region $\sstasko$ and $\ssresponse{2}=4$ for the execution region $\sstaskt$ (see Fig.~\ref{fig:ex-phi}).
%Upon determining the response time $R_{3,1}$ of the computing region $\tau_{3,1}$ using the standard response-time expression, we obtain: $R_{3,1} = 3$. 
%With this, the second job of task $\tau_1$ in Fig.~\ref{fig:ex-phia} is released during the suspending region of task $\tau_3$. In order to obtain the job release pattern $\Phi_3$, let us (i) delay the release of the second job of task $\tau_1$ by one time unit (i.e., it is released at time $5$ now instead of $4$) so that it is now released at the same time as the computing region $\tau_{3,2}$ and (ii) delay all the subsequent job releases of $\tau_1$ by one time unit as well, in order to respect its minimum inter-arrival time of $\tau_1$. The job release pattern $\Phi_3$ is shown in Fig.~\ref{fig:ex-phib}. 
%And upon determining the response time $R_{3,2}$ of the computing region $\tau_{3,2}$, we obtain: $R_{3,2} = 4$. 
Hence, under the release pattern $\ssPhi$, the response-time of task $\sstask$ is given by: $\ssresp=\ssresponse{1}+\sssuspend{1}+\ssresponse{2}=3+2+4=9$. %Since the release pattern in $\ssPhi$, this is the worst-case response time of $\sstask$ as per work in~\cite{Karthik:RTAS10}.
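The standard response-time recurrence invoked above, $R = C + \sum_h \lceil R / T_h \rceil C_h$, can be sketched as a fixed-point iteration (function and parameter names are ours; each interfering task is assumed to be released synchronously with the region, and the iteration is assumed to converge):

```python
from math import ceil

def region_response(C_region, interfering):
    """Smallest fixed point of R = C + sum(ceil(R / T_h) * C_h) over the
    higher-priority tasks (C_h, T_h) released synchronously with the
    execution region under analysis."""
    R = C_region
    while True:
        nxt = C_region + sum(ceil(R / T_h) * C_h for C_h, T_h in interfering)
        if nxt == R:
            return R
        R = nxt
```

For Scenario~1, `region_response(1, [(1, 4), (1, 100)])` yields $3$ for the first region, and `region_response(3, [(1, 4)])` yields $4$ for the second region (only $\tau_1$ is released at its arrival; the next job of $\tau_2$ is far away), matching $\ssresponse{1}$ and $\ssresponse{2}$ above.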

\textit{Scenario 2.} Let us consider a job release pattern as shown in Fig.~\ref{fig:ex-no-phi}. Observe that this release pattern is not $\ssPhi$ since task $\tau_2$ is not released synchronously with task $\sstask$. Using the same standard response-time equation, we obtain a response time $\ssresponse{1}=2$ for the execution region $\sstasko$ and $\ssresponse{2}=6$ for the execution region $\sstaskt$ (see Fig.~\ref{fig:ex-no-phi}). Hence, under a release pattern which is not $\ssPhi$, the response-time of task $\sstask$ is given by: $\ssresp~=~2+2+6~=~10$.
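Both scenarios can also be replayed with a minimal discrete-time simulation sketch (unit-time steps; all names are ours). Every higher priority job preempts $\sstask$, which is released at time $0$ with lowest priority:

```python
def simulate(hp_jobs, regions, suspensions, horizon=200):
    """Discrete-time preemptive fixed-priority simulation (unit steps).
    hp_jobs: (release, wcet) pairs of higher-priority jobs.
    regions/suspensions: execution-region lengths and suspension lengths
    of the self-suspending task. Returns its response time."""
    hp = [[r, c] for r, c in sorted(hp_jobs)]   # [release, remaining work]
    region, remaining, ready_at = 0, regions[0], 0
    for t in range(horizon):
        job = next((j for j in hp if j[0] <= t and j[1] > 0), None)
        if job is not None:
            job[1] -= 1                 # higher-priority work wins the CPU
        elif t >= ready_at and remaining > 0:
            remaining -= 1              # the self-suspending task executes
            if remaining == 0:
                if region == len(regions) - 1:
                    return t + 1        # completion time = response time
                ready_at = t + 1 + suspensions[region]
                region += 1
                remaining = regions[region]
    raise RuntimeError("increase horizon")
```

With the releases of Scenario~1 ($\tau_1$ at $0, 5, 9$ and $\tau_2$ at $0$), the simulation returns $9$; with a Scenario-2 pattern reproducing the response times reported above (our reconstruction: $\tau_1$ at $0, 4, 8$ and $\tau_2$ at $4$), it returns $10$.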

Clearly, the response time of task $\sstask$ obtained in Scenario~2 is larger than that obtained in Scenario~1. Hence, the claim of Lakshmanan et~al.~\cite{Karthik:RTAS10} that $\ssPhi$ is the critical instant for a self-suspending task $\sstask$ is incorrect.
\end{proof}

%This counter-example proves the following Lemma:
%\begin{lemma}
%The worst-case response time of a lower priority self-suspending task $\sstask$ suffering interference from a set of higher priority non-self-suspending tasks is not given by $\ssPhi$.
%\end{lemma}

We now prove properties about the job release pattern characterizing the critical instant of task $\sstask$. 
%We now prove that the critical instant of a self-suspending task happens when each higher priority task releases a job synchronously with the beginning of the execution of at least one of the execution regions of the self-suspending task under consideration, although not all higher priority tasks must necessarily release a job synchronously with the same execution region. 

\begin{lemma}
\label{lem:proof1}
%Let $\sstask$ be the self-suspending task under analysis and let $\hp{ss}$ be the set of non-self-suspending tasks of higher priority than $\sstask$.
From any feasible release pattern $\relpattern$ of the tasks in $\hp{ss}$, we can construct a feasible release pattern $\relpattern'$ from $\relpattern$ such that:
\begin{description}
\item[(1)] In $\relpattern'$, at least one job of every task in $\hp{ss}$ is released synchronously with the release of an execution region of $\sstask$;
%\item[(2)] In $\relpattern'$, there is no busy period that starts or ends during a self-suspending region of $\sstask$;
\item[(2)] $\relpattern'$ entails a larger (or equivalent) response time of task $\sstask$ than $\relpattern$.
\end{description}
\end{lemma}
\begin{proof}
%The proof consists in generating $\relpattern'$ from $\relpattern$ such that (1) holds by construction and (2) holds true from the modifications made to $\relpattern$.
Let us assume that $\sstask$ is scheduled to execute concurrently with a set $\hp{ss}$ of higher priority tasks and suppose that those tasks are released according to the release pattern $\relpattern$. We denote by $\windowbegin{k}$ and $\windowend{k}$ the beginning and end of the $k^\text{th}$ time window during which only tasks in $\hp{ss}$ are executed. That is, $\sstask$ does not execute at all in the time intervals defined by $\left[ \windowbegin{k}, \windowend{k} \right]$, $\forall k > 0$. Those intervals will be referred to as the \emph{higher priority tasks busy windows}. %We denote by $\ssrelease{j}$ and $\ssresponse{j}$ the release and response time of the $j^\text{th}$ execution region of $\sstask$ and $\sscomplete{j} \equals \ssrelease{j} + \ssresponse{j}$ denotes the completion time of its execution. 

\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{proof1.pdf}
\caption{Illustration of the proof of Lemma~\ref{lem:proof1}.}
\label{fig:proof1}
\end{figure}

Fig.~\ref{fig:proof1} (top part) shows these notations with a simple example that will be used throughout the proof to illustrate the process of creating $\relpattern'$ from $\relpattern$. This example assumes that $\hp{ss}$ consists of three sporadic tasks. The interference of those tasks on the self-suspending task $\sstask$ is represented by light rectangles on the first line of Fig.~\ref{fig:proof1}. Dark rectangles correspond to the execution of the execution regions of $\sstask$. The busy windows generated by the tasks in $\hp{ss}$ are shown as arrow-filled rectangles on the second line of Fig.~\ref{fig:proof1}. Note that only the jobs potentially contributing to the response time of $\sstask$ are depicted in Fig.~\ref{fig:proof1}.

%This proof considers only the time windows that overlap with the execution of the job of $\sstask$ under consideration. Consequently we also redefine $\relpattern$ as the set of higher priority task releases that occur only within $\left[ \windowbegin{1}, \sscomplete{\operatorname{last}} \right]$ (see the step 1 in Figure~\ref{fig:proof1}). Note that at this step, although there are three tasks in $\hp{ss}$ we assume that only two of them release jobs within $\left[ \windowbegin{1}, \sscomplete{\operatorname{last}} \right]$ and these two tasks are illustrated at the top and at the bottom of the orange boxes in Figure~\ref{fig:proof1}.

%Given these notations, note that there may exist $k$ and $j$ such that $\windowbegin{k} \leq \ssrelease{j} \leq \windowend{k}$, i.e., the $j^\text{th}$ region of $\sstask$ within a higher priority tasks busy period (like the first busy window in the example of Figure~\ref{fig:proof1}) or $\ssrelease{j} \leq \windowbegin{k} < \windowend{k} < \sscomplete{j}$ (i.e. $\left[ \windowbegin{k}, \windowend{k} \right] \subset \left[ \ssrelease{j}, \sscomplete{j} \right]$) but there cannot be $k$ and $j$ such that $\windowbegin{k} < \sscomplete{j} \leq \windowend{k}$ (i.e. $\sscomplete{j} \in \left[\windowbegin{k}, \windowend{k} \right]$) since by definition of the higher priority tasks busy windows $\sstask$ is not executed within $\left[ \windowbegin{k}, \windowend{k} \right]$ and thus must complete either before the beginning or after the end the busy window $\left[ \windowbegin{k}, \windowend{k} \right]$.

First of all, we remove from $\relpattern$ all the releases of the tasks in $\hp{ss}$ that occur in a busy window $\left[ \windowbegin{k}, \windowend{k} \right]$ that does not overlap with any execution region of $\sstask$ (see Step 1 in Fig.~\ref{fig:proof1}). Note that removing these releases, along with the execution of the corresponding jobs, alters neither the schedule of $\sstask$ (i.e., it does not impact the response time of any of its execution regions) nor that of the jobs of any higher priority task released in any other busy window in $\relpattern$. As a result, the response time of $\sstask$ is not impacted by this modification of $\relpattern$. We define the resulting release pattern as $\relpattern^{1}$. %as 
%\begin{align*}
%\relpattern^{1} \equals \relpattern \setminus \big\{ & \release{h}{x} | \exists k: \release{h}{x} \in \left[ \windowbegin{k}, \windowend{k} \right] \text{ and } \\
%& \exists \ell: \sscomplete{j} \leq \windowbegin{k} \text{ and } \windowend{k} \leq \ssrelease{j + 1} \big\}
%\end{align*}
%where $\release{h}{x}$ is the release of the $x^\text{th}$ job of a task $\tau_x \in \hp{ss}$.
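This first removal step admits a short sketch (an illustration only; the names are ours, and windows and active-region intervals are given as pairs of instants):

```python
def prune_dead_windows(hp_releases, busy_windows, region_intervals):
    """Drop every higher-priority release occurring in a busy window
    [b, e] that overlaps no interval [r, f) during which a region of the
    ss task is pending or executing: such a window lies entirely inside a
    suspension and cannot affect the ss task's schedule."""
    def dead(b, e):
        return not any(b < f and r < e for r, f in region_intervals)
    return [x for x in hp_releases
            if not any(b <= x <= e and dead(b, e) for b, e in busy_windows)]
```

For instance, with active regions $[0,3)$ and $[9,14)$ and busy windows $[3,5]$ and $[10,12]$, a release at time $4$ is removed while one at time $11$ is kept.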

In order to get (1), each task in $\hp{ss}$ must release at least one job in $\relpattern'$. Since there may be some tasks in $\hp{ss}$ that do not release a job in $\relpattern^{1}$, %(or they did release jobs in $\relpattern$ but those releases got discarded in the previous step because they did not contribute to $\sstask$'s response time), 
one job release from each of those tasks is added to $\relpattern^{1}$ such that it coincides with the arrival of the last execution region of $\sstask$ (see Step 2 on Fig.~\ref{fig:proof1}). 
%In the example of Figure~\ref{fig:proof1}, a release of the third higher priority task of $\hp{ss}$ is added to $\relpattern^{1}$ and that release is synchronous with the arrival of the last region of $\sstask$. 
This transformation of $\relpattern^{1}$ trivially increases the response time of the last execution region compared to $\relpattern$, and consequently also increases the overall response time of $\sstask$.
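This second step admits a one-line sketch (names ours; releases are grouped per task):

```python
def add_missing_releases(releases_by_task, last_region_arrival):
    """Any higher-priority task left with no release gets one job released
    synchronously with the arrival of the last execution region of the ss
    task, which can only increase the interference on that region."""
    return {h: (rels if rels else [last_region_arrival])
            for h, rels in releases_by_task.items()}
```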

The next step in constructing $\relpattern'$ from $\relpattern$ consists in considering all the execution regions of $\sstask$ one by one, starting from $\ssregion{1}$, and doing the following for each execution region $\ssregion{j}$: if there is a busy window $k$ such that $\windowbegin{k} \leq \ssrelease{j} \leq \windowend{k}$ (i.e., $\ssregion{j}$ is released within $\left[ \windowbegin{k}, \windowend{k} \right]$), we compute the distance $\offset{j}$ between the arrival of $\ssregion{j}$ and $\windowbegin{k}$, i.e., $\offset{j} \equals \ssrelease{j} - \windowbegin{k}$. Note that, by definition, $\offset{j} \geq 0$. If such an overlap exists, we postpone all the higher priority job releases that occur at or after $\windowbegin{k}$ by $\offset{j}$ time units. This shift in the job releases makes $\offset{j}$ additional units of workload from the tasks in $\hp{ss}$ interfere with the execution of $\ssregion{j}$. As a consequence, the response time of $\ssregion{j}$ increases by $\offset{j}$ (i.e., $\ssresponse{j} \leftarrow \ssresponse{j} + \offset{j}$), and so does the time $\sscomplete{j}$ at which it finishes its execution ($\sscomplete{j} \leftarrow \sscomplete{j} + \offset{j}$) and, in a cascade effect, the times at which the subsequent execution regions are released (i.e., $\forall \ell > j$, $\ssrelease{\ell} \leftarrow \ssrelease{\ell} + \offset{j}$). Step 3(1) in Fig.~\ref{fig:proof1} illustrates this process for the first region of $\sstask$: all the task releases are delayed by $\offset{1}$ time units. Step 3(2) then illustrates the second and last iteration of the process, in which the second execution region of $\sstask$ is considered and all releases occurring at or after $\windowbegin{2}$ are postponed by $\offset{2}$ time units. For clarity, we have redrawn the interference pattern on $\sstask$ resulting from that step.

Note that at each iteration of the transformation described above, the response time of the currently considered region $\ssregion{j}$ of $\sstask$ increases by $\offset{j}$ time units. However, given that along with this increase, we also delay by $\offset{j}$ time units the release of all the subsequent execution regions of $\sstask$ and the releases of all the jobs of the tasks in $\hp{ss}$ that interfere with those regions, there is no variation in the interference suffered by those execution regions and their response time is not impacted by the transformation. 
After each iteration, the overall response time of $\sstask$ therefore increases by $\offset{j}$ time units only.
%The overall response time of $\sstask$ has therefore increased by $\offset{j}$ time units. 
%there is no higher priority workload that can overlap with the time window $\left[ \ssrelease{j}, \sscomplete{j} \right]$ after that step that was not overlapping with $\left[ \ssrelease{j}, \sscomplete{j} \right]$ before that step. In fact, looking at the schedule from the updated finishing time $\sscomplete{j}$ onward, after each iteration we can see that nothing has changed because of the shifting of the releases of the tasks and regions except that the entire schedule has been shifted by $\offset{j}$ time units. 
The release pattern resulting from this transformation is referred to as $\relpattern^{2}$. Note that all the jobs in $\relpattern^{2}$ are released within the execution regions of $\sstask$.
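One iteration of this transformation can be sketched as follows (names ours; releases and arrivals are plain instants):

```python
def delay_busy_window(hp_releases, region_releases, j, b_k):
    """Region j of the ss task arrives at region_releases[j], inside the
    busy window starting at b_k. All higher-priority releases at or after
    b_k are postponed by phi_j = region_releases[j] - b_k, and the arrivals
    of all subsequent regions are pushed back by phi_j as well (cascade)."""
    phi = region_releases[j] - b_k                    # phi_j >= 0
    hp = [r + phi if r >= b_k else r for r in hp_releases]
    regions = region_releases[:j + 1] + [r + phi for r in region_releases[j + 1:]]
    return hp, regions
```

For example, if region $1$ arrives at time $2$ inside a busy window starting at $0$, the releases $0, 1, 4$ become $2, 3, 6$ and a second region previously arriving at $7$ now arrives at $9$.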

As already explained, the response time of every region of $\sstask$ can only have increased (or remained the same) during the process of constructing $\relpattern^{2}$ described above. Finally, in order to obtain $\relpattern'$, $\relpattern^{2}$ is further modified as follows. For each task $\tau_h \in \hp{ss}$, let $\mathcal{R}_h$ denote the set of all its release time-instants in the pattern $\relpattern^{2}$. We know that for each of these instants $\release{h}{x}$ there exists an execution region $\ssregion{j}$ of $\sstask$ such that $\ssrelease{j} \leq \release{h}{x} < \sscomplete{j}$, i.e., that release of $\tau_h$ happens while an execution region $\ssregion{j}$ of $\sstask$ is running or waiting for the CPU. %This is a direct consequence of our first and second step in creating $\relpattern'$, in which we removed all the releases of $\tau_h$ that occurred during a suspension period of $\sstask$ and did not contribute to $\sstask$'s response time and then added an additional release in the last region if $\tau_h$ did not have any release left. 
Now, for each release in $\mathcal{R}_h$, we compute the offset $O_{h,x}$ of $\release{h}{x}$ relative to the release of the execution region of $\sstask$ which is active at that time. That is, for each $\release{h}{x} \in \mathcal{R}_h$ we compute $O_{h,x} \equals \release{h}{x} - \ssrelease{j}$, where $j$ is such that $\ssrelease{j} \leq \release{h}{x} < \sscomplete{j}$. We then compute the minimum offset $\minoffset{h}$ for $\tau_h$ such that $\minoffset{h} \equals \min_{\forall x} \left\{ O_{h,x} \right\}$
and shift to the left all the releases of $\tau_h$ by that minimum offset, i.e., for all $\release{h}{x} \in \mathcal{R}_h$, we impose $\release{h}{x} \leftarrow \release{h}{x} - \minoffset{h}$. As a result, none of the releases of $\tau_h$ exits its ``encompassing'' execution region of $\sstask$ and, as a consequence, the interference on $\sstask$ is not modified when passing from $\relpattern^{2}$ to $\relpattern'$. Moreover, because the releases of all the jobs of $\tau_h$ are shifted by the same amount of time, the minimum inter-arrival time between those jobs is still respected. Finally, at least one job of each task $\tau_h \in \hp{ss}$ is now synchronous with the release of an execution region of $\sstask$ (the one(s) for which the relative offset is minimum, i.e., $O_{h,x} = \minoffset{h}$).
%, which proves the lemma. 
This last step of the proof is depicted on the last line of Fig.~\ref{fig:proof1} for the second task in $\hp{ss}$.
From the construction above, it follows that both (1) and (2) hold true. Hence the lemma.
\end{proof}
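The final left-shift step of the proof can be sketched as follows (an illustration with our own names; every release of $\tau_h$ is assumed to lie inside exactly one active-region interval $[\ssrelease{j}, \sscomplete{j})$, as established in the proof):

```python
def left_shift(releases_h, region_intervals):
    """Shift all releases of tau_h left by the minimum offset to the
    release of the enclosing active region [a, f), so that at least one
    release becomes synchronous with a region arrival while no release
    leaves its enclosing region."""
    offsets = [r - next(a for a, f in region_intervals if a <= r < f)
               for r in releases_h]
    m = min(offsets)
    return [r - m for r in releases_h]
```

For instance, with active regions $[0,3)$ and $[5,9)$ and releases at $1$ and $7$, the offsets are $1$ and $2$, so every release is shifted left by $1$: the first release becomes synchronous with the arrival of the first region.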

The previous lemma leads to the following corollary.

\begin{corollary}
At the critical instant of a self-suspending task $\sstask$, every higher priority task releases a job synchronously with the arrival of at least one execution region of $\sstask$, although not all higher priority tasks must release a job synchronously with the same execution region.
\end{corollary}


