Under fixed-priority scheduling, we study the schedulability of sporadic task sets composed of one self-suspending task and multiple non-suspending tasks. Assuming that the lowest priority is assigned to the self-suspending task, we provide a response time test for such a task. We first consider the case where the self-suspending task has only one suspension region. Without loss of generality, we assume that the self-suspending task suspends for the entire length of its suspension region (i.e. $S_{i,1}$), as a shorter suspension cannot increase the response time.

In a suspension-aware analysis the response time of a self-suspending task is given by
\begin{equation}
R_i = R_{i,1} + S_{i,1} + R_{i,2}
\end{equation} where $R_{i,1}$ and $R_{i,2}$ are the response times of its first and last computing segments, respectively.

As explained in the previous section, the critical instant for $\tau_i$ happens when the higher priority tasks are released synchronously with the first and/or the last computing segment. However, since we do not know a priori which release pattern corresponds to the critical instant, an exact response time analysis must consider all release combinations. The exact response time of $\tau_i$ is then simply the maximum response time over those combinations.
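This exhaustive search can be sketched as follows. The function and parameter names are ours; \texttt{response\_for\_pattern} stands for a hypothetical callable that returns the response time of $\tau_i$ for one release pattern, e.g. by composing the per-segment analyses described in the remainder of this section.

```python
from itertools import product

def exact_response_time(n_hp, response_for_pattern):
    """Exact R_i: maximum over all 2**n_hp binary release patterns.

    response_for_pattern maps a pattern X (a tuple where X[k] == 1 means
    tau_k is released synchronously with the last computing segment of
    tau_i) to the response time of tau_i under that pattern.
    """
    return max(response_for_pattern(X)
               for X in product((0, 1), repeat=n_hp))
```

With $n-1$ higher priority tasks, this checks $2^{n-1}$ release combinations, which is the cost of exactness in this analysis.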

Let us now focus on how to compute $R_{i,1}$ and $R_{i,2}$ for any release pattern. $R_{i,2}$ is given by Equation~2 and is computed forward, as in the traditional response time analysis for non-self-suspending tasks. That is, we seek the minimum response time that satisfies the fixed-point iteration, starting with $R_{i,2} = C_{i,2}$.

\begin{equation}
R_{i,2} = C_{i,2} + \sum_{\forall k\in hp(\tau_i)} \ceil[\bigg]{\frac{R_{i,2} - O_{k,2}}{T_k}} \times C_k
\end{equation}

\noindent where

\begin{equation}
O_{k,2} = \left\{ 
  \begin{array}{l l}
    0 & \text{if $\tau_k$ is synchronous with $C_{i,2}$}\\
    \max\left(0, \ceil[\bigg]{\frac{R_{i,1}}{T_k}} \times T_k - R_{i,1} - S_{i,1}\right) & \text{otherwise}
  \end{array} \right.
\end{equation}

The offset for higher priority tasks that do not have a synchronous release with $C_{i,2}$ (i.e. those that have a synchronous release exclusively with $C_{i,1}$) depends on the response time of the first computing segment of $\tau_i$. Therefore, we must find $R_{i,1}$ first.
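For concreteness, the forward fixed-point iteration for $R_{i,2}$ and the offset computation can be sketched as below. This is a minimal illustration under our own naming conventions; in addition, the interfering-job count is clamped at zero for offsets larger than the current response time value.

```python
from math import ceil

def offset_seg2(R_i1, S_i1, T_k, sync_with_seg2):
    """O_{k,2}: offset of tau_k relative to the start of C_{i,2}."""
    if sync_with_seg2:
        return 0
    # Next release of tau_k after the first segment completes, shifted
    # to the start of the second segment (clamped at zero).
    return max(0, ceil(R_i1 / T_k) * T_k - R_i1 - S_i1)

def response_time_seg2(C_i2, hp_tasks, offsets):
    """Fixed point for R_{i,2}; hp_tasks is a list of (C_k, T_k) pairs."""
    R = C_i2  # start from the segment's own execution time
    while True:
        R_new = C_i2 + sum(
            max(0, ceil((R - offsets[k]) / T_k)) * C_k
            for k, (C_k, T_k) in enumerate(hp_tasks)
        )
        if R_new == R:  # convergence: smallest fixed point reached
            return R
        R = R_new
```

For example, with $C_{i,2} = 3$ and two higher priority tasks $(C_k, T_k) \in \{(1,4), (2,10)\}$ released synchronously with $C_{i,2}$, the iteration converges to $3 + 2\cdot 1 + 1\cdot 2 = 7$.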

Unfortunately, computing $R_{i,1}$ is not trivial: it must be computed backward whenever at least one task has a synchronous release with $C_{i,2}$. Due to this peculiarity, the usual strategy of increasing the response time until convergence may yield an optimistic value for $R_{i,1}$. Based on this observation, we compute $R_{i,1}$ by decreasing it from an upper bound. That is, starting with the upper bound on $R_{i,1}$ (which corresponds to its worst-case response time when all the higher priority tasks are released synchronously with $C_{i,1}$), we iteratively remove interfering jobs whenever either of the following two conditions is violated.

\begin{enumerate}
\item For each higher priority task $\tau_k$ that has a synchronous release with $C_{i,2}$, the release of its last job interfering with $C_{i,1}$ and the start of $C_{i,2}$ must be separated by at least $T_k$ time units. Otherwise, one interfering job of the violating task is removed.
\item Following the traditional response time equation (with no offsets), convergence cannot be attained with a smaller number of interfering jobs. Otherwise, the number of interfering jobs of each higher priority task is updated according to the new response time value.
\end{enumerate}


Accordingly, the exact value of $R_{i,1}$ can be computed by the procedure shown in Algorithm~\ref{algo:rt1}. Condition 1) is enforced in lines~5-7, whereas condition 2) is enforced in lines~11-15. A release pattern $X$ is a binary array of $n-1$ elements, where element $X_k$ is set to 1 if the corresponding higher priority task $\tau_k$ is assumed to have a synchronous release with the last computing segment of $\tau_i$. Let $NI_k$ denote the number of interfering jobs of task $\tau_k$.

\begin{algorithm}[h]
\SetKwInOut{Input}{Input}\SetKwInOut{Output}{Output}
   \Input{$\tau$ - The taskset\\$X$ - A release pattern\\$UB$ - The upper bound for $R_{i,1}$}
   \Output{$R_{i,1}^{X}$ - The exact response time for the first segment of $\tau_i$ according to the release pattern $X$}
	 $R_{i,1}^{X,bwd} \leftarrow 0$ \;
	 $R_{i,1}^{X} \leftarrow UB$ \;
	 $NI_k \leftarrow \ceil[\bigg]{\frac{UB}{T_k}}$ \;
	 \While{$R_{i,1}^{X,bwd} \neq R_{i,1}^{X}$} {
		\If{$\frac{R_{i,1}^{X} + S_{i,1}}{T_k} < NI_k~\text{and}~X_k = 1$} {
			$NI_k \leftarrow NI_k - 1$ \;
		}
		$R_{i,1}^{X,bwd} \leftarrow R_{i,1}^{X}$ \;
		$R_{i,1}^{X,fwd} \leftarrow 0$ \;
		$R_{i,1}^{X} \leftarrow C_{i,1}$ \;
		 \While{$R_{i,1}^{X,fwd} \neq R_{i,1}^{X}$} {
			$R_{i,1}^{X,fwd}\leftarrow R_{i,1}^{X}$ \;
			$R_{i,1}^{X} \leftarrow C_{i,1} + \sum_{\forall k\in hp(\tau_i)} \min(NI_k, \ceil[\bigg]{\frac{R_{i,1}^{X,fwd}}{T_k}}) \times C_k$ \;
		}
		$NI_k \leftarrow \min(NI_k,  \ceil[\bigg]{\frac{R_{i,1}^{X}}{T_k}})$ \;
	 }
	 return $R_{i,1}^{X}$\;
\caption{Compute exact $R_{i,1}$ for one release pattern}
\label{algo:rt1}
\normalsize
\end{algorithm}
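A direct transcription of Algorithm~\ref{algo:rt1} into executable form might look as follows. This is a sketch under our naming conventions: \texttt{hp\_tasks} is a list of $(C_k, T_k)$ pairs, and \texttt{X[k] == 1} marks a task released synchronously with $C_{i,2}$; the per-task checks are made explicit as loops over $k$.

```python
from math import ceil

def exact_R_i1(C_i1, S_i1, hp_tasks, X, UB):
    """Backward computation of R_{i,1} for one release pattern X."""
    NI = [ceil(UB / T_k) for (_, T_k) in hp_tasks]  # initial job counts
    R, R_bwd = UB, 0
    while R_bwd != R:
        # Condition 1: for tasks synchronous with C_{i,2}, the last
        # interfering job must be released at least T_k before the
        # start of the second segment; otherwise drop one job.
        for k, (C_k, T_k) in enumerate(hp_tasks):
            if X[k] == 1 and (R + S_i1) / T_k < NI[k]:
                NI[k] -= 1
        R_bwd = R
        # Inner fixed point: forward iteration capped at NI[k] jobs.
        R_fwd, R = 0, C_i1
        while R_fwd != R:
            R_fwd = R
            R = C_i1 + sum(
                min(NI[k], ceil(R_fwd / T_k)) * C_k
                for k, (C_k, T_k) in enumerate(hp_tasks)
            )
        # Condition 2: job counts cannot exceed what R supports.
        NI = [min(NI[k], ceil(R / T_k))
              for k, (_, T_k) in enumerate(hp_tasks)]
    return R
```

As a sanity check, with $C_{i,1} = 3$, $S_{i,1} = 1$, one higher priority task $(C_k, T_k) = (2, 10)$ synchronous with $C_{i,2}$, and $UB = 5$, condition 1) removes the single interfering job (its release would fall less than $T_k$ before the start of the second segment), and the procedure converges to $R_{i,1} = 3$.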

The exact value of $R_{i,1}$ can also be found through an optimization formulation, as follows. For simplicity, let $k$, $a$, and $b$ be the variables that iterate over the indexes of all the higher priority tasks, of the higher priority tasks with a synchronous release with $C_{i,1}$, and of the higher priority tasks with a synchronous release with $C_{i,2}$, respectively. The decision variables are $R_{i,1}$, $NI_k$, $R_{i,1}^{'}$, and $NI_k^{'}$. Constraints~\eqref{eq:ct8}--\eqref{eq:ct11} ensure condition 2), while condition 1) is imposed by Constraint~\eqref{eq:ct4}.

\begin{eqnarray}
& \text{max} & R_{i,1} \label{eq:of}\\
& \text{s.t.} & R_{i,1} \ge C_{i,1} \label{eq:ct1}\\
& & R_{i,1} \le UB \label{eq:ct2}\\
& & NI_k \ge 0 \label{eq:ct3}\\
& & NI_b \le \floor[\bigg]{\frac{R_{i,1} + S_{i,1}}{T_b}} \label{eq:ct4}\\
& & NI_k \le \ceil[\bigg]{\frac{R_{i,1}}{T_k}} \label{eq:ct5}\\
& & R_{i,1} = C_{i,1} + \sum_{\forall k} NI_k \times C_k \label{eq:ct6}\\
& & NI_k^{'} \ge 0 \label{eq:ct7}\\
& & NI_k^{'} \le NI_k \label{eq:ct8}\\
& & \sum_{\forall a} NI_a^{'} < \sum_{\forall a} NI_a \label{eq:ct9}\\
& & R_{i,1}^{'} = C_{i,1} + \sum_{\forall b} NI_b \times C_b + \sum_{\forall a} NI_a^{'} \times C_a \label{eq:ct10}\\
& & NI_a \le \ceil[\bigg]{\frac{R_{i,1}^{'}}{T_a}} \label{eq:ct11}
\end{eqnarray}


\textbf{Optimizing the number of scenarios to be checked.} If the following inequality holds, then $\tau_k$ is guaranteed to have synchronous releases with both computing segments of $\tau_i$, which reduces the number of release patterns that must be checked.

\begin{equation*}
T_k - C_k \le S_{i,1}
\end{equation*}
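Under this observation, the enumeration of release patterns can be restricted as sketched below (names are ours): every $\tau_k$ with $T_k - C_k \le S_{i,1}$ is fixed to be synchronous with the last computing segment, and only the remaining tasks are enumerated.

```python
from itertools import product

def candidate_patterns(hp_tasks, S_i1):
    """Yield release patterns, fixing X[k] = 1 for every tau_k that is
    guaranteed to be synchronous with both segments (T_k - C_k <= S_i1).
    hp_tasks: list of (C_k, T_k) pairs."""
    # Only tasks whose release pattern is not forced need enumerating.
    free = [k for k, (C_k, T_k) in enumerate(hp_tasks) if T_k - C_k > S_i1]
    for bits in product((0, 1), repeat=len(free)):
        X = [1] * len(hp_tasks)  # forced tasks stay at 1
        for k, b in zip(free, bits):
            X[k] = b
        yield X
```

With $m$ such tasks among the $n-1$ higher priority tasks, the number of patterns to check drops from $2^{n-1}$ to $2^{n-1-m}$.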