\section{Timing Analysis: Without Slot Multiplexing}
\label{sec:withoutSM}
	\begin{figure*}%[t]
		\centering
		        %\vspace{-cm}
			\includegraphics[width=1.0\textwidth]{transformation.pdf}
			\vspace{-13cm}
		\caption{Logical transformation of the problem of computing $busCycles_i$.}
		\label{fig:transformation}
	\end{figure*}
In the above discussion, $busCycles_i$ is the only component for which we have not yet presented a computation technique; this section details it. For clarity of exposition, we first assume that slot multiplexing is not allowed by FlexRay. Subsequently, we describe how $busCycles_i$ can be computed by our proposed approach when slot multiplexing is allowed on the DYN segment. Note that the calculation of the remaining components of $\messageWCRT_i$ stays exactly the same as described in Equations \ref{eq:msgWCRT} to \ref{eq:busCycle} in both cases --- with and without slot multiplexing.
	
%	According to the definition of the busy window we model the process of filling a group of consecutive cycles only with higher priority messages with the process of covering bins under the assumptions that the number of items and the total number of copies of each item is given. For this case the conflicts which results after this transformation are the followings: only one copy of each item is allowed to be packed into a bin. The optimization goal of the bin covering is thus to maximize the total number of bins which can be covered based on the previous described conflicts. 

As mentioned before, $busCycles_i$ is the maximum number of cycles by which a message $m_i$ can be delayed by the higher priority messages. An outline of our algorithm to compute $busCycles_i$ for each message $m_i$ is listed in Algorithm \ref{algo:withoutSM}. Starting with the first cycle, i.e., $l=1$, the algorithm iteratively tries to fill cycle $l$ with instances of higher priority messages and, if it succeeds, tries to fill cycle $l+1$, and so on (lines 4 to 9). If, for some message $m_i$, the higher priority instances can fill all of the first $dCycle_i$ cycles, then $m_i$ may miss its deadline and the algorithm terminates, declaring the given message set $\Gamma$ not schedulable (lines 9 to 17). $dCycle_i$ is computed directly from the deadline as an upper bound on the number of cycles that can elapse before the deadline (line 3). Otherwise, if $l\leq dCycle_i$ and the algorithm can completely fill $l-1$ cycles but not the $l$th cycle, Algorithm \ref{algo:withoutSM} reports that the value of $busCycles_i$ is $l - 1$.
 
 The largest number of cycles that can be filled to the minimum level $\phi_{m_i}$ by higher priority messages from the set $hp(m_i)$ is essentially the value of $busCycles_i$. Let $k_h^l$ be the number of instances of message $m_h$ ($m_h \in hp(m_i)$) that are generated during $l$ consecutive cycles. If the algorithm manages to fill $l$ cycles, then, in the next iteration, the number of higher priority message instances to be packed is recomputed as $k_h^{l+1}$ (line 6).
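For illustration, the quantities computed in lines 3 and 6 of Algorithm \ref{algo:withoutSM} can be sketched as follows; the cycle length, deadline, and period values below are hypothetical:

```python
import math

T_bus = 5.0                       # FlexRay cycle length (ms), hypothetical
D_i = 42.0                        # deadline of the message under analysis (ms)
periods_hp = [10.0, 20.0, 40.0]   # periods T_h of the higher priority messages

# Upper bound on the number of cycles implied by the deadline (line 3).
dCycle_i = math.ceil(D_i / T_bus)

def instances_in_l_cycles(l, T_h, T_bus=T_bus):
    """Number of instances k_h^l of a message with period T_h that are
    generated during l consecutive bus cycles (line 6)."""
    return math.ceil(l * T_bus / T_h)

# Instance counts for l = 3 cycles, one entry per higher priority message.
k = {h: instances_in_l_cycles(3, T_h) for h, T_h in enumerate(periods_hp)}
```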

%We note that our Algorithm is similar to the one presented by Zeng et al. \cite{ZengGN10} in the sense that we also try to iteratively fill the cycles. However,  how we 
At any iteration, the problem of filling $l$ cycles is essentially a bin covering problem. Bin covering tries to maximize the number of bins that can be filled to a fixed minimum capacity using a given set of items. In our case, the instances of the higher priority messages that are produced during the $l$ consecutive bus cycles are the items and the DYN segment is the bin, where $\phi_{m_i}$ is the minimum level to which each bin must be filled. Each message is considered as a separate item type and the number of its ready instances as the number of copies of the same item.
Note that in contrast to the traditional bin covering problem, in our case only one copy of each type of item is allowed to be packed in the same bin. This follows from the FlexRay protocol specification (see Section \ref{sec:basic_flex}), which allows only one instance of each message to be transmitted in each DYN segment cycle. Finally, the objective of this bin covering problem is to maximize the total number of bins that can be covered. Note that we are solving a decision version of this problem because we only want to know whether $l$ bins can be filled or not.
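To make the decision version concrete, the following brute-force sketch checks whether $l$ bins can each be filled to a minimum level while packing at most one copy of each item type per bin. The item sizes, copy counts, and capacity are hypothetical, and the search is exponential, so this is an illustration for tiny instances only, not our actual algorithm:

```python
from itertools import combinations

def can_cover(l, sizes, copies, phi):
    """Decision version of bin covering with the one-copy-per-bin conflict:
    can l bins each be filled to at least phi, using at most one copy of
    each item type per bin and at most copies[i] copies of item i overall?
    Exponential search -- illustration only."""
    if l == 0:
        return True
    avail = [i for i in range(len(sizes)) if copies[i] > 0]
    # Try every subset of available item types that covers one bin.
    for r in range(1, len(avail) + 1):
        for subset in combinations(avail, r):
            if sum(sizes[i] for i in subset) >= phi:
                for i in subset:          # tentatively consume the copies
                    copies[i] -= 1
                ok = can_cover(l - 1, sizes, copies, phi)
                for i in subset:          # backtrack
                    copies[i] += 1
                if ok:
                    return True
    return False
```

For example, with item sizes $[3, 2, 2]$, two copies of each, and a minimum level of 4, two or three bins can be covered, but four cannot: each bin needs at least two items and only six copies exist in total.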

We solve the resulting bin covering problem based on the technique presented by Jansen and Solis-Oba \cite{JansenTC2003}. Following their approach, a high-level scheme of our technique is illustrated in Figure \ref{fig:transformation}. As shown in this figure, we first transform the bin covering problem into a \textit{convex block angular resource sharing problem} (see Section \ref{sec:convex}). This problem is solved by the price directive decomposition method which, in turn, must solve a knapsack-like problem at its heart. This is discussed in Section \ref{sec:kp}.
	
%A straightforward algorithm to compute the busy window would be the following:
	\begin{algorithm}
		\caption{Computing the $busCycles_i$ for message $m_i$ for the case of no Slot Multiplexing} 
		\begin{algorithmic}[1]
		         \REQUIRE The message set $\msgset$ and, for each message $m_i \in \msgset$, the set $hp(m_i)$ ($hp(m_i) \subseteq \msgset$) and the system parameters of the messages in $\msgset$
		  \FORALL {$m_i \in \msgset$}
			  \STATE schedulable = false
				\STATE $dCycle_i=\left\lceil \dfrac{\messageDeadline}{\flexrayCycleLength} \right\rceil $
				\FOR{$l = 1 \to dCycle_i$} 
					\FORALL {$m_h \in hp(m_i)$}
						\STATE {$k_{h}^l = \left\lceil l\dfrac{\flexrayCycleLength}{T_h} \right\rceil$}
					\ENDFOR
					\STATE {Solve the bin covering problem using the logical transformation presented in Figure \ref{fig:transformation}}
					\STATE {Let $P\left( \epsilon \right)$ be the approximate solution of the bin covering problem}
					\IF {$P\left( \epsilon \right) < l$} 
						\STATE schedulable = true
						\RETURN $l - 1$ as the value of $busCycles_i$
					\ENDIF
				\ENDFOR
				\IF {schedulable == false}
					\RETURN the set \msgset is not schedulable
				\ENDIF
			\ENDFOR
		\end{algorithmic}
		\label{algo:withoutSM}
	\end{algorithm}

\subsection{Step I}
\label{sec:convex}
As a first step, we formulate the bin covering problem as an Integer Linear Program (ILP). Without any loss of generality, we drop the subscript $i$ from $m_i$; the interpretation remains the same if the subscript is used. The set of higher priority messages is $hp(m)$ and let $N=|hp(m)|$. Considering the message instances in $hp(m)$ as items, we define a bin configuration $\binConfigSet$ to be any subset of items from the set $hp(m)$ such that the items in $\binConfigSet$ can cover the bin.
Let the set of all possible bin configurations be $\allBinConfigs=\{\binConfigSet_1,\binConfigSet_2,\ldots, \binConfigSet_{|\allBinConfigs|}\}$. For any configuration $\binConfigSet_c$, let there be a set of Boolean variables $\left\{\elementInBinConfigSizesSet_{1, c}, \elementInBinConfigSizesSet_{2, c}, \cdots, \elementInBinConfigSizesSet_{N, c} \right\}$, where $\elementInBinConfigSizesSet_{n, c}$ is set to 1 if and only if message $m_n$ is in the set $\binConfigSet_c$. Given the above, a bin is covered if:
	\begin{equation}
		\sum\limits_{n = 1} ^ {N} \elementInBinConfigSizesSet_{n, c} \times \left( W_n - 1 \right) \geq \phi_{m}
	\end{equation}

%Given $N$, let us consider a set of $N$ boolean variables, i.e., $Q=\left\{ \elementInBinConfigSizesSet_{1, m}, \elementInBinConfigSizesSet_{2, m}, \cdots, \elementInBinConfigSizesSet_{|\binConfigSet|, m} \right\}$. We define a bin configuration $\binConfigSet$  to be the any set of boolean variables that are a subset of $Q$ and when they are all assigned the value one, the messages corresponding to the $\binConfigSet$ can cover the minimum capacity of the bin. 

Thus, by definition, any $\binConfigSet_c$ consists of items that satisfy the above equation. 
	%In [Jansen] variables $\elementInBinConfigSizesSet_{n, m}$ are allowed to take values in the set $\{0, 1, \cdots, k_n^L\}$. By forcing them to be in the interval $\{0, 1\}$ we respect the conflicts that maximum only one instance of each message can be packed into a bin.
		
 Let there be an integer variable $x_c$ associated with each configuration $\binConfigSet_c$ that denotes how many times the configuration $\binConfigSet_c$ occurs in the final solution. Let $M = |\allBinConfigs|$.
 The optimization goal of the ILP formulation is to maximize the sum of the integer variables $x_c$. We note that formulating the problem in this manner relies on the brute-force construction of the $M$ configurations discussed above. In the worst case $M = 2 ^ N$ and hence the number of feasible bin configurations grows exponentially with $N$. However, as we will discuss later, only a few bin configurations need to be generated in order to obtain an approximate upper bound on the solution to the bin covering problem. These configurations must be generated while adhering to the FlexRay standard, as will be discussed in Section \ref{sec:kp}.
	
	%Between the optimal solution of the bin covering problem, $O_{BC} ^ L$, and the optimal solution of the ILP formulation, $O_{ILP}^L$, we have the following relation:
	%\begin{equation}
	%	O_{BC}^L \leq O_{ILP}^L
	%\end{equation}
	
	The ILP problem is formulated below:
	\begin{equation}
	\begin{array}{lll}
		\mbox{maximize:}   & \sum\limits_{c = 1} ^ {M} x_c               			& \\
		\mbox{subject to:} & \sum\limits_{c = 1} ^ {M} q_{n, c} x_c \leq k_n^l, & \forall n \in \{1, 2, \cdots, N\}\\
		 	                 & x_c \in \mathbb{N}_{+}, 													 						& \forall c \in \{1, 2, \cdots, M\}
	\end{array}  	                 
		\label{eq:LPBC}
	\end{equation}
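For small instances, the configuration set $\allBinConfigs$ and the ILP of Equation \ref{eq:LPBC} can be evaluated directly by exhaustive enumeration. The sketch below uses hypothetical item sizes $W_n - 1$, counts $k_n^l$, and capacity $\phi_m$; it is meant only to illustrate the structure of the formulation, since this brute-force approach is exactly what our technique avoids:

```python
from itertools import combinations, product

sizes = [3, 2, 2]   # hypothetical item sizes W_n - 1
copies = [2, 2, 2]  # hypothetical instance counts k_n^l
phi = 4             # hypothetical minimum bin capacity phi_m
N = len(sizes)

# All bin configurations C_c: subsets of item types that cover a bin.
configs = [set(s)
           for r in range(1, N + 1)
           for s in combinations(range(N), r)
           if sum(sizes[i] for i in s) >= phi]

# Brute-force the ILP: maximize sum(x_c) subject to every item type n
# being used at most copies[n] times over all chosen configurations.
best = 0
ranges = [range(min(copies[i] for i in c) + 1) for c in configs]
for x in product(*ranges):
    usage = [sum(x[j] for j, c in enumerate(configs) if n in c)
             for n in range(N)]
    if all(usage[n] <= copies[n] for n in range(N)):
        best = max(best, sum(x))
```

With these values, four configurations cover the bin and at most three bins can be covered in total.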
	
	%The second is about solving an ILP problem which is known that in general has exponential complexity. This issue can be solved by removing the integrality constraint and obtaining an LP relaxation. It is known from the theory of linear programming that for a maximization problem if one removes the integrality constraints will always get a solution which is an upper bound for the integral solution of the same problem. Therefore between the optimal solution of the ILP problem, $O_{ILP}^L$, and the optimal solution, $O_{LP}^L$, of the LP relaxation of the ILP problem \ref{eq:LPBC} we have the following relation:
	%\begin{equation}
	%	O_{ILP}^L \leq O_{LP}^L
	%\end{equation}
	\begin{figure*}%[t]
		\centering
		\vspace{-8mm}
		\hspace{-10mm}
			\includegraphics[width=.9\textwidth]{cycles}
		%\vspace{-12cm}
		
		\vspace{-19cm}
		\caption{The knapsack problem has a constraint corresponding to each higher priority message.}
		\label{fig:cycles}
		\vspace{-5mm}
	\end{figure*}
	
 Recall that in our setting the bin covering problem manifests itself as a decision problem: in each iteration of Algorithm \ref{algo:withoutSM}, we check whether $l$ bins can be covered before moving to the next iteration, where we check whether $l+1$ bins can be covered. We also relax the integrality constraint on the variables $x_c$ to obtain a linear program. Hence, we can rewrite the problem in Equation \ref{eq:LPBC} as follows:
	\begin{equation}
		\lambda^* = \min \left\{\lambda \bigg| 
			\begin{array}{l}
		 		\sum\limits_{c = 1} ^ {M} \dfrac{q_{n, c}}{k_n^l}x_c \leq \lambda, \forall n \in \{1, 2, \cdots, N\} \\
		 		\sum\limits_{c = 1} ^ {M} x_c = l
			\end{array} \right\}
		\label{eq:ETLPBC}
	\end{equation} %Following this, we re-write Equation \ref{eq:LPBC} in the following form. 
	
%	\begin{equation}
%		\lambda^* = \min\left\{\lambda \bigg| \sum\limits_{c = 1} ^ {M} \dfrac{q_{n, c}}{k_n^l}x_c \leq \lambda, \forall n \in \{1, 2, \cdots, N\} \right\}
%		\label{eq:TLPBC}
%	\end{equation}
% In order to deal with the first issue we will use a price directive decomposition method to get an approximative solution for the LP relaxation of the ILP problem \ref{eq:LPBC}. 
	Note that an optimal solution of this problem with $\lambda^* = 1$ is an optimal solution of the LP relaxation obtained from Equation \ref{eq:LPBC}. As noted in \cite{JansenTC2003}, this new problem (Equation \ref{eq:ETLPBC}) is a \textit{convex block-angular resource sharing problem} and the price directive decomposition method \cite{JansenTCS02} can be used to solve it with any given precision $\epsilon > 0$. In fact, the algorithm presented in \cite{JansenTCS02} has been proved to return a solution within the bound $(1 + \epsilon)\lambda^*$. In this paper, we deploy the same algorithm to solve our bin covering problem. In the following, we provide a brief description of how this algorithm \cite{JansenTCS02} works.
	
	%After solving the problem \ref{eq:ETLPBC}, depending on the value of $\lambda$, we have the following relations with the optimum value of the bin covering problem, $O_{BC}^L$:
	%\begin{equation} \left\{
	%	\begin{array}{ll}
	%		O_{BC}^L \geq L & \mbox{, if } 1 \leq \lambda \leq 1 + \epsilon \\
	%		O_{BC}^L < L 		& \mbox{, otherwise}
	%	\end{array}
	%\end{equation}
\noindent
\textbf{Background}: Let us define by $\mathcal{X}$ the set of all possible vectors such that $\mathcal{X} = \left\{\left(x_1, x_2, \cdots, x_M \right) \in \mathcal{R}_{+} ^ {M} \bigg| \sum\limits_{c = 1} ^ {M} x_c = l \right\}$. Note that $\mathcal{X}$ is a simplex by construction. We introduce the following notations for a vector $X = \left(x_1, x_2, \cdots, x_M \right) \in \mathcal{X}$:
	\begin{equation}
		f_n(X) = \sum\limits_{c = 1} ^ {M} \dfrac{q_{n,c}}{k_n^l}x_c \mbox{ and }
		\lambda(X) = \max\limits_{n = 1} ^ {N} f_n(X)
	\end{equation}
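These quantities are straightforward to evaluate; a minimal sketch follows, where the coupling matrix $q$, counts $k$, and point $X$ are hypothetical example data:

```python
def f(n, X, q, k):
    """f_n(X) = sum over c of q[n][c] / k[n] * x_c."""
    return sum(q[n][c] / k[n] * X[c] for c in range(len(X)))

def lam(X, q, k):
    """lambda(X) = max over n of f_n(X)."""
    return max(f(n, X, q, k) for n in range(len(k)))
```

For instance, with $q = [[1,0],[1,1]]$, $k = [2,2]$, and $X = (1, 1)$ (a point on the simplex for $l = 2$), we get $f_1(X) = 0.5$, $f_2(X) = 1.0$, and $\lambda(X) = 1.0$.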
	
Let $\mathcal{F} = \left(f_1, f_2, \cdots, f_N \right)$. The algorithm in \cite{JansenTCS02}  computes a solution $X \in \mathcal{X}$ such that:
	\begin{equation} 
		\mbox{\textbf{Primal}$_\epsilon$: } \mathcal{F}(X) \leq (1 + \epsilon) \lambda^* \times I
	\end{equation}
	where $I = (1, 1, \cdots, 1)$ (the unit vector with $N$ elements). The approach is based on the Lagrangian duality relation:
	\begin{equation}
		\lambda^* = \min_{X \in \mathcal{X}} \max_{P \in \mathcal{P}} P^T \mathcal{F}(X) = \max_{P \in \mathcal{P}} \min_{X \in \mathcal{X}} P^T \mathcal{F}(X)
	\end{equation}
	where $\mathcal{P} = \left\{ \left(p_1, p_2, \cdots, p_N \right) \in \mathcal{R}_{+} ^ {N} \bigg| \sum\limits_{n = 1} ^ {N} p_n = 1\right\}$ is the unit simplex. Denoting $\Lambda(P) = \min\limits_{X \in \mathcal{X}} P^T \mathcal{F}(X)$, a pair $X \in \mathcal{X}$ and $P \in \mathcal{P}$ is optimal if and only if $\lambda(X) = \Lambda(P)$. The corresponding $\epsilon$-approximation dual problem has the form:
	\begin{equation}
		\mbox{\textbf{Dual}$_\epsilon$: } \Lambda(P) \geq (1 - \epsilon) \lambda^*
	\end{equation}
	The price-directive decomposition method is an iterative strategy that solves the primal problem $\mbox{\textbf{Primal}$_\epsilon$}$ and its dual problem $\mbox{\textbf{Dual}$_\epsilon$}$ by computing a sequence of pairs $X$ and $P$ that approximate the exact solution from above and below, respectively. In \cite{GrigoriadisOR1996} it has been shown that the primal and the dual problems can be solved with a $t$-approximate block solver, i.e., one that solves the block problem with a given tolerance $t$.
	The block problem is:
	\begin{equation}
		\min\limits_{Y \in \mathcal{X}} P^T \mathcal{F}(Y)
	\end{equation}
	
\noindent
{\bf Our problem:} In our case we choose $t = \epsilon$ and the block problem has the following form:
	\begin{equation}
		\min\limits_{Y \in \mathcal{X}} \sum\limits_{c = 1} ^ {M} y_c \times \left( \sum\limits_{n = 1} ^ {N} \dfrac{q_{n,c}}{k_n^l}p_n \right)
		\label{eq:blkPb}
	\end{equation}
	
We solve the above block problem in our context as described in the following.
\subsection{Step II}
\label{sec:kp}
The minimization problem obtained above in Equation \ref{eq:blkPb} still contains a large number of variables because it assumes that the $M$ feasible bin configurations have been generated in a brute-force fashion. However, we overcome this by transforming the problem into a knapsack problem which, as mentioned before, is the final step in our algorithm.
	The set $\mathcal{X}$ is a simplex. From linear programming theory, we know that once the integrality constraints are removed, a linear minimization problem over such a set always attains its optimum at one of the corners of the simplex. At a corner of the simplex $\mathcal{X}$, exactly one variable $x_c = l$ and all the others are equal to 0. Therefore the minimization problem in Equation \ref{eq:blkPb} transforms into:
	\begin{equation}
		l \times \min\limits_{c = 1} ^ {M} \left( \sum\limits_{n = 1} ^ {N} \dfrac{q_{n, c}}{k_n^l}p_n \right)
		%\limits_{Y \in \mathcal{X}} l
	\end{equation}
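Since the minimum of a linear objective over the simplex is attained at a corner with a single $x_c = l$, the block problem reduces to scanning the $M$ column costs; a minimal sketch with hypothetical data:

```python
def column_cost(c, q, k, p):
    """Price-weighted cost of configuration c: sum_n q[n][c] / k[n] * p[n]."""
    return sum(q[n][c] / k[n] * p[n] for n in range(len(k)))

def block_min(l, q, k, p, M):
    """Minimum of the block problem over the simplex sum_c x_c = l:
    attained at the corner placing all weight l on the cheapest column."""
    return l * min(column_cost(c, q, k, p) for c in range(M))
```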
	This problem has $M$ variables, where $M$ can be $2^N$ in the worst case. Hence, we now approximate the problem using the algorithm by Jansen and Zhang \cite{JansenTCS02}. By doing this we decrease the number of variables to $N$, i.e., a polynomial number. The algorithm takes as input an $\epsilon$ value and generates a number of columns that leads to a solution of the desired quality. A higher value of $\epsilon$ implies that fewer columns have to be generated, leading to faster running times; on the other hand, the pessimism in the solution increases. The opposite argument holds when $\epsilon$ is assigned small values. Since our algorithm is based on this approach, a designer using our scheme can also choose an appropriate value of $\epsilon$ according to her/his desired quality.
	 
	This problem is a variation of the knapsack problem with additional constraints, in which the variables $q_{n, c}$ are now the optimization variables. Note that we now have only $N$ optimization variables compared with the initial number $M$. What we have to solve is:
	\begin{equation}
	\begin{array}{ll}
		\mbox{minimize:} 		& \sum\limits_{n = 1} ^ {N} \dfrac{q_{n, c}}{k_n^l}p_n \\
		\mbox{such that: }	& \sum\limits_{n = 1} ^ {N} q_{n, c} (W_n - 1) \geq \phi_{m} \\
												& \sum\limits_{i = 1} ^ {n - 1} q_{i, c} (W_i - 1) \leq \phi_{m_n} - 1 + (1 - q_{n,c}) R, \\ \forall n \in \{2, 3, \cdots, N\} \\
												& q_{n,c} \in \{0, 1\}
	\end{array}
	\label{eq:blockProblem}
	\end{equation}
This is a multiple-constraint knapsack problem whose constraints correspond to FlexRay specific details. As mentioned in the beginning of this section, the problem of computing the worst-case delay of a message $m$ is solved by a logical transformation into the bin covering problem. The core idea is that cycles can be logically transformed into bins. The constraints regarding how items may be packed into bins are given by the FlexRay specification. In order to make the transformation reversible we have to take all such constraints into consideration. The first constraint ensures that the message under analysis will be displaced. The second constraint represents a set of $N - 1$ constraints which ensure that each message $m_h \in hp(m)$ will not be displaced by its own set of higher priority messages ($R$ represents a large enough constant).
The last constraint ensures that at most one instance of a given message will be transmitted in one cycle.
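For tiny instances, Equation \ref{eq:blockProblem} can be solved by exhaustive search. The sketch below uses hypothetical sizes, counts, prices, and $\phi$ values, and folds the big-$R$ constraint into a guard that is active only when $q_{n,c} = 1$; it illustrates the feasibility conditions rather than the pseudo-polynomial dynamic program we actually use:

```python
from itertools import product

def solve_block_knapsack(sizes, k, p, phi_m, phi_hp):
    """Brute-force the multiple-constraint knapsack: choose q in {0,1}^N
    minimizing sum_n q_n * p[n] / k[n] such that the selected items
    displace the message under analysis (total size >= phi_m), while no
    selected message m_n is itself displaced beyond phi_hp[n] by the
    higher priority items packed before it.  sizes[n] plays the role of
    W_n - 1; phi_hp[0] is unused.  Exponential in N -- tiny cases only."""
    N = len(sizes)
    best, best_q = None, None
    for q in product((0, 1), repeat=N):
        # First constraint: the bin must be covered.
        if sum(q[n] * sizes[n] for n in range(N)) < phi_m:
            continue
        # Second set of constraints (big-R form), active only when q_n = 1.
        if any(q[n] and sum(q[i] * sizes[i] for i in range(n)) > phi_hp[n] - 1
               for n in range(1, N)):
            continue
        cost = sum(q[n] * p[n] / k[n] for n in range(N))
        if best is None or cost < best:
            best, best_q = cost, q
    return best, best_q
```

With sizes $[3,2,2]$, counts $[2,2,2]$, uniform prices, $\phi_m = 4$, and $\phi$ bounds $[\,\cdot\,,5,3]$ for the higher priority messages, the selection $\{m_2, m_3\}$ is optimal, while e.g. $\{m_1, m_3\}$ is infeasible because $m_3$ would be displaced beyond $\phi_{m_3}$.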

	We explain the significance of the second set of constraints with the help of an example. In Figure \ref{fig:cycles} we show 4 messages and are interested in the worst-case delay of message $m_4$. The figure shows two scenarios. The first case presents a bin covered with copies of $m_1, m_2$ and $m_3$. This bin configuration cannot be transformed back into a FlexRay cycle because message $m_3$ is displaced by messages $m_1$ and $m_2$, which violates the FlexRay constraints. Note that $m_3$ is not the message under analysis, but it is still important to encapsulate such FlexRay constraints. On the other hand, the configuration which corresponds to case 2 is a reversible one because messages $m_2$ and $m_3$ are not displaced beyond $\phi_{m_2}$ and $\phi_{m_3}$, respectively.

	%The solution of the previous problem represents the $m^{th}$ column in the matrix of the big ILP. In this way is not required to have the set $\allBinConfigs$ of all possible bin configuration from the very beginning. Necessary configurations will be generated one by one as needed until a solution with the desired precision is obtained. We use dynamic programming ? to solve the knapsack problem optimally. 
\noindent
{\bf Discussion:}	
	The overall complexity of Algorithm \ref{algo:withoutSM} is:
	\begin{equation}
		\mathcal{O} \left( N \times \left(\dfrac{1}{\epsilon ^ 2} + \log{N} \right) \times \Omega(N) \right)
	\end{equation}
	where $\Omega(N)$ is the complexity of solving the above knapsack problem with $N$ elements, where the knapsack has a capacity equal to $\phi_{m}$. Note that this multiple-constraint knapsack problem, which appears at the heart of the overall problem, is solved optimally by a dynamic programming algorithm that runs in pseudo-polynomial time \cite{KPBook}. Note also that we can bound the number of such knapsack problems to be solved by $\mathcal{O} \left(N \times \left(\dfrac{1}{\epsilon ^ 2}+\log{N} \right)\right)$.
	
	As an implementation detail, we note that for each simplex $\sum\limits_{c = 1} ^ {M} x_c = l$ a column reduction process is conducted because each simplex has exactly $N$ linear constraints. Therefore the maximum number of non-zero components (the maximum number of variables $x_c^l \neq 0$) cannot exceed $N$. A solution over the simplex can be transformed into one with at most $N$ non-zero components using the \textit{Singular Value Decomposition (SVD)} algorithm \cite{trefethen1997numerical}.
	
	Finally, note that our analysis provides a safe result in the sense that the actual worst-case delay of a message $m_i$ will always be smaller than or equal to the result provided by our analysis. Broadly, allowing $x_c$ to take non-integral values and, thereafter, not generating all the bin configurations are the only approximations in our approach. The first, i.e., the LP relaxation that we perform, always leads to pessimistic results: it is known from the theory of linear programming that, for a maximization problem, removing the integrality constraints always yields a solution which is an upper bound on the integral solution of the same problem \cite{Bazaraa2004}. Secondly, even if we generate only a few bin configurations, we follow the price decomposition method, which guarantees upper bounds as proved previously \cite{JansenTCS02}.
	
	% % In this paper, we have not investigated the feasibility of solving our algorithm as a dynamic programming algorithm. Rather, we invoke a ILP-solver, viz. CPLEX to solve the problem.  Modern day ILP sovlers like CPLEX can solve moderately sized knapsack-like problems extremely fast as shown in many studies \cite{}. The contribution of our paper is not  how to solve this knapsack like problem in an efficient manner. Rather it is the fact that we can bound the number of such knapsack problems with $\mathcal{O} \left(N \times \left(\dfrac{1}{\epsilon ^ 2}+\log{N} \right)\right)$. This is a direct consequence of the fact that we used the price decomposition method and is in contrast to previously known bin covering algorithm like the one proposed by Gilmore and Gomore where a bound on the knapsack problems to be solved cannot be given.\\

%\noindent
%{\bf Guaranteed upper bounds:}
%In this section we will briefly explain why the analysis proposed in the previous sections will provide a safe result from the point of view that the worst case delay of a message $m_i$ will always be smaller or equal compared with the results provided by our analysis. In order to check if $l$ cycles can be filled with instances of message from the set $hp(m_i)$ the bin covering problem is transformed into a decision problem where instead of maximizing the number of bins which can be filled we will put the question if the problem has a solution which can cover $l$ bins. The decisions problem are \ref{} for the case when slot multiplexing is not used and \ref{} when slot multiplexing is allowed. Under the assumptions that it is possible to build the set of all possible bin configurations and the integrality constraints regarding the $x_m$ variables are not removed the solutions provided by solving the ILP formulations and the optimal implementations of the bin covering are equal.
	
	% According to the FlexRay specifications the length of the minislot $\delta$ can take values in the interval $\delta \in [2, 63]$ macroticks, where one macrotick usually has a length of 6 $\mu s$. Therefore a typical value for $\delta = 12 \mu s$. The maximum length of a FlexRay frame, including the header and the footer, is 2096 bits which corresponds to a communication time of 209.6 $\mu s$ when the bus frequency is 10 MHz. Therefore the lengths of the FlexRay messages in minislots will not exceed the maximum value of 18 minislots. 
	
	% In general this value can be very pessimistic even when running the price directive decomposition method for a very small $\epsilon$. In what follows we will explain why.
	
	%When transforming the problem of checking if the length of the busy window can be at least equal with $l$ into the bin covering problem we loose the relation of how instances are produced with respect to the cycles. In other words we may end in a situation where when solving the bin covering problem we pack instances from the future into cycles from the past. At the end of each step of the Algorithm \ref{alg:simpleAlg} as an output we have a matrix. The connection between the elements of the matrix, the variables $x_c$ assigned to the columns of the matrix and the final value of $\lambda$ is presented bellow:
	%\begin{equation}
	%	\sum\limits_{c = 1} ^ {M} \dfrac{q_{n, c}}{k_n^l}x_c \leq \lambda \leq \left(1 + \epsilon \right) \lambda^*
	%\end{equation}
	%The previous equation is equivalent with:
	%\begin{equation}
	%	\sum\limits_{c = 1} ^ {M} q_{n, c}x_c \leq \left(1 + \epsilon \right) \lambda^* \times k_n^l
	%\end{equation}
	%Since at each step we have the condition $\sum\limits_{c = 1} ^ {M} x_c = l$ the first part of the previous inequality represents the distribution of the instances of message $m_n$ over the entire set of $l$ consecutive cycles. In other words, we have
	%\begin{equation}
	 %\sum\limits_{l = 1} ^ {l} a_{l, n} \leq \left(1 + \epsilon \right) \lambda^* \times k_n^l
	%\end{equation}