\subsection{Non-expanding Windows}
\label{analysisofsolutions:nonexpandingwindows}
Possibly the simplest approach to the UEP design problem is termed \ac{NW}. The method splits the source block, e.g. a full GOP, into a number of independent (non-overlapping) importance layers that are encoded and transmitted separately. By defining a probabilistic layer decision rule, $\boldsymbol \Gamma$, each importance layer is transmitted with a different overhead. Since $\boldsymbol \Gamma$ is a probability distribution, it must fulfil the rules in Equation \eqref{eq:separatelayers:gammarules}.

\begin{align}
&\sum_{i=1}^{n} \boldsymbol \Gamma_i = 1 \quad \land \quad 0 \leq \boldsymbol \Gamma_i \leq 1 \label{eq:separatelayers:gammarules}
\intertext{Where:}
&\text{$i$ denotes the layer number.} \notag \\
&\text{$n$ denotes the total number of layers.} \notag \\
&\text{$\boldsymbol \Gamma_i$ is the layer decision probability.} \notag \\ \notag
\end{align}

By controlling the overhead of each layer, the probability of the nodes decoding the layers can be shaped as the designer desires. The first layer is considered the most important, which is why linear combinations from this layer should be generated with a $\boldsymbol \Gamma$ such that its decoding probability is higher than that of subsequent layers. Figure \ref{fig:separatelayers:packetallocation} shows how the source block is divided into the non-overlapping importance layers.
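As an illustrative sketch (not part of the analysis), the layer decision rule can be realised as a categorical draw per coded packet. The Python snippet below assumes only that $\boldsymbol \Gamma$ fulfils Equation \eqref{eq:separatelayers:gammarules}; the function name \texttt{choose\_layer} is hypothetical.

```python
import random

def choose_layer(gamma, rng):
    """Draw a 1-based layer index according to the layer decision
    probabilities gamma = [G1, ..., Gn], which must sum to 1."""
    u = rng.random()
    acc = 0.0
    for i, g in enumerate(gamma, start=1):
        acc += g
        if u < acc:
            return i
    return len(gamma)  # guard against floating-point round-off

# Example: with G1 = 0.3, roughly 30% of the coded packets are
# generated from layer 1 only.
rng = random.Random(1)
draws = [choose_layer([0.3, 0.7], rng) for _ in range(10_000)]
share_l1 = draws.count(1) / len(draws)
```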

\begin{figure}[h!]
\centering
	\begin{tikzpicture}[>=stealth',shorten >=1pt,auto, semithick]
	\tikzstyle{every state}=[fill=white,text=black,, minimum height=0.7cm, minimum width=3cm, node distance=3cm]
	
		\node[state, rectangle, fill=black!30] (L11) {Layer 1 (L1)};
		\node[state, rectangle, right of=L11](L12){};
		\node[state, rectangle, right of=L12](L13){};
		\node[state, rectangle, right of=L13](L14){};

		\node[state, rectangle, below of=L13,node distance=0.7cm](L23){};
		\node[state, rectangle, right of=L23,node distance=3cm](L24){};
		\node[state, rectangle, below of=L11,node distance=0.7cm](L21){};
		
		\node[state, rectangle, below of=L11,node distance=1.4cm,yshift=-0cm](L31){};
		\node[state, rectangle, right of=L31,node distance=3cm](L32){};
		\node[state, rectangle, right of=L32,node distance=3cm](L33){$\hdots$};
		\node[state, rectangle, right of=L33,node distance=3cm](L34){};

		\node[state, rectangle, left of=L23, fill=black!30] (L22) {Layer 2 (L2)};
		\node[state, rectangle, below of=L34, fill=black!30,node distance=0.7cm] (L44) {Layer \textit{i} (Li)};
		\node[state, rectangle, below of=L33,node distance=0.7cm](L43){};
		\node[state, rectangle, below of=L32,node distance=0.7cm](L42){};
		\node[state, rectangle, below of=L31,node distance=0.7cm](L41){};
		
		\draw [decorate,thick,decoration={brace, amplitude=10pt},yshift=20pt] (L11.north west) -- (L12.north west) node[midway,above,yshift=7pt]{$k_1$};		
		\draw [decorate,thick,decoration={brace, amplitude=10pt},yshift=20pt] (L12.north west) -- (L13.north west) node[midway,above,yshift=7pt]{$k_2$};
		\draw [decorate,thick,decoration={brace, amplitude=10pt},yshift=40pt] (L13.north west) -- (L14.north west) node[midway,above,yshift=10pt]{$\mathbb{\hdots}$};
		\draw [decorate,thick,decoration={brace, amplitude=10pt},yshift=20pt] (L14.north west) -- (L14.north east) node[midway,above,yshift=7pt]{$k_i$};
		\draw [decorate,thick,decoration={brace, mirror, amplitude=10pt},yshift=-20pt] (L41.south west) -- (L44.south east) node[midway,below,yshift=-7pt]{Source block};
				
		\node[left of=L11,xshift=-1.5cm](){$\boldsymbol \Gamma_1$};
		\node[left of=L21,xshift=-1.5cm](){$\boldsymbol \Gamma_2$};
		\node[left of=L31,xshift=-1.5cm](){$\vdots$};
		\node[left of=L41,xshift=-1.5cm](){$\boldsymbol \Gamma_i$};
	\end{tikzpicture}
\caption{The division of a source block, e.g. a generation, into $i$ importance layers where $k_i$ denotes the length and $\boldsymbol \Gamma_i$ denotes the layer decision probability of the $i$'th layer.}
\label{fig:separatelayers:packetallocation}
\end{figure}

\subsubsection{Decoding Probability}
The decoding probabilities for \ac{NW} are now derived. The case of two layers, where \ac{L1} carries the important I-frame data and \ac{L2} carries the less important P-frame data, is considered. From Figure \ref{fig:separatelayers:packetallocation} it is seen that this scenario is completely described by the layer sizes $k_1$ and $k_2$, the layer decision distribution $\boldsymbol \Gamma$, and the field size $q$. For the analysis, the decoding probabilities of interest are: (I) only \ac{L1} and (II) both \ac{L1} and \ac{L2}. The reason for considering case (II) rather than \ac{L2} alone is that the P-frame data in \ac{L2} is unusable if the corresponding I-frame, stored in \ac{L1}, is missing. Two functions, defined in Equations \eqref{eq:special_func} and \eqref{eq:binom_func}, are used in the analysis of the decoding probabilities. Equation \eqref{eq:special_func} is implemented by transition matrices\footnote{Contrary to \cite{NC_EXACT_DECOD_PROB,UEP_RLC_MC} where the equivalent to $P_\text{M}$ is implemented using Gaussian coefficients.} as defined in Section \ref{eep:prob_of_decod_data}.

\begin{align}
p_{\text{r=i|a,b}}&=P_\text{M}(r=i|a,b)\label{eq:special_func}
\intertext{Where:}
p_{\text{r=i|a,b}}&\text{ is the probability of drawing a random matrix of dimension}\notag \\
&\text{ $b\times a$ over FF($q$) that has rank $i$.}\notag \\\notag
\end{align}
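As a cross-check of $P_\text{M}$, the rank distribution of a random matrix over FF($q$) can also be computed exactly from the standard counting formula for rank-$i$ matrices, instead of the transition-matrix implementation used in this analysis. A minimal Python sketch with exact rational arithmetic:

```python
from fractions import Fraction

def P_M(i, a, b, q):
    """Probability that a uniformly random b-by-a matrix over GF(q)
    has rank i, via the standard counting formula for the number of
    rank-i matrices."""
    if i < 0 or i > min(a, b):
        return Fraction(0)
    count = Fraction(1)  # number of b-by-a matrices of rank i
    for j in range(i):
        count *= Fraction((q**b - q**j) * (q**a - q**j), q**i - q**j)
    return count / q**(a * b)
```

For instance, a random $2\times 2$ binary matrix is invertible with probability $3/8$, and the probabilities over all ranks sum to one.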

Furthermore, the received packets are distributed to either \ac{L1} or \ac{L2}. For $N$ received packets, the number of packets, $n$, in \ac{L1} follows a binomial distribution. In Equation \eqref{eq:binom_func}, the notation used for the analysis is given.

\begin{align}
p_{\text{binom}}&=\mathbb{B}(n|N,\boldsymbol \Gamma_1)\label{eq:binom_func}
\intertext{Where:}
p_{\text{binom}}&\text{ is the binomial probability that $n$ out of $N$ received packets are}\notag\\
 &\text{ \ac{L1} packets; the remaining $N-n$ packets are \ac{L2} packets.}\notag \\ \notag
\end{align}

The probability of decoding \ac{L1} is given in Equation \eqref{eq:nw_l1_anal}. Of a total number, $N$, of received packets, $n=\{0,1,2,\hdots,N\}$ may be \ac{L1} packets. The probability of \ac{L1} having full rank is non-zero when $n=\{k_1,k_1+1,\hdots,N\}$ packets are \ac{L1} packets. The weighted sum of the probabilities for all $n$ constitutes the total probability of decoding \ac{L1} when $N$ packets have been received.

\begin{align}
P_{\text{L1}}(N)&=\sum_{n=0}^{N}\mathbb{B}(n|N,\boldsymbol \Gamma_1)\g P_{\text{M}}(r_1=k_1|k_1,n) \label{eq:nw_l1_anal}
\intertext{Where:}
P_{\text{L1}}(N)&\text{ The probability of decoding \ac{L1} after $N$ received packets.}\notag\\ \notag
\end{align}

The probability of decoding both \ac{L1} and \ac{L2} is given in Equation \eqref{eq:nw_l2_anal}. For $N$ received packets, $n=\{0,1,2,\hdots,N\}$ may be \ac{L1} packets and the remaining $N-n$ are then \ac{L2} packets. The probability of both layers having full rank is non-zero when $n=\{k_1,k_1+1,\hdots,N-k_2\}$, i.e. there are at least $k_1$ packets for \ac{L1} and $k_2$ packets for \ac{L2}. The weighted sum of the probabilities for all $n$ constitutes the total probability of decoding \ac{L1} and \ac{L2} when $N$ packets have been received.

\begin{align}
P_{\text{L1L2}}(N)&=\sum_{n=0}^{N}\mathbb{B}(n|N,\boldsymbol \Gamma_1)\g P_{\text{M}}(r_1=k_1|k_1,n)\g P_{\text{M}}(r_2=k_2|k_2,N-n)\label{eq:nw_l2_anal}
\intertext{Where:}
P_{\text{L1L2}}(N)&\text{ The probability of decoding \ac{L1} and \ac{L2} after $N$ received packets.}\notag\\ \notag
\end{align}
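Equations \eqref{eq:nw_l1_anal} and \eqref{eq:nw_l2_anal} can be evaluated directly. The Python sketch below uses the two-layer example of this section ($k_1=32$, $k_2=64$, $q=2$) with $\boldsymbol \Gamma_1=0.3$, and substitutes a closed-form full-rank probability for the transition-matrix $P_\text{M}$; all function names are illustrative.

```python
from math import comb

Q, K1, K2, GAMMA1 = 2, 32, 64, 0.3  # field size, layer sizes, assumed Gamma_1

def full_rank_prob(k, n, q):
    """P_M(r=k | k, n): probability that n i.i.d. uniform coding
    vectors of length k over GF(q) have full rank k."""
    if n < k:
        return 0.0
    p = 1.0
    for j in range(k):
        p *= 1.0 - float(q) ** (j - n)
    return p

def binom_pmf(n, N, g):
    """B(n | N, Gamma_1): probability that n of N packets are L1 packets."""
    return comb(N, n) * g**n * (1.0 - g)**(N - n)

def P_L1(N):
    """Probability of decoding L1 after N received packets."""
    return sum(binom_pmf(n, N, GAMMA1) * full_rank_prob(K1, n, Q)
               for n in range(N + 1))

def P_L1L2(N):
    """Probability of decoding both L1 and L2 after N received packets."""
    return sum(binom_pmf(n, N, GAMMA1)
               * full_rank_prob(K1, n, Q)
               * full_rank_prob(K2, N - n, Q)
               for n in range(N + 1))
```

Note that $P_{\text{L1L2}}(N)$ is exactly zero for $N<k_1+k_2$, since no split $n$ can then satisfy both $n\geq k_1$ and $N-n\geq k_2$.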

\input{analysis_of_solutions/eval_sim_new.tex}

\subsubsection{Overhead Considerations}

The decoding probabilities for \ac{NW} are shown in Figure \ref{fig:new_decoding_plots} for three constructed examples of $\boldsymbol \Gamma$ values. The generation size in the example is 96 symbols, where \ac{L1} consists of 32 symbols and \ac{L2} of 64, according to the I- and P-frame distribution found in Section \ref{video_data_model} on Page \pageref{video_data_model}. Figure \ref{fig:new_decoding_plots} shows that \ac{L1} dictates the decoding probability of the total generation when $\boldsymbol \Gamma$ is dimensioned such that the probabilities overlap. In the case of Figure \ref{fig:new_analytic_50_50}, where the curves do not overlap, the curves tend to look alike because \ac{L1} is decoded with near certainty before the number of received packets reaches the generation size, making the curves effectively independent; i.e. the term $P_{\text{M}}(r_1=k_1|k_1,n)$ is 1 before $k_2$ packets have been received for \ac{L2}.

\begin{figure} \centering
\subfloat[Decoding probabilities for \ac{NW} \ac{UEP} with two layers and $\boldsymbol \Gamma_1=0.3$, $\boldsymbol \Gamma_2=0.7$ and varying field size.]{\label{fig:new_analytic_30_70}\includegraphics[width=1\textwidth]{figs/uep_new_analytic_g1_30_g2_70.eps}}\\
\subfloat[Decoding probabilities for \ac{NW} \ac{UEP} with two layers and $\boldsymbol \Gamma_1=0.4$, $\boldsymbol \Gamma_2=0.6$ and varying field size.]{\label{fig:new_analytic_40_60}\includegraphics[width=1\textwidth]{figs/uep_new_analytic_g1_40_g2_60.eps}}\\
\subfloat[Decoding probabilities for \ac{NW} \ac{UEP} with two layers and $\boldsymbol \Gamma_1=0.5$, $\boldsymbol \Gamma_2=0.5$ and varying field size.]{\label{fig:new_analytic_50_50}\includegraphics[width=1\textwidth]{figs/uep_new_analytic_g1_50_g2_50.eps}}\\
\caption{Decoding probabilities for \ac{NW} \ac{UEP} with two layers, \ac{L1} with 32 symbols and \ac{L2} with 64, and varying layer decision probabilities and field sizes.}
\label{fig:new_decoding_plots}
\end{figure}
\clearpage



The expected number of packets before the source data can be decoded can be used as a single-value performance measure. The expected value can be calculated from a given decoding probability curve as seen in Equations \eqref{eq:exp_l1_nw} and \eqref{eq:exp_l1l2_nw}. The following counting principle is applied to deduce the equations: at any given number of received packets, the probability of needing one more packet is the complementary probability of being able to decode. This is exact if the sum is continued to infinity, but here the calculation is limited to the available data sets and is therefore approximate.

\begin{align}
&\mathrm{E}[\text{Pkts$_{\text{L1}}$}]=\sum_{i=0}^{\infty}(1-P_{\text{L1}_i})\label{eq:exp_l1_nw}\\
&\mathrm{E}[\text{Pkts$_{\text{L1L2}}$}]=\sum_{i=0}^{\infty}(1-P_{\text{L1L2}_i})\label{eq:exp_l1l2_nw}
\intertext{Where:}
\mathrm{E}&\text{ is the expected number of packets before the source }\notag\\
 &\text{data of the respective layers can be decoded.}\notag
\end{align}
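The truncated sums can be sketched as follows in Python; as a self-contained check, the single-layer (\ac{EEP}) decoding curve is used, for which the expected overhead at $q=2$ is known to approach approximately $1.6$ packets as the generation grows. The function names are illustrative.

```python
def full_rank_prob(k, n, q):
    """Probability that n i.i.d. uniform coding vectors of length k
    over GF(q) have full rank k (single-layer decoding curve)."""
    if n < k:
        return 0.0
    p = 1.0
    for j in range(k):
        p *= 1.0 - float(q) ** (j - n)
    return p

def expected_packets(decode_prob, tol=1e-12):
    """E[Pkts] = sum_{i>=0} (1 - decode_prob(i)), truncated once the
    complementary probability falls below tol (mirroring the
    truncation to the available data sets described in the text)."""
    total, i = 0.0, 0
    while True:
        c = 1.0 - decode_prob(i)
        if c < tol:
            return total
        total += c
        i += 1

# Single layer of 32 symbols: expected packets = 32 + overhead.
e_q2 = expected_packets(lambda n: full_rank_prob(32, n, 2))
e_q256 = expected_packets(lambda n: full_rank_prob(32, n, 256))
```

For $q=2$ this yields roughly $33.6$ packets (overhead $\approx 1.6$), and for $q=2^8$ just above $32$, consistent with the larger field requiring fewer excess packets.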

In Table \ref{tab:approx_exp_nw_q2} the approximate expected numbers of packets are listed for the scenarios in Figure \ref{fig:new_decoding_plots}. For \ac{EEP} in Section \ref{subsec:codingoverhead}, geometric convergence was used to avoid summing to infinity. For this analysis of \ac{NW}, summing over the data sets in Figure \ref{fig:new_decoding_plots} suffices.

\begin{table}[h] \centering \small
\begin{tabular}{c| c| c| c  }
 & $\approx \text{E}[\text{Pkts}_{\text{L1}}]$ & $\approx \text{E}[\text{Pkts}_{\text{L1L2}}]$ & $\approx \text{O}[\text{Pkts}_{\text{L1L2}}]$ \\ \hline
$\boldsymbol \Gamma_1=0.3$, $q=2$ & 112 & 114 & 18.7 \\ \hline
$\boldsymbol \Gamma_1=0.4$, $q=2$ & 84  & 110.5 & 14.5 \\ \hline
$\boldsymbol \Gamma_1=0.5$, $q=2$ & 67.2 & 131.2 & 35.2  \\ \hline
$\boldsymbol \Gamma_1=0.3$, $q=2^8$ & 106.7 & 109.7 & 13.7 \\ \hline
$\boldsymbol \Gamma_1=0.4$, $q=2^8$ & 80 & 107.5 & 11.5  \\ \hline
$\boldsymbol \Gamma_1=0.5$, $q=2^8$ & 64 & 128 & 32 \\ 
\end{tabular}
\caption{Approximate expected packets before the source data can be decoded; O denotes the corresponding overhead, i.e. expected packets beyond the generation size of 96.}
\label{tab:approx_exp_nw_q2}
\end{table}







