\newpage
\subsection{Expanding Windows}
\label{analysisofsolutions:expandingwindows}
\ac{EW} is an \ac{UEP} method based on coding over increasing subsets of a source block. The division of the source block into importance layers is shown in Figure \ref{fig:expandingwindows:packetallocation}. The most important data is represented in all layers, and thereby in all encoded symbols. Which layers are sent is governed by a probability distribution denoted $\boldsymbol \Gamma$. The sizes of the individual layers depend on the source data, and $\boldsymbol \Gamma$ controls the amount of protection each importance layer is given. An analysis of the decoding probabilities and coding overhead is provided. \ac{EW} has earlier been proposed as an \ac{UEP} method in \citep{UEP_RLC_MC}, and the analysis uses some of the notation and principles introduced there.
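As a concrete illustration of the layer selection, each encoded symbol can be generated by first drawing the window it codes over according to $\boldsymbol \Gamma$. The following is a minimal Python sketch with assumed names (\texttt{pick\_window} is illustrative, not from any implementation used here); the two-layer distribution $\boldsymbol \Gamma = (0.3, 0.7)$ matches one of the configurations evaluated later.

```python
import random

def pick_window(gamma, rng=random):
    """Sample the window index i (1-based) with probability Gamma_i."""
    u, acc = rng.random(), 0.0
    for i, g in enumerate(gamma, start=1):
        acc += g
        if u < acc:
            return i
    return len(gamma)  # guard against floating-point round-off

# Two layers with Gamma = (0.3, 0.7): roughly 30% of the encoded symbols
# cover only the first window, i.e. the most important data.
random.seed(1)
draws = [pick_window([0.3, 0.7]) for _ in range(10_000)]
share_w1 = draws.count(1) / len(draws)
```

Symbols drawn for window $i$ are coded over layers $1$ through $i$, which is what gives the most important data its presence in every encoded symbol.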

\begin{figure}[h!]
\centering
	\begin{tikzpicture}[>=stealth',shorten >=1pt,auto, semithick]
	\tikzstyle{every state}=[fill=white,text=black, minimum height=0.7cm, minimum width=3cm, node distance=3cm]
		\node[state, rectangle, fill=black!30] (L11) {Layer 1 (L1)};
		\node[state, rectangle, right of=L11](L12){};
		\node[state, rectangle, right of=L12](L13){};
		\node[state, rectangle, right of=L13](L14){};

		\node[state, rectangle, below of=L13,node distance=0.7cm](L23){};
		\node[state, rectangle, right of=L23,node distance=3cm](L24){};

		\node[state, rectangle, below of=L11,node distance=1.4cm,yshift=-0cm](L31){$\hdots$};
		\node[state, rectangle, right of=L31,node distance=3cm](L32){$\hdots$};
		\node[state, rectangle, right of=L32,node distance=3cm](L33){$\hdots$};
		\node[state, rectangle, right of=L33,node distance=3cm](L34){};

		\node[state, rectangle,minimum width=6cm, left of=L23, fill=black!30,node distance=4.5cm] (L22) {Layer 2 (L2)};
		\node[state, rectangle, below of=L31, fill=black!30,node distance=0.7cm, minimum width=12cm, xshift=4.5cm] (L33) {Layer \textit{i} (Li)};		
		
		\draw [decorate,thick,decoration={brace, amplitude=10pt},yshift=20pt] (L11.north west) -- (L12.north west) node[midway,above,yshift=7pt]{$k_1$};		
		\draw [decorate,thick,decoration={brace, amplitude=10pt},yshift=20pt] (L12.north west) -- (L13.north west) node[midway,above,yshift=7pt]{$k_2$};
		\draw [decorate,thick,decoration={brace, amplitude=10pt},yshift=40pt] (L13.north west) -- (L14.north west) node[midway,above,yshift=10pt]{$\hdots$};
		\draw [decorate,thick,decoration={brace, amplitude=10pt},yshift=20pt] (L14.north west) -- (L14.north east) node[midway,above,yshift=7pt]{$k_i$};
		\draw [decorate,thick,decoration={brace, mirror, amplitude=10pt},yshift=-20pt] (L33.south west) -- (L33.south east) node[midway,below,yshift=-7pt]{Source block};
				
		\node[left of=L11,xshift=-1.5cm](){$\boldsymbol \Gamma_1$};
		\node[left of=L22,xshift=-3.0cm](){$\boldsymbol \Gamma_2$};
		\node[left of=L31,xshift=-1.5cm](){$\vdots$};
		\node[left of=L33,xshift=-6cm](){$\boldsymbol \Gamma_i$};
	\end{tikzpicture}
\caption{The division of a source block according to the \ac{EW} \ac{UEP} scheme.}
\label{fig:expandingwindows:packetallocation}
\end{figure}


The probability with which a symbol from a layer is sent is defined by the probabilistic layer decision rule, $\boldsymbol \Gamma$. The boundaries for $\boldsymbol \Gamma$ given in Equation \eqref{eq:separatelayers:gammarules} also apply to \ac{EW}. The subset size, $K_i$, is needed in the analysis of \ac{EW} and is defined in Equation \eqref{eq:subset_size_ew}.

\begin{align}
K_i&=\sum_{j=1}^{i}k_j\label{eq:subset_size_ew}
\end{align}
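In code, the window sizes are simply the cumulative sums of the layer sizes. A one-line Python sketch, using the example layer sizes $k_1=15$, $k_2=25$ from the reference scenario revisited later:

```python
from itertools import accumulate

k = [15, 25]             # layer sizes k_1, k_2 (example values)
K = list(accumulate(k))  # window sizes K_i = k_1 + ... + k_i
```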


\subsubsection{Decoding probability}
% Introduce notation
An analysis of the decoding probability for \ac{EW} is provided for the case of two layers. Two-layered \ac{EW} is completely described by the parameters $k_1, k_2, K_1, K_2, \boldsymbol \Gamma$, and $q$, as illustrated in Figure \ref{fig:expandingwindows:packetallocation}. An expression for the exact decoding probabilities of the two layers will be provided. The rank, or number of pivots, obtained through \ac{L1} packets is denoted $r_1$ and has the range $0\leq r_1\leq K_1$. Similarly, the number of pivots obtained through \ac{L2} packets is denoted $r_2$ and has the range $0\leq r_2\leq K_2$. The $P_{\text{M}}$ function, Equation \eqref{eq:special_func}, used for the \ac{NW} analysis also applies in this analysis.

% Introduce L2 probabilities (easiest to start with)
The decoding probability for \ac{L2} can be calculated using Equation \eqref{eq:ew_l2_anal}. \ac{L1} can contribute a rank $r_1$ of up to $K_1$; the remaining pivots must come from \ac{L2}. Equation \eqref{eq:ew_l2_anal} assumes that the pivots from \ac{L1} can be used to eliminate $r_1$ columns in \ac{L2}. Consequently, for any number of received \ac{L2} packets, the corresponding entries in these columns do not matter.
 
\begin{align}
P_{\text{L2}}(N)&=\sum_{n=0}^{N}\quad\underbrace{\mathbb{B}(n|N,\boldsymbol \Gamma)}_{(a)}\g \underbrace{\sum_{i=0}^{K_1}P_{\text{M}}(r_1=i|K_1,n)\g P_{\text{M}}(r_2=K_2-i|K_2-i,N-n)}_{(b)} \label{eq:ew_l2_anal}
\intertext{Where:}
(a)&\text{ The binomial probability that $n$ out of $N$ received packets are in}\notag\\
   &\text{ \ac{L1}. The remaining $N-n$ are in \ac{L2}. Depends on $N$ and $\boldsymbol \Gamma$.}\notag\\
(b)&\text{ The probability that \ac{L1} and \ac{L2} combined reach rank}\notag \\
   &\text{ $K_2$, given that \ac{L1} has $n$ packets and \ac{L2} has $N-n$.}\notag \\
   P_{\text{L2}}(N)&\text{ The probability of decoding \ac{L2}.}\notag 
\end{align}

% Introduce L1 probabilities (easier after L2 has been described)
There are three outcomes in which the \ac{L1} data can be decoded. Firstly, if $r_1\rightarrow K_1$. Secondly, if the combined rank of \ac{L1} and \ac{L2} increases to $K_2$, then \ac{L1} is also decoded; this is denoted $r_1+r_2\rightarrow K_2$. Thirdly, it is theoretically possible that an \ac{L2} packet contains only \ac{L1} data, thus making it possible to increase the rank of \ac{L1} without necessarily increasing the combined rank of \ac{L1} and \ac{L2} to $K_2$. The third possibility is neglected in the analysis, as its probability tends to zero for $k_2\gg k_1$. The probabilities of the first and second outcomes are accumulated in Equation \eqref{eq:ew_l1_anal}.

\hspace*{-1.7cm}\vbox{\begin{align}
P_{\text{L1}}(N)=\sum_{n=0}^{N}&\quad \underbrace{\mathbb{B}(n|N,\boldsymbol \Gamma)}_{(a)}\g\bigg(\underbrace{P_{\text{M}}(r_1=K_1|K_1,n)}_{(b)}+\underbrace{\sum_{i=0}^{K_1-1}P_{\text{M}}(r_1=i|K_1,n)\g P_{\text{M}}(r_2=K_2-i|K_2-i,N-n)}_{(c)} \bigg) \label{eq:ew_l1_anal}
\intertext{\hspace*{1cm}Where:}
(a)&\text{ The probability that $n$ out of $N$ received packets are in \ac{L1}.}\notag\\
   &\text{ The remaining $N-n$ packets are in \ac{L2}.}\notag\\
(b)&\text{ The probability that \ac{L1} reaches rank $K_1$ given $n$ packets.}\notag\\
(c)&\text{ The probability that \ac{L1} and \ac{L2} combined reach rank}\notag \\
   &\text{ $K_2$, given that \ac{L1} has $n$ packets and \ac{L2} has $N-n$.}\notag\\
   &\text{ The case $i=K_1$ is not counted, since it is included in (b).} \notag \\
P_{\text{L1}}(N)&\text{ The probability of decoding \ac{L1} after $N$ received packets.}\notag
\end{align} }
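Equations \eqref{eq:ew_l1_anal} and \eqref{eq:ew_l2_anal} can be evaluated numerically. The Python sketch below assumes that $P_{\text{M}}$ is the rank distribution of a uniformly random matrix over a finite field of size $q$, computed with the standard counting formula; the names (\texttt{rank\_prob}, \texttt{p\_l1}, \texttt{p\_l2}) are illustrative, and exact rational arithmetic is used to avoid floating-point issues:

```python
from fractions import Fraction
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def rank_prob(rows, cols, r, q):
    """P_M: probability that a uniformly random rows x cols matrix
    over a finite field of size q has rank exactly r."""
    if r < 0 or r > min(rows, cols):
        return Fraction(0)
    count = Fraction(1)
    for i in range(r):
        count *= Fraction((q**rows - q**i) * (q**cols - q**i), q**r - q**i)
    return count / Fraction(q) ** (rows * cols)

def binom_pmf(n, N, g1):
    """(a): probability that n of N received packets are L1 packets."""
    return comb(N, n) * g1**n * (1 - g1)**(N - n)

def p_l2(N, K1, K2, g1, q):
    """Eq. (eq:ew_l2_anal): probability of decoding L2 after N packets."""
    total = Fraction(0)
    for n in range(N + 1):
        inner = sum(rank_prob(n, K1, i, q) * rank_prob(N - n, K2 - i, K2 - i, q)
                    for i in range(K1 + 1))  # (b)
        total += binom_pmf(n, N, g1) * inner
    return total

def p_l1(N, K1, K2, g1, q):
    """Eq. (eq:ew_l1_anal): probability of decoding L1 after N packets."""
    total = Fraction(0)
    for n in range(N + 1):
        full_l1 = rank_prob(n, K1, K1, q)  # (b): L1 alone reaches rank K1
        via_l2 = sum(rank_prob(n, K1, i, q) * rank_prob(N - n, K2 - i, K2 - i, q)
                     for i in range(K1))   # (c): combined rank reaches K2, i < K1
        total += binom_pmf(n, N, g1) * (full_l1 + via_l2)
    return total
```

Since every outcome that decodes \ac{L2} also decodes \ac{L1}, $P_{\text{L1}}(N)\geq P_{\text{L2}}(N)$ holds for all $N$, which is a useful sanity check on any implementation.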

A discussion of the decoding probabilities as a function of $\boldsymbol \Gamma$ follows in the next section. For reference, Figure \ref{fig:dejan_scenario} gives the decoding probabilities as calculated with Equations \eqref{eq:ew_l1_anal} and \eqref{eq:ew_l2_anal} for a scenario given in \citep{UEP_RLC_MC}. 

\begin{figure}[h!]
\centering
\includegraphics[width=1\textwidth]{figs/dejan.eps}
\caption{Decoding probabilities calculated with Equations \eqref{eq:ew_l1_anal} and \eqref{eq:ew_l2_anal} for $k_1=15$, $k_2=25$, $q=2^8$ and a sweep of different $\boldsymbol \Gamma$. The scenario is identical to the one given in \citep{UEP_RLC_MC}, and by inspection the results match.}
\label{fig:dejan_scenario}
\end{figure}


\newpage 
\input{analysis_of_solutions/eval_sim_ew.tex}

\subsubsection{Overhead Considerations}
It is not easily seen from Equations \eqref{eq:ew_l1_anal} and \eqref{eq:ew_l2_anal} how the layer probability distribution, $\boldsymbol \Gamma$, should be chosen, or how the field size, $q$, affects the decoding probabilities. An illustrative comparison is given in Figure \ref{fig:ew_analytic_dec} for different $\boldsymbol \Gamma$ and field sizes. From the video source data investigation in Section \ref{analysisofsolutions:videosourcecoding}, the I-frame was found to contribute $\approx 1/3$ of the GOP data. Thus, \ac{L1} contains 32 packets and \ac{L2} 96. The examples in Figure \ref{fig:ew_analytic_dec} range from almost \ac{EEP}, in \ref{fig:ew_analytic_dec_a}, to a more distinct \ac{UEP} behavior in \ref{fig:ew_analytic_dec_c}, with Figure \ref{fig:ew_analytic_dec_b} somewhere in between. For $q=2^8$, despite different $\boldsymbol \Gamma$, the probability of decoding \ac{L1} approaches 1 when $K_2$ packets are received, regardless of the probability at $K_2-1$ packets. As with \ac{RLNC} \ac{EEP}, a larger field increases the decoding probability. As with \ac{NW}, the expected number of packets before the respective layers can be decoded can be approximated by summing the probability of not yet being able to decode. The expected values for $q=2^1$ and $q=2^8$ are given in Table \ref{tab:approx_exp_ew_q2}.



\begin{table}[h] \centering \small
\begin{tabular}{c| c| c| c  }
 & $\approx \text{E}[\text{Pkts}_{\text{L1}}]$ & $\approx \text{E}[\text{Pkts}_{\text{L2}}]$ & $\approx \text{O}[\text{Pkts}_{\text{L2}}]$ \\ \hline
$\boldsymbol \Gamma_1=0.3$, $q=2$ &  96.2  & 98.9  & 2.9  \\ \hline
$\boldsymbol \Gamma_1=0.4$, $q=2$ &  83.2  & 109.7 & 13.7 \\ \hline
$\boldsymbol \Gamma_1=0.5$, $q=2$ &  67.2  & 131.2 & 35.2 \\ \hline
$\boldsymbol \Gamma_1=0.3$, $q=2^8$ & 93.9 & 96.9 & 0.9 \\ \hline
$\boldsymbol \Gamma_1=0.4$, $q=2^8$ & 79.5 & 107  & 11  \\ \hline
$\boldsymbol \Gamma_1=0.5$, $q=2^8$ & 64   & 128  & 32  \\ 
\end{tabular}
\caption{Approximate expected number of packets before the respective layers can be decoded, and the resulting overhead for \ac{L2}.}
\label{tab:approx_exp_ew_q2}
\end{table}
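The expected values above follow from the identity $\text{E}[X]=\sum_{N\geq 0}P(X>N)$, i.e.\ summing the probability of not yet being able to decode. A generic Python sketch (assumed names; \texttt{p\_decode} stands for any decoding-probability function such as Equation \eqref{eq:ew_l2_anal}):

```python
def expected_packets(p_decode, n_max):
    """Approximate E[packets until decoding] = sum_{N=0}^{n_max} (1 - P(N)).

    The truncation point n_max must be chosen large enough that
    p_decode(n_max) is (numerically) 1."""
    return sum(1 - p_decode(N) for N in range(n_max + 1))
```

As a sanity check, an idealized code that decodes exactly when $N=k$ packets have arrived yields an expectation of $k$. Note that the overhead column in Table \ref{tab:approx_exp_ew_q2} equals the expected \ac{L2} count minus the 96 \ac{L2} packets.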


\begin{figure} \centering
\subfloat[\ac{EW} with two layers, $\boldsymbol \Gamma_1=0.3$, $\boldsymbol \Gamma_2=0.7$. \ac{L1} consists of 32 packets and \ac{L2} of 96 packets.]{\label{fig:ew_analytic_dec_a}\includegraphics[width=1\textwidth]{figs/uep_ew_analytic_g1_03_g2_07.eps}}\\
\subfloat[\ac{EW} with two layers, $\boldsymbol \Gamma_1=0.4$, $\boldsymbol \Gamma_2=0.6$. \ac{L1} consists of 32 packets and \ac{L2} of 96 packets.]{\label{fig:ew_analytic_dec_b}\includegraphics[width=1\textwidth]{figs/uep_ew_analytic_g1_04_g2_06.eps}}\\
\subfloat[\ac{EW} with two layers, $\boldsymbol \Gamma_1=0.5$, $\boldsymbol \Gamma_2=0.5$. \ac{L1} consists of 32 packets and \ac{L2} of 96 packets.]{\label{fig:ew_analytic_dec_c}\includegraphics[width=1\textwidth]{figs/uep_ew_analytic_g1_05_g2_05.eps}}\\
\caption{Comparison of decoding probabilities for different field sizes, $q$, and layer probability distributions, $\boldsymbol \Gamma_1, \boldsymbol \Gamma_2$, for \ac{EW} with two layers.}
\label{fig:ew_analytic_dec}
\end{figure}


























