\documentclass[11pt,letterpaper]{article}

\newcommand{\mytitle}{CS262 Homework 2}
\newcommand{\myauthor}{Kevin Lewi \\ \small Collaborators: Saket Patkar, Siddhi 
Soman, Valeria Nikolaenko}
\date{February 14, 2012}

\usepackage{hwformat}

\begin{document}

\maketitle

\section*{Problem 1}

a) false; rather, the weights of all outgoing edges must sum to $1$.

b) false, the edge from state $s$ to $t$ denotes the conditional probability of 
going to state $t$ from state $s$, not vice versa.

c) false, there should not be any terms like $P(x_2 \mid x_1)$, since we should 
never be conditioning on the terms $x_i$.

d) true, since the probabilities of the cells in the Viterbi matrix represent 
the maximum over all paths of the probability of taking that path (i.e., the 
most likely path), whereas the Forward algorithm computes the sum of the 
probabilities of reaching a point over all possible paths to that point.

e) true, since both run in time $O(n)$ (for a fixed number of states), and the 
only algorithmic difference is that one takes the maximum probability over all 
incoming transitions at each step, while the other sums them. Either way, the 
same dynamic programming recurrence accounts for all paths.

f) false, since posterior decoding in an HMM will only identify, given a 
position $i$, the state that we are most likely in. This does not necessarily 
correspond to anything to do with the most likely parse.

g) true, there is certain behavior observed in exonic regions but not in 
intronic regions that vanilla HMMs cannot capture, which makes explicit 
duration modeling useful.

h) false, although the Markov property requires that states be memoryless, this 
does not prevent us from constructing the state space so that the current state 
encodes information from several timesteps in the past.

i) false, the Baum-Welch algorithm is only guaranteed to converge to a locally 
optimal solution, not a global one.

\section*{Problem 2}

\subsection*{(a)}

If $a_{kl} = 0$ initially, then note that Baum-Welch estimates $a_{kl}$ for the 
next iteration by counting the expected number of $k \to l$ transitions made 
during the current iteration. However, $a_{kl} = 0$ implies that no such 
transitions can occur, so the expected count is $0$ and $a_{kl}$ remains $0$ 
after the first iteration. Inductively, this argument applies to every 
iteration of training, and so we see that $a_{kl}$ will still have the value 
$0$ after the training phase completes.
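This can be checked numerically. Below is a minimal numpy sketch of one Baum-Welch E-step (the function name and the two-state model are illustrative): with $a_{01} = 0$, the expected transition count $A_{01}$ comes out exactly $0$, so the M-step re-estimate stays $0$.

```python
import numpy as np

def expected_transition_counts(obs, trans, emis, init):
    """One Baum-Welch E-step: A[k, l] = expected number of k -> l transitions."""
    n, K = len(obs), trans.shape[0]
    f = np.zeros((n, K))
    b = np.zeros((n, K))
    f[0] = init * emis[:, obs[0]]
    for i in range(1, n):                       # forward pass
        f[i] = emis[:, obs[i]] * (f[i - 1] @ trans)
    b[-1] = 1.0
    for i in range(n - 2, -1, -1):              # backward pass
        b[i] = trans @ (emis[:, obs[i + 1]] * b[i + 1])
    px = f[-1].sum()                            # P(x)
    A = np.zeros((K, K))
    for i in range(n - 1):
        # A[k, l] += f_k(i) * a_{kl} * e_l(x_{i+1}) * b_l(i+1) / P(x)
        A += np.outer(f[i], emis[:, obs[i + 1]] * b[i + 1]) * trans / px
    return A

# illustrative two-state model in which 0 -> 1 transitions are forbidden
trans = np.array([[1.0, 0.0], [0.3, 0.7]])
emis  = np.array([[0.5, 0.5], [0.9, 0.1]])
init  = np.array([0.5, 0.5])
A = expected_transition_counts([0, 1, 0, 1], trans, emis, init)
```

Every term contributing to $A_{01}$ carries a factor of $a_{01} = 0$, so the count is exactly zero, while the counts over all pairs sum to the number of transitions in the sequence.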

\subsection*{(b)}

Baum-Welch will keep these two states identical.

We first show that the forward and backward probabilities for the two states $k$ 
and $k'$ will be identical. Note that both the forward and backward 
probabilities only depend on $e_k(x_i)$ and $a_{kl}$:
\begin{align*}
	f_k(i) &= e_k(x_i) \sum_{l} f_l(i-1) \cdot a_{lk} \\
	b_k(i) &= \sum_l e_l(x_{i+1}) \cdot a_{kl} \cdot b_l(i+1)
\end{align*}

Furthermore, the parameters $A_{kl}$ and $E_k(b)$ used in Baum-Welch only depend 
on forward, backward, and emission probabilities, which we have already 
established to be identical between the states. Thus, we can conclude that for 
all states $l$, $A_{kl} = A_{k'l}$ and $E_k(b) = E_{k'}(b)$. This then implies 
that in the next iteration, $a_{kl} = a_{k' l}$ and $e_k(b) = e_{k'}(b)$. 
Therefore, this property has remained invariant across an iteration of 
Baum-Welch, and so these values will always remain identical.

However, Viterbi training will not necessarily keep these two states identical. 
To see this, note that the idea behind Viterbi training is to determine a parse 
of maximal likelihood and update the parameter values based on this parse. 
Thus, it is possible that there could be two parses of maximal likelihood 
during some iteration of Viterbi training, where $k$ lies on one path and $k'$ 
lies on the other. In this case, Viterbi must break the tie and count only one 
of the two parses, so the updates to $k$ and $k'$ will differ, and the 
conditions surrounding $k$ and $k'$ will no longer be identical.

\subsection*{(c)}

\subsubsection*{(i)}

We will be analyzing Viterbi here.

\subsubsection*{(ii)}

The HMM that we will be working under here is the following: suppose we have two 
dice, one fair and one loaded. The fair die has $6$ sides, each occurring with 
equal probability. The loaded die also has $6$ sides, but it is $7$ times more 
likely to output a $1$ than any other number. Thus, its probability of 
outputting a $1$ is $7/12$, and any other number is outputted with probability 
$1/12$.

The probability of transitioning from the Fair to the Loaded die is $0.1$, and 
the probability of transitioning from the Loaded to the Fair die is $0.05$. The 
following HMM depicts the transition probabilities.

\begin{center}
	\includegraphics[scale=0.75]{2c.pdf}
\end{center}

\subsubsection*{(iii)}

We use MATLAB here, where we enter the following:
\begin{align*}
	\text{TRANS} &= [.9, .1; .05, .95]; \\
	\text{EMIS} &= [1/6, 1/6, 1/6, 1/6, 1/6, 1/6; 7/12, 1/12, 1/12, 1/12, 1/12, 
	1/12]; \\
	[\text{seq},\text{states}] &= 
	\text{hmmgenerate}(1000,\text{TRANS},\text{EMIS});
\end{align*}

Note that TRANS and EMIS correspond to the values represented by the HMM that we 
are modeling. Given the last line, the MATLAB function hmmgenerate yields 
training sequences under the given HMM model. For example, one output generated 
by a call to hmmgenerate for sequences of length $20$ is as follows:

\[ \text{seq} = 2\ 1\ 5\ 2\ 6\ 1\ 5\ 5\ 2\ 3\ 1\ 2\ 4\ 1\ 4\ 5\ 1\ 1\ 1\ 3 \]
\[ \text{states} = 1\ 2\ 2\ 2\ 2\ 2\ 1\ 1\ 1\ 1\ 2\ 2\ 2\ 2\ 2\ 2\ 2\ 2\ 2\ 2 \]
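For reference, an equivalent sampler is easy to write outside MATLAB. The following numpy sketch assumes (as hmmgenerate does, to my understanding) that the chain starts in the first state and transitions before each emission; the function name is our own:

```python
import numpy as np

def hmm_generate(n, trans, emis, rng):
    """Sample (seq, states): start in state 1 (index 0), transition, then emit."""
    seq, states = [], []
    state = 0
    for _ in range(n):
        state = rng.choice(len(trans), p=trans[state])
        states.append(state + 1)                                  # 1-based states
        seq.append(rng.choice(emis.shape[1], p=emis[state]) + 1)  # die faces 1..6
    return seq, states

trans = np.array([[0.9, 0.1], [0.05, 0.95]])
emis  = np.array([[1/6] * 6, [7/12] + [1/12] * 5])
seq, states = hmm_generate(1000, trans, emis, np.random.default_rng(0))
```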

\subsubsection*{(iv)}

Now, given the following initial parameters:
\begin{align*}
	\text{TRANS-GOOD} &= [0.88, 0.12; 0.08, 0.92]; \\
	\text{EMIS-GOOD} &= [0.16, 0.17, 0.17, 0.17, 0.16, 0.17; 0.6, 0.08, 0.08, 
	0.08, 0.08, 0.08];
\end{align*}

We use the built-in MATLAB function hmmtrain to see if there is convergence 
under those initial parameters:

\[ [\text{TRANS2}, \text{EMIS2}] = \text{hmmtrain}(\text{seq}, 
\text{TRANS-GOOD}, \text{EMIS-GOOD}) \]

And it turns out that after 500 iterations, we get:

\begin{align*}
	\text{TRANS2} &= [0.8831, 0.1169; 0.0426, 0.9574] \\
	\text{EMIS2} &= [0.1179, 0.1509, 0.1790, 0.2087, 0.1861, 0.1575; 0.5796, 
	0.0975, 0.0829, 0.0814, 0.0733, 0.0854]
\end{align*}

which is very close to the real HMM model.

\subsubsection*{(v)}

However, the following initial parameters yield a different output that does 
not appear to converge to the true model, even after 500 iterations.

Initial Parameters:
\begin{align*}
	\text{TRANS-BAD} &= [0.05, 0.85; 0.9, 0.1]; \\
	\text{EMIS-BAD} &= [0.6, 0.08, 0.08, 0.08, 0.08, 0.08; 0.17, 0.16, 0.17, 0.16, 
	0.17, 0.17];
\end{align*}

MATLAB function to run simulation:
\[ [\text{TRANS3}, \text{EMIS3}] = \text{hmmtrain}(\text{seq}, \text{TRANS-BAD}, 
\text{EMIS-BAD}) \]

Output after 500 iterations:

\begin{align*}
	\text{TRANS3} &= [0.0993, 0.9007; 0.9073, 0.0927] \\
	\text{EMIS3} &= [0.4978, 0.0798, 0.1491, 0.1064, 0.1072, 0.0597;	0.4099, 
	0.1444, 0.0686, 0.1257, 0.1008, 0.1506]
\end{align*}

Note that these probabilities do not represent anything close to the original 
HMM model, and so we see that our initial conditions are not converging in this 
case.

\section*{Problem 3}

\subsection*{(a)}

\newcommand{\argmax}{\text{argmax}}

The main reason why this is the wrong $\pi$ to find is that it does not take 
into account the probabilities of the die switching between being Fair and 
Loaded. Thus, although the quantity $\argmax_{\pi} \pr[x \mid \pi]$ is the parse 
with the highest probability of producing $x$, the probability of this parse 
occurring could be extremely (almost impossibly) low, and such information is 
not captured by the quantity $\argmax_{\pi} \pr[x \mid \pi]$.

Consider the following pathological HMM as an example: the probability of 
switching between states Fair and Loaded is $0.9$ (so, we switch very often). 
Also, the Loaded die is such that it outputs the number $1$ with probability 
$1$, always.

Now, let $x = 111\ldots 1$ be a sequence of length $n$.

Note that $\argmax_{\pi} \pr[x \mid \pi] = LL \cdots L$, since $\pr[x \mid LL 
\cdots L] = 1$ (the Loaded die always outputs $1$). However, this parse is 
extremely unlikely to occur, since it implies that the die never switches 
state. Thus, it is the wrong quantity to look at.

\subsection*{(b)}

The goal here is to find the parse that maximizes the expected payoff. We can 
use dynamic programming to help us find such a parse. As noted in the question 
statement, it is not necessarily our goal to find the most likely parse, as this 
does not imply that it is indeed the parse that maximizes the expected payoff.

Let $f(i,k)$ represent the maximal expected payoff for a parse that is in state 
$k$ after $i$ rolls of the die. Suppose we have precomputed $f(i,k)$ for all 
states $k$. Then, the goal is to compute $f(i+1,k')$ for each state $k'$, given 
such information.

We are interested in computing the expected payoff contribution from position 
$i+1$ at state $k'$, given the assumption that we ended in state $k$ at 
position $i$. Let this quantity be represented as $\ex[Win(\pi_{i+1}^*, k') 
\mid \pi_i = k]$.

Then, we know that we can compute:
\begin{align*}
	\ex[Win(\pi_{i+1}^*, k') \mid \pi_i = k] = \big( & Win(F,k') \cdot 
	\pr[x_{i+1} \wedge \pi_{i+1} = F \mid \pi_i = k] \\
	{} + {} & Win(L,k') \cdot \pr[x_{i+1} \wedge \pi_{i+1} = L \mid \pi_i = k] 
	\big) \big/ \pr[x_{i+1} \mid \pi_i = k]
\end{align*}

The idea behind this equation is that we simply do a case analysis on whether or 
not the true state at position $i+1$ is Fair or Loaded, and the probability of 
reaching such a state given the past state information is multiplied by the 
payoff if we were in that state. This step is just a simple application of 
probability.

This quantity can be computed for every position $i+1$ given position $i$; for 
the very first roll, we can use the fact that we start with a Fair die with 
probability $0.5$, and the quantity can still be computed.

Now, to determine $f(i+1,k')$, we can simply take the maximum over all previous 
states $k$ of the quantity we just computed added to the expected payoff on the 
first $i$ positions. Thus, we have that
\[ f(i+1,k') = \max_{k} ( \ex[Win(\pi_{i+1}^*, k') \mid \pi_i = k] + f(i,k) ). 
\]

Note that $f$ only outputs payoffs, not actual paths. To retrieve the parse, we 
can simply observe at each $i$ which state $k$ achieved the maximum in the above 
equation. This corresponds directly to which state we should guess in the $i$th 
roll. Thus, our final algorithm is as follows:

Compute $f(i,k)$ for each $i$, and to determine the state guessed for the $i$th 
roll, output the state that achieved the maximum value when computing $f(i,k)$.
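A sketch of this DP in numpy follows. The payoff matrix `win` and all names are hypothetical; the recurrence over previous states is implemented as described, with a traceback to recover the guessed states:

```python
import numpy as np

def max_payoff_parse(obs, trans, emis, win):
    """f[i, g]: best expected payoff if we guess state g at roll i.
    win[t, g] is a hypothetical payoff for guessing g when the true state is t."""
    n, K = len(obs), trans.shape[0]
    f = np.full((n, K), -np.inf)
    ptr = np.zeros((n, K), dtype=int)
    start = np.array([0.5, 0.5]) * emis[:, obs[0]]   # Fair first with prob 0.5
    f[0] = win.T @ (start / start.sum())             # expected payoff per guess
    for i in range(1, n):
        for k in range(K):                           # previous state
            joint = trans[k] * emis[:, obs[i]]       # P(x_i, state_i | prev = k)
            cand = f[i - 1, k] + win.T @ (joint / joint.sum())
            improved = cand > f[i]
            ptr[i, improved] = k                     # remember the argmax state
            f[i] = np.maximum(f[i], cand)
    guesses = [int(np.argmax(f[-1]))]                # traceback
    for i in range(n - 1, 0, -1):
        guesses.append(int(ptr[i, guesses[-1]]))
    return float(f[-1].max()), guesses[::-1]

trans = np.array([[0.9, 0.1], [0.05, 0.95]])
emis  = np.array([[1/6] * 6, [7/12] + [1/12] * 5])
win   = np.array([[1.0, -1.0], [-1.0, 1.0]])  # +1 for a right guess, -1 wrong
best, guesses = max_payoff_parse([0, 0, 0, 0, 3, 4], trans, emis, win)
```

With this $\pm 1$ payoff matrix, each roll contributes at most $1$, so the total expected payoff is bounded by the number of rolls.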

\subsection*{(c)}

\subsubsection*{(i)}

We will use $f_k(i,m)$ to denote the forward probability of being in state $k$ 
at the $i$th position of the sequence, conditioned on the assumption that $m$ of 
the first $i-1$ positions are Loaded. Similarly, we will use $b_k(i,m)$ to 
denote the backward probability of being in state $k$ at the $i$th position of 
the sequence, conditioned on the assumption that $m$ of the positions from $i+1$ 
to $1000$ are Loaded.

Then, we will use $\pr[\pi_i = k \mid x, \sum_j 1\{\pi_j^* = L\} = 600]$ to 
denote the quantity that we are interested in calculating. First, note that
\[ \pr[\pi_i = k \mid x, \sum_j 1\{\pi_j^* = L\} = 600] = \sum_{m=0}^i f_k(i,m) 
\cdot b_k(i,600-m) \Big/ \pr[x, \sum_j 1\{\pi_j^* = L\} = 600]. \]

Thus, it remains to show that we can compute $f_k(i,m)$ and $b_k(i,m)$ 
efficiently in order to determine the above quantity.

Note that we can formulate $f_k(i,m)$ and $b_k(i,m)$ recursively, which implies 
that a dynamic programming strategy may be useful in computing their values. We 
have that
\[ f_F(i,m) = e_F(x_i) \cdot a_{LF} \cdot f_L(i-1,m-1) + e_F(x_i) \cdot a_{FF} 
\cdot f_F(i-1,m). \]
To see this, note that if at position $i-1$ we were in state $L$, then the 
probability of being in $F$ at the current position is the probability of 
transitioning from $L$ to $F$ (which explains the $a_{LF}$ term) multiplied by 
the probability of being in state $L$ at position $i-1$ with $m$ decreased by 
$1$. However, if at position $i-1$ we were in state $F$, then it is the 
transition probability of staying in state $F$ multiplied by the probability of 
being in state $F$ at position $i-1$ with $m$ Loaded states still to spread. 
This is all multiplied by the probability of emitting $x_i$ from state $F$, of 
course. Using similar logic, we also get that
\[ f_L(i,m) = e_L(x_i) \cdot a_{LL} \cdot f_L(i-1,m-1) + e_L(x_i) \cdot a_{FL} 
\cdot f_F(i-1,m). \]

Now, for the backwards probabilities, we will use essentially the same idea, but 
this time the terms will be of a slightly different format:
\[ b_L(i,m) = e_L(x_{i+1}) \cdot a_{LL} \cdot b_L(i+1,m-1) + e_F(x_{i+1}) \cdot 
a_{LF} \cdot b_F(i+1,m), \]
and that
\[ b_F(i,m) = e_L(x_{i+1}) \cdot a_{FL} \cdot b_L(i+1,m-1) + e_F(x_{i+1}) \cdot 
a_{FF} \cdot b_F(i+1,m). \]

Thus, since we have shown that the calculation of the probabilities for 
parameters $(i,m)$ relies only on the entries for $(i-1,m-1)$ and $(i-1,m)$ (or 
$(i+1,m-1)$ and $(i+1,m)$ for the backward probabilities), one can fill out a 
dynamic programming matrix in order to determine the values for each cell 
$(i,m)$. The values range over $i \in [1,1000]$ and $m \in [0,600]$, and there 
are $k=2$ states (either Fair or Loaded), so the running time is $O(k^2 \cdot 
|[1,1000]| \cdot |[0,600]|)$. 
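The forward half of this DP can be sketched as follows (a minimal numpy version; note it uses the slightly different convention that $m$ counts Loaded positions among the first $i$ positions, which simplifies the base case). Summing the final column over $m$ recovers the unconstrained forward probability, a useful sanity check:

```python
import numpy as np

def constrained_forward(obs, trans, emis, init, M):
    """f[i, m, k]: forward probability of being in state k at position i with
    m of the first i positions Loaded (state index 1)."""
    n, K = len(obs), trans.shape[0]
    f = np.zeros((n, M + 1, K))
    f[0, 0, 0] = init[0] * emis[0, obs[0]]   # start Fair: 0 Loaded so far
    f[0, 1, 1] = init[1] * emis[1, obs[0]]   # start Loaded: 1 Loaded so far
    for i in range(1, n):
        for m in range(M + 1):
            # arrive in Fair: the Loaded count is unchanged
            f[i, m, 0] = emis[0, obs[i]] * (f[i - 1, m] @ trans[:, 0])
            # arrive in Loaded: position i adds one to the count
            if m >= 1:
                f[i, m, 1] = emis[1, obs[i]] * (f[i - 1, m - 1] @ trans[:, 1])
    return f

trans = np.array([[0.9, 0.1], [0.05, 0.95]])
emis  = np.array([[1/6] * 6, [7/12] + [1/12] * 5])
init  = np.array([0.5, 0.5])
obs = [0, 0, 2, 0, 5]
f = constrained_forward(obs, trans, emis, init, M=len(obs))

# sanity check: summing over m recovers the unconstrained forward probability
plain = init * emis[:, obs[0]]
for o in obs[1:]:
    plain = emis[:, o] * (plain @ trans)
```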

\subsubsection*{(ii)}

Our strategy will be similar to that of the strategy we described in part b, 
except this time we must condition on the new information given to us, which is 
that there are 600 Loaded rolls in the sequence. Thus, in all of our equations 
from part b, we add the extra condition that $\sum_j 1\{\pi_j^* = L\} = 600$. 
Otherwise, the procedure for obtaining the strategy remains exactly the same.

Note that this does not necessarily imply that we guess 600 Loaded rolls. This 
is highly dependent on the scoring matrix --- for example, suppose the scoring 
matrix is such that the penalty for a wrong guess is $0$, and the score obtained 
for guessing Fair is much higher than the score obtained for guessing Loaded. 
Then, even given the information that there are $600$ loaded rolls, we would 
still always guess Fair in order to maximize our expected payoff.

\section*{Problem 4}

\subsection*{(a)}

We can certainly construct a $3$-state pair HMM that corresponds to running 
Needleman-Wunsch with linear gap penalty. In lecture and on the slides, we see 
an example for a $3$-state pair HMM with affine gap penalty. The only 
modification we must make to this pair HMM is to set $\delta = \epsilon$, since 
with linear penalties, we need not be concerned with the length of the gaps, as 
the penalty scales linearly with the length of the gap and thus can be 
calculated memorylessly.

However, upon making this observation, note that we can actually combine the two 
gap states used in the affine model. This is because we only care about a 
boolean value of whether or not a gap exists, and we are not concerned with the 
type of gap (either $(x_i,\eta)$ or $(\eta, y_i)$) since we do not need to store 
any memory about the gap. Therefore, we can simply combine the two gap states 
into one, for a pair HMM consisting of only two states.

But finally, note that we can even combine the matching state $M$ with the gap 
state. This is because, given the current position, it does not matter whether 
we were previously in a gap state or a match state. This does not affect the 
cost of any of the penalties for the current state, and thus there is 
essentially no need for transitions between different states, since no memory 
is stored between alignment positions of the strings.

Therefore, we conclude that it is possible to model Needleman-Wunsch with linear 
gap penalty in only one state.

\subsection*{(b)}

The following pair HMM will represent overlap detection.

\begin{center}
	\includegraphics[scale=0.5]{4b.pdf}
\end{center}

The idea behind this HMM is that there are start and end states able to ``suck 
up'' as many gaps as needed for a prefix and suffix of one of the sequences. No 
gap penalty is assigned to these offsets. Otherwise, the HMM is the same as in 
global alignment, and so the transitions between states $M$, $I$, and $J$ model 
those of Needleman-Wunsch.

\subsection*{(c)}

The following pair HMM will represent regular Needleman-Wunsch with affine gap 
penalty where the gaps can be of length at most $L$.

\begin{center}
	\includegraphics[scale=0.75]{4c.pdf}
\end{center}

We have the $M$, $I$, and $J$ states again, but this time, we have $L$ copies of 
each of the $I$ and $J$ states. Each of these states will represent the current 
length of the gap recorded so far, for either type $I$ or $J$. Thus, we are able 
to wire up the HMM so that it can keep a memory of how long the gap has been so 
far. Once the gap reaches length $L$, it has probability $1$ of traveling back 
to $M$, since it cannot be any longer.

\subsection*{(d)}

The following pair HMM will represent regular Needleman-Wunsch with piecewise 
linear gaps, where the gap function is composed of $s$ linear functions.

\begin{center}
	\includegraphics[scale=0.75]{4d.pdf}
\end{center}

We have the $M$, $I$, and $J$ states again, but this time, we have $s$ copies of 
each of the $I$ and $J$ states. Each of these states will represent each of the 
different linear piecewise functions, and the idea is that one will take the 
maximum penalty across the choices when actually computing the gap penalty for a 
specific alignment. Since there are $s$ states for each of the two types of 
gaps, plus the one $M$ state, there are exactly $2s+1$ states in this HMM.

\end{document}
