\documentclass[11pt,letterpaper]{article}

\newcommand{\mytitle}{CS262 Homework 3}
\newcommand{\myauthor}{Kevin Lewi \\ \small Collaborators: Saket Patkar, Siddhi 
Soman, Valeria Nikolaenko}
\date{February 28, 2012}

\usepackage{hwformat}

\begin{document}

\maketitle


\section*{Problem 1}

\subsection*{(i)}

True, since the advantage of a CRF is that one does not need to compute 
$P(\pi,x)$ in order to obtain $P(\pi \mid x)$.

\subsection*{(ii)}

True, since as the population size increases, more isolated cases of mutations 
occur, which increases the polymorphism rate.

\subsection*{(iii)}

False, since this does not take into account adverse events such as a mass 
extinction of large parts of the population, which could decrease 
heterozygosity.

\subsection*{(iv)}

True, since the population size of wild mice is much larger than the population 
size of humans, which implies a higher heterozygosity.

\subsection*{(v)}

False, since mutations cause the Y chromosomes to not be identical.

\subsection*{(vi)}

False, since next-generation sequencing sacrifices accuracy in order to 
decrease the cost of sequencing. Thus, the error rates should be higher, not 
lower.

\subsection*{(vii)}

False, it is not impossible with clustering and linking, though the procedure is 
certainly made easier if the read sizes are longer than the repeat sizes.

\subsection*{(viii)}

False, these repeated regions are often longer than the length of the sequencing 
reads and hence cause trouble when being aligned.

\subsection*{(ix)}

False, since although $A$ has a higher coverage, the read-sampling process is 
random, and so it is possible that we are unlucky and assembling $A$'s genome 
happens to be more difficult than assembling $B$'s genome. Also, the structure 
of $B$'s genome could be more conducive to assembly than $A$'s genome.

\section*{Problem 2}

\subsection*{(a)}

The statement is incorrect. Note that the right hand side is simply the score, 
which could take on a value greater than $1$; such a function cannot be a valid 
probability distribution. Furthermore, if this statement were true, it would 
follow from the rule of conditional probabilities that $P(x) = Z(x)$ (since 
$P(\pi \mid x) = P(\pi,x) / P(x)$), which we also know not to be true.

\subsection*{(b)}

The key idea is to create a similar definition of the ``forward probability'' 
for HMMs. The goal here is to compute $Z(x)$ using dynamic programming. We first 
rewrite $Z(x)$ as follows:
\begin{align*}
	Z(x) &= \sum_{\pi} \exp \sum_{j=1}^K w_j F_j(x, \pi) \\
	&= \sum_{\pi} \exp \sum_{j=1}^K w_j \sum_{i=1}^L f_j(\pi_i, \pi_{i-1}, i, x) 
	\\
	&= \sum_{\pi} \prod_{j=1}^K \prod_{i=1}^L \exp( w_j \cdot f_j(\pi_i, 
	\pi_{i-1}, i, x))
\end{align*}

Now, let $f_k^*(z)$ denote the analogue of the forward probability used for 
HMMs. The summation ranges over all partial parses $\pi_1, \ldots, \pi_z$ with 
$\pi_z = k$:
\[ f_k^*(z) = \sum_{\pi_1, \cdots, \pi_z : \pi_z = k} \prod_{i=1}^{z} 
\prod_{j=1}^K \exp(w_j \cdot f_j(\pi_i, \pi_{i-1}, i, x)). \]
Now, in this form, it is clearer how to compute $f_k^*(z)$ efficiently with 
dynamic programming. We can phrase $f_k^*(z)$ recursively as follows, using 
$\ell$ as the previous state:
\[ f_k^*(z) = \sum_{\text{states } \ell} f_\ell^*(z-1) \prod_{j=1}^K \exp(w_j 
\cdot f_j(k, \ell, z, x)), \]
with base case $f_k^*(1) = \prod_{j=1}^K \exp(w_j \cdot f_j(k, \pi_0, 1, x))$ 
for all states $k$ (taking $\pi_0$ to be a fixed dummy start state; if the 
features ignore position $1$, this is just $1$). Now, to compute $Z(x)$, we 
simply note that \[ Z(x) = \sum_{\text{states } k} f_k^*(L). \]

Note that the table for $f_k^*(z)$ for all states $k$ and all lengths $z$ has 
dimensions $N \times L$. For each entry, we must take a sum over all states, so 
each entry takes time $O(N)$ to compute. Thus, our algorithm takes $O(N^2 L)$ 
time and $O(NL)$ space.
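As a sanity check, the recursion above can be sketched in a few lines of Python. The feature values and weights below are arbitrary toy numbers (and, for simplicity, the position-$1$ features are taken to vanish, so $f_k^*(1) = 1$); the point is only that the $O(N^2 L)$ forward recursion matches the brute-force sum over all $N^L$ parses:

```python
import itertools
import math
import random

random.seed(0)

N_STATES = 3   # N states
L_LEN = 4      # parse length L
K_FEAT = 2     # K features

# Arbitrary toy feature values f_j(cur, prev, i, x); for brevity they are
# position-independent and x is fixed. Position 1 contributes no features,
# matching the base case f_k^*(1) = 1.
feat = [[[random.uniform(-1, 1) for _ in range(N_STATES)]
         for _ in range(N_STATES)] for _ in range(K_FEAT)]
w = [random.uniform(-1, 1) for _ in range(K_FEAT)]

def edge(cur, prev):
    """exp(sum_j w_j * f_j(cur, prev, i, x)) for one transition."""
    return math.exp(sum(w[j] * feat[j][cur][prev] for j in range(K_FEAT)))

def z_brute():
    """Z(x) as a literal sum of score(pi, x) over all N^L parses."""
    return sum(
        math.prod(edge(pi[i], pi[i - 1]) for i in range(1, L_LEN))
        for pi in itertools.product(range(N_STATES), repeat=L_LEN))

def z_forward():
    """Z(x) via the O(N^2 L) forward recursion."""
    f = [1.0] * N_STATES                      # base case f_k^*(1) = 1
    for _ in range(2, L_LEN + 1):
        f = [sum(f[p] * edge(k, p) for p in range(N_STATES))
             for k in range(N_STATES)]
    return sum(f)

assert abs(z_brute() - z_forward()) < 1e-9
```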

\subsection*{(c)}

If the features depend on the entire parse, as opposed to only the previous 
state, our dynamic programming approach would not work, and computing $Z(x)$ 
could take exponential time, since there would be an exponential number of 
parses to check.

\subsection*{(d)}

\def \score{\operatorname{score}}

This part is just algebra. We start with the left hand side and show that it is 
equal to the right hand side as follows:

\begin{align*}
	\frac{\partial T}{\partial w_j} &= \frac{\partial \log P(\pi \mid x)}{\partial 
	w_j} \\
	&= \frac{\partial}{\partial w_j} \log\left(\frac{\exp(\sum_{k=1}^K w_k 
	F_k(x,\pi))}{Z(x)}\right) \\
	&= \frac{\partial}{\partial w_j} \left(\sum_{k=1}^K w_k F_k(x,\pi) - \log 
	Z(x) \right) \\
	&= F_j(x,\pi) -\frac{\frac{\partial}{\partial 
	w_j} Z(x)}{Z(x)}
\end{align*}
So now, it remains to show that $\frac{\partial}{\partial w_j} Z(x)$ is equal to 
$\sum_{i=1}^L \sum_{\pi'} f_j(\pi_i', \pi_{i-1}',i,x) \cdot \score(\pi',x)$. So, 
we have that
\begin{align*}
	\frac{\partial}{\partial w_j} Z(x) &=	\frac{\partial}{\partial w_j} 
	\sum_{\pi'} \exp \sum_{k=1}^K w_k F_k(x,\pi') \\
	&= \frac{\partial}{\partial w_j} \sum_{\pi'} \exp(w_j F_j(x,\pi')) \cdot 
	\prod_{k \neq j} \exp(w_k F_k(x,\pi')) \\
	&= \sum_{\pi'} F_j(x,\pi') \cdot \exp(w_j F_j(x,\pi')) \cdot \prod_{k \neq 
	j} \exp(w_k F_k(x,\pi')) \\
	&= \sum_{\pi'} F_j(x,\pi') \cdot \prod_{k=1}^K \exp(w_k F_k(x,\pi')) \\
	&= \sum_{\pi'} F_j(x,\pi') \cdot \score(\pi',x) \\
	&= \sum_{i=1}^L \sum_{\pi'} 
	f_j(\pi_i',\pi_{i-1}',i,x) \cdot \score(\pi',x)
\end{align*}
as desired, which completes the proof.
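The identity for $\frac{\partial}{\partial w_j} Z(x)$ can be checked numerically on a toy example. The Python sketch below (arbitrary random features and weights; position-$1$ features taken to vanish) compares the closed form $\sum_{\pi'} F_j(x,\pi') \cdot \score(\pi',x)$ against a finite-difference estimate of the derivative of $Z$:

```python
import itertools
import math
import random

random.seed(1)

N_STATES, L_LEN, K_FEAT = 2, 4, 3

# Arbitrary toy features f_j(cur, prev, i, x); position-independent, x fixed,
# and position 1 contributes nothing.
feat = [[[random.uniform(-1, 1) for _ in range(N_STATES)]
         for _ in range(N_STATES)] for _ in range(K_FEAT)]
w = [random.uniform(-1, 1) for _ in range(K_FEAT)]

def parses():
    return itertools.product(range(N_STATES), repeat=L_LEN)

def score(pi, weights):
    """exp(sum_j w_j F_j(x, pi))."""
    return math.exp(sum(weights[j] * feat[j][pi[i]][pi[i - 1]]
                        for i in range(1, L_LEN) for j in range(K_FEAT)))

def big_f(j, pi):
    """F_j(x, pi) = sum_i f_j(pi_i, pi_{i-1}, i, x)."""
    return sum(feat[j][pi[i]][pi[i - 1]] for i in range(1, L_LEN))

def z(weights):
    return sum(score(pi, weights) for pi in parses())

j = 0
# Closed form: dZ/dw_j = sum_{pi'} F_j(x, pi') * score(pi', x).
closed = sum(big_f(j, pi) * score(pi, w) for pi in parses())

# Central finite difference on Z for comparison.
eps = 1e-6
w_hi, w_lo = list(w), list(w)
w_hi[j] += eps
w_lo[j] -= eps
numeric = (z(w_hi) - z(w_lo)) / (2 * eps)

assert abs(closed - numeric) < 1e-4 * max(1.0, abs(closed))
```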

\subsection*{(e)}

Note that we have already shown in part (b) how to compute $Z(x)$ in $O(N^2 L)$ 
time, so we focus here only on computing the numerator,
\[ \sum_{\pi} f_j(\pi_i, \pi_{i-1}, i, x) \cdot \score(\pi,x) = \sum_{\pi} 
f_j(\pi_i, \pi_{i-1}, i, x) \prod_{k=1}^K \prod_{h=1}^L \exp(w_k \cdot 
f_k(\pi_h, \pi_{h-1}, h, x)). \]

We will compute the outermost summation by partitioning the parse at a 
position $i$, and computing the forward and backward probabilities associated 
with being in a state $k$ at position $i$. Here, we will use the idea of 
creating analogous ``backward probability'' functions $b_k^*(z)$ as were used 
with HMMs. The backward probability can be computed using dynamic programming 
in the same manner as the forward probability. For completeness, we show the 
exact definition here:
\[ b_k^*(z) = \sum_{\pi_z, \cdots, \pi_L : \pi_z = k} \prod_{i=z+1}^{L} 
\prod_{j=1}^K \exp(w_j \cdot f_j(\pi_i, \pi_{i-1}, i, x)). \]
Now, in this form, it is clearer how to compute $b_k^*(z)$ efficiently with 
dynamic programming. We can phrase $b_k^*(z)$ recursively as follows, using 
$\ell$ as the next state:
\[ b_k^*(z) = \sum_{\text{states } \ell} b_\ell^*(z+1) \prod_{j=1}^K \exp(w_j 
\cdot f_j(\ell, k, z+1, x)), \]
with base case $b_k^*(L) = 1$ for all states $k$.

\def \sumkk{\sum_{\text{states } k, k'}}

Now, we can split the outermost summation over parses $\pi$ according to the 
pair of states occupying positions $i-1$ and $i$: we sum over all pairs of 
states $k, k'$, and, within each pair, over all parses $\pi$ such that 
$\pi_{i-1} = k'$ and $\pi_i = k$. This is equivalent to taking the sum over all 
parses $\pi$, since we are just specifying what states are in positions $i-1$ 
and $i$. We will represent the outer summation over state pairs as $\sumkk$.

Let's first define some shortcut notation. Let $\mathcal{A} = \prod_{m=1}^K 
\exp(w_m \cdot f_m(k,k',i,x))$, the contribution of the transition at position 
$i$. Thus, we have that the numerator is equal to:
\[ \sumkk f_j(k, k', i, x) \cdot \mathcal{A} \left( \sum_{\pi_1, \cdots, 
\pi_{i-1} : \pi_{i-1}=k'} \prod_{h=1}^{i-1} \prod_{m=1}^K \exp(w_m \cdot 
f_m(\pi_h,\pi_{h-1},h,x)) \right) \left( \sum_{\pi_i, \cdots, \pi_L : \pi_i = 
k} \prod_{h=i+1}^L \prod_{m=1}^K \exp(w_m \cdot f_m(\pi_h, \pi_{h-1}, h, x)) 
\right). \]

Now, note that the first parenthesized factor is exactly the forward 
probability $f_{k'}^*(i-1)$, and the second is exactly the backward probability 
$b_k^*(i)$. Thus, we can reduce the above expression, which is equal to the 
numerator, to:
\[ \sumkk f_j(k,k',i,x) \cdot \mathcal{A} \cdot f_{k'}^*(i-1) \cdot b_k^*(i). \]

Now, recall that the tables for $f_k^*$ and $b_k^*$ can each be computed in 
$O(N^2 L)$ time. The quantity $\mathcal{A}$ can be computed in time $O(K)$, 
since it just requires iterating over the $K$ features. Finally, the outermost 
summation iterates over all pairs of states, and hence requires $O(N^2)$ terms. 
Thus, treating the number of features $K$ as a constant, the total running time 
is still $O(N^2 L)$, as desired.
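The forward-backward factorization can also be verified on a toy model. The Python sketch below uses arbitrary random features (position-$1$ features again taken to vanish) and checks that, for a fixed feature $j$ and position $i$, the quantity $\sum_{k,k'} f_j(k,k',i,x) \cdot \mathcal{A} \cdot f_{k'}^*(i-1) \cdot b_k^*(i)$ agrees with the brute-force sum over all parses:

```python
import itertools
import math
import random

random.seed(2)

N_STATES, L_LEN, K_FEAT = 2, 4, 2

feat = [[[random.uniform(-1, 1) for _ in range(N_STATES)]
         for _ in range(N_STATES)] for _ in range(K_FEAT)]
w = [random.uniform(-1, 1) for _ in range(K_FEAT)]

def edge(cur, prev):
    """exp(sum_m w_m * f_m(cur, prev, i, x)) for one transition."""
    return math.exp(sum(w[m] * feat[m][cur][prev] for m in range(K_FEAT)))

# Forward table: fwd[z-1][k] = f_k^*(z), covering transitions at 2..z.
fwd = [[1.0] * N_STATES]
for _ in range(2, L_LEN + 1):
    prev_row = fwd[-1]
    fwd.append([sum(prev_row[p] * edge(k, p) for p in range(N_STATES))
                for k in range(N_STATES)])

# Backward table: bwd[z-1][k] = b_k^*(z), covering transitions at z+1..L.
bwd = [None] * L_LEN
bwd[L_LEN - 1] = [1.0] * N_STATES
for z in range(L_LEN - 2, -1, -1):
    nxt = bwd[z + 1]
    bwd[z] = [sum(nxt[n] * edge(n, k) for n in range(N_STATES))
              for k in range(N_STATES)]

j, i = 0, 2  # fixed feature index j and position i (1-indexed, i >= 2)

# Forward-backward value of sum_pi f_j(pi_i, pi_{i-1}, i, x) * score(pi, x).
fb = sum(feat[j][k][kp] * edge(k, kp) * fwd[i - 2][kp] * bwd[i - 1][k]
         for k in range(N_STATES) for kp in range(N_STATES))

# Brute force over all N^L parses.
brute = 0.0
for pi in itertools.product(range(N_STATES), repeat=L_LEN):
    s = 1.0
    for h in range(1, L_LEN):
        s *= edge(pi[h], pi[h - 1])
    brute += feat[j][pi[i - 1]][pi[i - 2]] * s

assert abs(fb - brute) < 1e-9
```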

\section*{Problem 3}

\subsection*{(a)}

\subsubsection*{(i)}

We use the Poisson distribution as an approximation since we are given the 
assumption that the ends of the reads are distributed uniformly. This 
distribution $P(k, \lambda)$ expresses the probability of $k$ events occurring, 
given $\lambda$ being the expected number of occurrences. We want to calculate 
$\lambda$ for this situation. The probability that a particular fixed read 
starts within a fixed interval of length $L$ is simply $L/G$. There are $N$ 
reads, so the expected number of reads that lands within the length-$L$ interval 
is simply $NL/G$, which is equal to $C$.

Thus, the probability that $0$ reads start within a fixed interval of length 
$L$, when the expected number is $C$, is simply $P(0, C) = (C^0 / 0!) \cdot 
e^{-C} = e^{-C}$.

Thus, the probability that a particular base-pair at position $x$, say, is not 
covered by any read is equal to the probability that no reads start within the 
interval $[x-L,x]$, so this is equal to $e^{-C}$. Thus, the probability that the 
base-pair is covered by at least one read is $1-e^{-C}$. There are $G$ 
base-pairs, so the expected number of base-pairs covered by at least one read is 
simply $G(1-e^{-C})$. Thus, the \emph{proportion} is still $1-e^{-C}$.
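The claim can be checked with a small Monte Carlo sketch: drop $N = CG/L$ read start positions uniformly at random and measure the proportion of covered bases. The genome and read lengths below are arbitrary toy values:

```python
import math
import random

random.seed(3)

G = 200_000   # genome length (toy value)
L = 100       # read length
C = 5.0       # target coverage C = N L / G
N = int(C * G / L)

# Drop N read start positions uniformly at random and mark covered bases.
covered = bytearray(G)
for _ in range(N):
    start = random.randrange(G)
    covered[start:start + L] = b"\x01" * min(L, G - start)

observed = sum(covered) / G          # proportion of bases covered
predicted = 1 - math.exp(-C)         # approximately 0.9933 for C = 5
assert abs(observed - predicted) < 0.01
```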

\subsubsection*{(ii)}

Setting $C = -\ln(0.01) \approx 4.6$ is enough so that $1-e^{-C} \approx 0.99$.

\subsubsection*{(iii)}

Setting $C = -\ln(0.001) \approx 6.9$ is enough so that $1-e^{-C} \approx 
0.999$.

\subsection*{(b)}

Following the procedure outlined, we first want to compute the probability that 
read $i+1$ starts at a position greater than $L$ positions away from read $i$. 
This is equivalent to the probability that a fixed length-$L$ interval $I$ has 
no reads that start within $I$, so it is also $e^{-C}$ (as we detailed in part 
a). Now, we can compute the expected number of gaps.

We know that between consecutive reads $i$ and $i+1$, there is an $e^{-C}$ 
chance that they are far enough apart (in other words, that there is a gap 
between these two reads). Thus, we sum, over all $i \in [1,N-1]$, the 
probability that there is a gap between reads $i$ and $i+1$. In expectation, 
this is simply $(N-1) e^{-C}$.

By definition, the expected number of contigs is simply one more than the 
expected number of gaps. Thus, it is $(N-1) e^{-C} + 1$.

Recall that the expected number of base-pairs covered by at least one read is 
$G(1-e^{-C})$, by part (a). Thus, the average contig length is simply this 
quantity divided by the expected number of contigs:
\[ \frac{G(1-e^{-C})}{(N-1) e^{-C} + 1}. \]

\subsection*{(c)}

Here, we do some recalculation of the previous part, although the general 
technique remains the same.

We are now interested in the probability that read $i+1$ starts more than 
$L-K$ bases after read $i$. This is equivalent to the probability that a fixed 
interval of length $L-K$ has no reads starting within it, which is 
$e^{-N(L-K)/G}$ by the same calculation used in part (a). Thus, the expected 
number of gaps is $(N-1) e^{-N(L-K)/G}$.

Therefore, the expected number of contigs is $(N-1) e^{-N(L-K)/G} + 1$ and the 
average contig length is
\[ \frac{G(1-e^{-C})}{(N-1) e^{-N(L-K)/G} + 1}. \]
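The formulas from parts (b) and (c) are easy to tabulate. The sketch below uses hypothetical genome, read, and overlap sizes (they are illustrative, not from the problem); note that setting $K = 0$ in the part (c) formula recovers the part (b) formula, since $N(L-0)/G = C$:

```python
import math

def expected_contigs(N, C):
    """Part (b): expected number of contigs, (N - 1) e^{-C} + 1."""
    return (N - 1) * math.exp(-C) + 1

def avg_contig_length(G, N, C):
    """Part (b): covered bases G(1 - e^{-C}) per expected contig."""
    return G * (1 - math.exp(-C)) / expected_contigs(N, C)

def expected_contigs_overlap(N, L, K, G):
    """Part (c): reads must overlap by at least K bases to be merged."""
    return (N - 1) * math.exp(-N * (L - K) / G) + 1

# Hypothetical human-scale numbers.
G, L, K = 3_000_000_000, 100, 30

lengths = []
for C in (1.0, 4.6, 6.9):
    N = int(C * G / L)
    C_actual = N * L / G
    # Requiring a K-base overlap can only increase the expected contig count.
    assert expected_contigs_overlap(N, L, K, G) >= expected_contigs(N, C_actual)
    # With K = 0 the part (c) formula reduces to the part (b) formula.
    assert abs(expected_contigs_overlap(N, L, 0, G)
               - expected_contigs(N, C_actual)) < 1e-9
    lengths.append(avg_contig_length(G, N, C_actual))

# Deeper coverage yields longer average contigs.
assert lengths == sorted(lengths)
```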

\section*{Problem 4}

\subsection*{(a)}

\def \suff{\text{suffixes}}
\def \pref{\text{prefixes}}

Let $s$ be an arbitrary read. Define $\suff(s)$ to be the set of all suffixes of 
$s$, and $\pref(s)$ to be the set of all prefixes of $s$. For example, if $s = 
abcd$, then
\[ \suff(s) = \{ abcd, bcd, cd, d \} \]
and
\[ \pref(s) = \{ a, ab, abc, abcd \}. \]

We will now only consider suffixes and prefixes of length at least $K$, and 
$\pref(s)$ and $\suff(s)$ will represent these restricted sets, since we are 
only interested in considering the edges with weights at least $K$. Thus, if 
$K=2$, then $\suff(s) = \{ abcd,bcd,cd\}$ and $\pref(s) = \{ab,abc,abcd\}$.

Let $R$ be the set of all of the reads. Construct a list $\mathcal{L}$ that 
contains all of the suffixes and prefixes of every read in $R$. In other words,
\[ \mathcal{L} = \bigcup_{s \in R} \left( \suff(s) \cup \pref(s) \right). \]
Note that the construction of $\mathcal{L}$ takes time $O(NL)$, since each 
read contributes $O(L)$ strings.

Now, we organize $\mathcal{L}$ into a hash table, where each bucket of the 
hash table actually contains two lists, one for suffixes and one for prefixes. 
Thus, by the end of the hashing process, each suffix and prefix of every read 
$r$ will be assigned to some bucket $B$, where $B$ contains a list of all 
suffixes and all prefixes that were hashed to $B$.

Now, to construct the graph $OM(R)$, we will do the following: first, create a 
node $n(r_i)$ for each read $r_i \in R$. Then, for each bucket $B$, let $s_j$ be 
a suffix that was hashed to bucket $B$, and $r_j$ the read originally associated 
with $s_j$. For every prefix $p_k$ that was hashed to $B$ with $p_k = s_j$ 
(hashing only groups candidate matches; equality of the strings must still be 
confirmed), let $r_k$ be the read associated with prefix $p_k$, and create an 
edge from $n(r_j)$ to $n(r_k)$, whose weight will simply be the length of $s_j$ 
(which is the same as the length of $p_k$).

This construction process takes time proportional to the number of edges in the 
multigraph $OM(R)$, since we only perform one operation per construction of an 
edge. Thus, if the graph is sparse, we do not need to perform all $N(N-1)/2$ 
alignments between reads. This process is in fact output-sensitive: if $OM(R)$ 
contains $m$ edges, any algorithm must take $\Omega(m)$ time in order to 
construct the graph.
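A minimal Python sketch of this hashing construction, using small toy reads (here a plain dictionary plays the role of the hash table; note that hashing a length-$t$ string costs $O(t)$, so the $O(NL)$ bound above implicitly assumes constant-time string hashing):

```python
from collections import defaultdict

def overlap_multigraph(reads, K):
    """Construct OM(R): an edge r -> r' of weight t for every suffix of r
    of length t >= K that equals a prefix of r'."""
    # Hash every prefix of length >= K to the reads that own it.
    prefix_buckets = defaultdict(list)
    for r in reads:
        for t in range(K, len(r) + 1):
            prefix_buckets[r[:t]].append(r)
    # Look up each suffix in its bucket; equal strings become edges.
    edges = []
    for r in reads:
        for t in range(K, len(r) + 1):
            for r2 in prefix_buckets.get(r[-t:], ()):
                if r2 != r:
                    edges.append((r, r2, t))
    return edges

edges = overlap_multigraph(["abcd", "cdab", "daba"], 2)
assert ("abcd", "cdab", 2) in edges   # suffix "cd" matches prefix "cd"
assert ("cdab", "daba", 3) in edges   # suffix "dab" matches prefix "dab"
```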


\subsection*{(b)}

Consider the following example:

\begin{center}
	\includegraphics[scale=0.5]{4b.pdf}
\end{center}

The algorithm proposed will see the bottom edge as having the heaviest weight, 
and will then choose the length-1 edge since it is the only edge that exits the 
last node of the current path, daba. Thus, the algorithm returns a path of 
weight $4$. However, the optimal path in this example is of weight $5$, by 
simply taking the edges of weight $2$ and $3$.

The greedy algorithm finds the sequence: cdababcd

The optimal sequence: abcdaba
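The example can be verified programmatically. The sketch below assumes the three reads in the figure are $abcd$, $cdab$, and $daba$ (an assumption, but consistent with the greedy and optimal sequences stated above); it implements the greedy rule described above and compares it against exhaustive search over all read orderings:

```python
from itertools import permutations

# Assumed reads for the figure (consistent with the sequences stated above).
reads = ["abcd", "cdab", "daba"]

def overlap(a, b):
    """Length of the longest suffix of a that is also a prefix of b."""
    for t in range(min(len(a), len(b)), 0, -1):
        if a[-t:] == b[:t]:
            return t
    return 0

def path_weight(path):
    return sum(overlap(a, b) for a, b in zip(path, path[1:]))

def merge(path):
    """Superstring obtained by merging consecutive reads along the path."""
    s = path[0]
    for a, b in zip(path, path[1:]):
        s += b[overlap(a, b):]
    return s

# Greedy: start from the heaviest edge, then repeatedly extend from the
# last node of the current path with the heaviest remaining edge.
greedy = list(max(((a, b) for a in reads for b in reads if a != b),
                  key=lambda p: overlap(*p)))
while len(greedy) < len(reads):
    greedy.append(max((r for r in reads if r not in greedy),
                      key=lambda r: overlap(greedy[-1], r)))

# Optimal: exhaustive search over all orderings of the reads.
best = max(permutations(reads), key=path_weight)

assert path_weight(greedy) == 4 and merge(greedy) == "cdababcd"
assert path_weight(best) == 5 and merge(best) == "abcdaba"
```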

\subsection*{(c)}

\subsubsection*{(i)}

All edges that are not included have weight $0$.

\begin{center}
	\includegraphics[scale=0.5]{4ci.pdf}
\end{center}

\subsubsection*{(ii)}

The sequence, ``We Can All Agree That The CS262 Teaching Assistants Love The 
Students, Right?'', which corresponds to path ABDCE, has weight 15.

The sequence, ``We Can All Agree That The Students Love The CS262 Teaching 
Assistants, Right?'', which corresponds to path ACDBE, has weight 15.

\subsubsection*{(iii)}

\begin{center}
	\includegraphics[scale=0.5]{4ciii.pdf}
\end{center}

The sequence, ``We Can All Agree That The Students Love The CS262 Teaching 
Assistants, Right?'', which corresponds to path ACDBFE, has weight 24.

\end{document}
