\documentclass[11pt,letterpaper]{article}

\newcommand{\mytitle}{CS262 Homework 1}
\newcommand{\myauthor}{Kevin Lewi \\ \small Collaborators: Saket Patkar, Siddhi 
Soman, Valeria Nikolaenko}
\date{January 31, 2011}

\usepackage{hwformat}

\begin{document}

\maketitle

\section*{Problem 1}

\subsection*{Part A}

\subsubsection*{(a)}

AGTAGTTCCCACACT

\subsubsection*{(b)}

AGUGUGGGAACUACU

\subsubsection*{(c)}

Serine, Valine, Glycine, Threonine, Threonine

\subsection*{Part B}

\subsubsection*{(a)}

We would use a linear-space dynamic programming algorithm for this scenario. 
We are interested in ends-free treatment, and so a bounded algorithm would not 
be appropriate in this case.

This is because we are looking for a global alignment with ends-free treatment 
and an affine gap penalty, and we expect the mature mRNA to look very similar 
to its corresponding region of the organism's genome. The reason for the 
affine gap penalty is that we would prefer our algorithm to group gaps close 
together: the introns spliced out of the transcript appear as a few long runs 
of gaps, rather than a scatter of gaps across the sequences. Furthermore, the 
advantages of a linear-space algorithm are helpful in this case because the 
genome is very large and unreasonable to store in quadratic space.

\subsubsection*{(b)}

We would use a bounded dynamic programming algorithm for this scenario, since 
the two sequences are assumed to be nearly identical, so the optimal path 
stays within a narrow band around the diagonal of the matrix.

In this scenario, we would like an algorithm that performs global alignment 
with ends-full treatment and a linear gap penalty. The reason for global 
alignment is that we care about finding all changes between the two sequences, 
and since we are still interested in comparing the ends of the sequences, we 
use ends-full treatment. Finally, we apply a linear gap penalty since the 
insertions and deletions are assumed to be small, alongside single-nucleotide 
changes.


\subsubsection*{(c)}

We would use the BLAST algorithm in this scenario. This is because the BLAST 
database has already preprocessed many short sequences in the Protein Data Bank, 
and so this algorithm would be preferable to a dynamic programming algorithm. 
Also, since we are only interested in matches with a certain degree of accuracy, 
BLAST would fit perfectly in this scenario since it is able to save computation 
time by relaxing accuracy.

\section*{Problem 2}

\subsubsection*{(a)}

It is not too difficult to construct an example for which there are an 
exponential number of optimal global alignments. Consider the case where:

\[ m = 0, s = 0, d = 0 \]

Then, every alignment between any two sequences $x$ and $y$ of length $n$ is 
an optimal global alignment, since every alignment scores $0$. In particular, 
each choice of $n/2$ letters of $y$ to place opposite gaps yields a distinct 
alignment (pair the remaining $n/2$ letters of $y$ with the first $n/2$ 
letters of $x$, and place the rest of $x$ opposite gaps at the end), so the 
number of alignments is at least $\binom{n}{n/2}$. This quantity is already 
exponential in $n$, and certainly a lower bound on the number of optimal 
global alignments.

Thus, we have constructed a specific setting of $m,s,d$ and sequences $x,y$ such 
that there are exponentially many optimal global alignments.

\subsubsection*{(b)}

Given the first dynamic programming matrix, the goal is to compute the number of 
traceback paths (since they are in a bijection with the optimal global 
alignments). To do this, we simply would like to compute the number of traceback 
paths from any given square in the matrix, and the answer that we finally output 
will be the number of traceback paths in the final square of the matrix.

In order to do this in time $O(|x||y|)$, we can use dynamic programming 
(again!). The base case are simply the number of traceback paths from the first 
square to itself (for which the answer is that there is $1$ traceback path), and 
we can proceed inductively as follows. For a square $(i,j)$, we can look at the 
number of traceback paths in squares $(i-1,j)$ and $(i,j-1)$, and add them up to 
get the number of traceback paths for square $(i,j)$. It will take only 
$O(|x||y|)$ time to determine the number of traceback paths for each square in 
the matrix. Finally, the total number of traceback paths, which is the total 
number of optimal alignments, will simply be the number of traceback paths 
recorded in the final square of the matrix.

\subsubsection*{(c)}

For two sequences of length $n$, note that any alignment must use the same 
number of gaps in each sequence, since the two gapped sequences have the same 
total length. So, let $k$ represent the number of gaps in each sequence. Note 
that $k$ can range from $0$ gaps to $n$ gaps.

Then, using $k$ as the fixed number of gaps, we know that the length of each 
sequence (including its gaps) is $n+k$. For the first sequence, we thus have 
$\binom{n+k}{k}$ choices for placements of the $k$ gaps. Then, for the second 
sequence, we must ensure that no column of the alignment contains a gap in 
both sequences. Thus, we cannot re-pick any of the $k$ columns chosen as gaps 
for the first sequence. This leaves $(n+k) - k = n$ columns to choose from, 
$k$ of which will hold the gaps for the second sequence. There are 
$\binom{n}{k}$ ways to do this. Therefore, the number of alignments can be 
expressed as:

\[ \sum_{k=0}^{n} \binom{n+k}{k} \cdot \binom{n}{k} \]
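
This closed form can be sanity-checked against a direct recursion on 
alignments, in which the last column of an alignment is either a letter--letter 
pair, a gap in $x$, or a gap in $y$; the function names are ours:

```python
from functools import lru_cache
from math import comb

def alignments_closed_form(n):
    # The sum derived above, over the number of gaps k per sequence.
    return sum(comb(n + k, k) * comb(n, k) for k in range(n + 1))

@lru_cache(maxsize=None)
def alignments_by_recursion(i, j):
    # The last column of an alignment of prefixes of lengths i and j is
    # either a letter-letter pair, a gap in x, or a gap in y.
    if i == 0 or j == 0:
        return 1
    return (alignments_by_recursion(i - 1, j - 1)
            + alignments_by_recursion(i - 1, j)
            + alignments_by_recursion(i, j - 1))

for n in range(9):
    assert alignments_closed_form(n) == alignments_by_recursion(n, n)
```

The common values $1, 3, 13, 63, 321, \ldots$ are the central Delannoy 
numbers.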

\subsubsection*{(d)}

Define a sequence to be ``palindromic'' if its reverse complement is identical 
to itself. For example, the sequence $AAATTT$ is palindromic since its reverse 
complement is also $AAATTT$.

For each double-stranded DNA molecule $m$, we would like to count how many RNA 
molecules $r$ can be associated with $m$. Note that if $m$ is palindromic, then 
only one distinct RNA molecule can map to the DNA molecule, since both strands 
read identically. However, if $m$ is \emph{not} palindromic, then there are 
actually two RNA molecules that can be mapped to $m$, one for each strand.

Therefore, if $m$ is palindromic, there is $1$ such RNA molecule associated 
with it, and otherwise, there are $2$. When $n$ is even, the number of 
palindromic DNA sequences of length $n$ is $4^{n/2}$: we pick the first $n/2$ 
letters freely, and the last $n/2$ letters are then forced to be the reverse 
complement of the first $n/2$. When $n$ is odd, there are \emph{no} 
palindromic sequences at all, since the middle letter would have to be its own 
complement, which is impossible for any of the four nucleotides.

Writing $P$ for the number of palindromic sequences (so $P = 4^{n/2}$ for even 
$n$ and $P = 0$ for odd $n$), the number of non-palindromic sequences is 
$4^n - P$. Since the non-palindromic sequences count each double-stranded 
molecule twice, the total number of DNA molecules is:

\[ P + \frac{1}{2}\,(4^n - P) = \begin{cases} 4^{n/2} + \frac{1}{2}\left(4^n - 4^{n/2}\right), & n \text{ even}, \\[4pt] \frac{1}{2} \cdot 4^n, & n \text{ odd}. \end{cases} \]
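
For even $n$, the count can be verified by brute force over all strands of 
small lengths; this is a quick sketch with our own helper names:

```python
from itertools import product

COMPLEMENT = str.maketrans("ACGT", "TGCA")

def revcomp(s):
    # Reverse complement of a DNA strand.
    return s.translate(COMPLEMENT)[::-1]

def count_duplexes(n):
    # A double-stranded molecule is the unordered pair {s, revcomp(s)};
    # canonicalize by keeping the lexicographically smaller strand.
    return len({min(s, revcomp(s))
                for s in map("".join, product("ACGT", repeat=n))})

# Check the even-length count: palindromes plus half the rest.
for n in (2, 4, 6):
    pal = 4 ** (n // 2)
    assert count_duplexes(n) == pal + (4 ** n - pal) // 2
```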

\section*{Problem 3}

\subsection*{(a)}

\subsubsection*{(i)}

It is possible to modify the linear-space global alignment to work for local 
alignment.

We first run a preprocessing procedure that identifies the end point and start 
point of the segments of the sequences that participate in the optimal local 
alignment. If $F$ is the local-alignment scoring matrix, we find the entry 
$(i,j)$ that maximizes $F(i,j)$ over the entire matrix; this is the entry at 
which the optimal local alignment ends. Next, we reverse the prefixes 
$x_1 \cdots x_i$ and $y_1 \cdots y_j$ and repeat the same procedure on the 
reversed strings; translating the maximizing entry back into the original 
coordinates yields the entry $(k,l)$ at which the optimal local alignment 
starts. Thus, the entries $(k,l)$ and $(i,j)$ provide a bounding box for the 
location of the optimal local alignment in the matrix. This is all doable in 
linear space, since each pass only needs to keep the previous row of scores 
and a running maximum, rather than every entry of the matrix.

Now, after this preprocessing step, we simply run the linear-space global 
alignment algorithm on the rectangular submatrix defined between entries $(k,l)$ 
and $(i,j)$. This will determine the exact arrangement of the local alignment, 
the desired solution.
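
The preprocessing passes can be sketched as follows, assuming simple 
(non-affine) scores; the parameter values and names are illustrative:

```python
def best_local_endpoint(x, y, m=2, s=-1, d=-1):
    """Linear-space forward pass of the local-alignment DP: return the
    best score and the cell (i, j) where the optimal local alignment
    ends. m, s, d are assumed match/mismatch/gap scores."""
    prev = [0] * (len(y) + 1)
    best, best_cell = 0, (0, 0)
    for i in range(1, len(x) + 1):
        curr = [0] * (len(y) + 1)
        for j in range(1, len(y) + 1):
            curr[j] = max(0,
                          prev[j - 1] + (m if x[i - 1] == y[j - 1] else s),
                          prev[j] + d,
                          curr[j - 1] + d)
            if curr[j] > best:
                best, best_cell = curr[j], (i, j)
        prev = curr
    return best, best_cell

def best_local_startpoint(x, y):
    # Fix the end point (i, j), then run the same pass on the reversed
    # prefixes to locate the start point (k, l) of the local alignment.
    score, (i, j) = best_local_endpoint(x, y)
    _, (ri, rj) = best_local_endpoint(x[:i][::-1], y[:j][::-1])
    return score, (i - ri, j - rj), (i, j)
```

The linear-space global routine is then run on the submatrix delimited by the 
returned start and end cells.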

\subsubsection*{(ii)}

It is also possible to modify the linear-space global alignment algorithm to 
work for affine gap penalties.

Note that in the quadratic-space dynamic programming algorithm for affine gap 
penalties, we need to keep track of four matrices $F$, $G$, $H$, and $V$ (as 
described in lecture). These matrices record the best score depending on 
whether or not a gap is ``open'' or ``not open''. Each of these four matrices 
is also computed in the dynamic programming manner. However, note that 
calculating each entry of each matrix relies only on local information --- in 
other words, we only need the values of neighboring entries to determine the 
value of a specific entry $(i,j)$. Thus, we can run the same recursive 
technique used in linear-space global alignment on all four matrices $F$, $G$, 
$H$, and $V$ simultaneously. This maintains the linear-space constraint and 
still allows us to compute the optimal alignment under affine gap penalties 
correctly.

\subsection*{(b)}

Here is one method for a linear-space alignment algorithm that achieves 
balanced partitioning on an $M \times N$ matrix. We first compute the entry 
with the maximum score function across the set of entries of the form 
$(i,N/2)$ --- call this entry $(i^*, N/2)$. This is done by iterating over all 
of the entries in the $N/2$th column. Next, we compute the entry with the 
maximum score function across the set of entries of the form $(M/2,j)$ --- 
call this entry $(M/2, j^*)$. Again, this is done by iterating over all of the 
entries in the $M/2$th row.

Now, the entries $(i^*,N/2)$ and $(M/2, j^*)$ create a bounding box for the 
possible path of the optimal global alignment. Note that the bounding box cannot 
lie in the lower left or upper right corners of the matrix. Thus, we run this 
algorithm recursively on the sections of the matrix that are in the upper left 
and middle for one subproblem, and the middle and lower right for the other 
subproblem. These two subproblems are of approximately the same size since we 
ensured that the entries $(i^*,N/2)$ and $(M/2, j^*)$ lie on the mid-lines of 
the matrix.

Since the two subproblems do not depend on each other, they can easily be 
solved in parallel, and all of this is done in linear space since we only rely 
on local information around each of the entries.

\section*{Problem 4}

\subsection*{(a)}

\subsubsection*{(i)}

The Smith-Waterman algorithm finds the longest common subsequence between the 
two strings when $m=1$, $s=0$, $d=0$.

\subsubsection*{(ii)}

The Smith-Waterman algorithm finds the longest common substring between the 
two strings when $m=1$, $s=\infty$, $d=\infty$.
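
Both special cases can be checked with a small implementation in which $s$ and 
$d$ are penalties that are subtracted from the score (the test strings are 
ours):

```python
def smith_waterman(x, y, m, s, d):
    """Smith-Waterman local alignment score with match score m,
    mismatch penalty s, and gap penalty d (penalties are subtracted)."""
    F = [[0] * (len(y) + 1) for _ in range(len(x) + 1)]
    best = 0
    for i in range(1, len(x) + 1):
        for j in range(1, len(y) + 1):
            diag = F[i - 1][j - 1] + (m if x[i - 1] == y[j - 1] else -s)
            F[i][j] = max(0, diag, F[i - 1][j] - d, F[i][j - 1] - d)
            best = max(best, F[i][j])
    return best

INF = float("inf")
# m=1, s=0, d=0: the score is the length of the longest common subsequence.
assert smith_waterman("AGCAT", "GACT", 1, 0, 0) == 3
# m=1, s=inf, d=inf: the score is the length of the longest common substring.
assert smith_waterman("ABABC", "BABCA", 1, INF, INF) == 4
```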

\subsection*{(b)}

We can simply run the Needleman-Wunsch algorithm with the settings $m = 1$, 
$s = \infty$, $d = 0$. It is a dynamic programming algorithm that conveniently 
runs in time $O(|x||y|)$. It must be used instead of the Smith-Waterman 
algorithm since we want the optimal \emph{global} alignment, not a local one. 
With these settings, the optimal alignment contains no mismatched pairs (since 
$s = \infty$) and maximizes the number of matched pairs. Then, to actually 
output the shortest supersequence, we output each letter that is paired with a 
gap, and output only one instance of each pair of matched letters, in the 
exact order in which they appear in the alignment.
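
One way to realize this construction is through the longest-common-subsequence 
recurrence, which maximizes matched pairs while forbidding mismatches; the 
function name and example strings are ours:

```python
def shortest_common_supersequence(x, y):
    """Build a shortest common supersequence of x and y from the
    alignment that maximizes matches (mismatches forbidden, gaps free),
    i.e. an alignment along a longest common subsequence."""
    # Standard LCS dynamic program.
    F = [[0] * (len(y) + 1) for _ in range(len(x) + 1)]
    for i in range(1, len(x) + 1):
        for j in range(1, len(y) + 1):
            if x[i - 1] == y[j - 1]:
                F[i][j] = F[i - 1][j - 1] + 1
            else:
                F[i][j] = max(F[i - 1][j], F[i][j - 1])
    # Traceback: emit gapped letters once each, matched pairs once.
    out, i, j = [], len(x), len(y)
    while i and j:
        if x[i - 1] == y[j - 1]:
            out.append(x[i - 1]); i -= 1; j -= 1
        elif F[i - 1][j] >= F[i][j - 1]:
            out.append(x[i - 1]); i -= 1
        else:
            out.append(y[j - 1]); j -= 1
    out.extend(reversed(x[:i]))
    out.extend(reversed(y[:j]))
    return "".join(reversed(out))
```

The output has length $|x| + |y| - \mathrm{LCS}(x,y)$, which is the shortest 
possible.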

\section*{Problem 5}

\subsection*{(a)}

The BWT matrix:
\begin{align*}
	& \$ASSASSIN \\
	& ASSASSIN\$ \\
	& ASSIN\$ASS \\
	& IN\$ASSASS \\
	& N\$ASSASSI \\
	& SASSIN\$AS \\
	& SIN\$ASSAS \\
	& SSASSIN\$A \\
	& SSIN\$ASSA
\end{align*}

Suffix array: \[ 9,1,4,7,8,3,6,2,5 \]

The final BWT transform: \[ N\$SSISSAA \]
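
All three artifacts can be reproduced with a short rotation-sorting sketch 
(function name ours):

```python
def bwt(s):
    """Burrows-Wheeler transform of s via sorted rotations; also
    returns the (1-indexed) suffix array read off the rotation starts."""
    n = len(s)
    order = sorted(range(n), key=lambda i: s[i:] + s[:i])
    suffix_array = [i + 1 for i in order]
    last_column = "".join(s[(i - 1) % n] for i in order)
    return last_column, suffix_array

b, sa = bwt("ASSASSIN$")
assert b == "N$SSISSAA"
assert sa == [9, 1, 4, 7, 8, 3, 6, 2, 5]
```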

\subsection*{(b)}

\subsubsection*{(i)}

The first column of the BWT matrix is always in sorted order, so the first row 
of the BWT matrix that starts with the letter $\alpha$ is determined by where 
$\alpha$ first appears in that column. The rows that appear before this one 
are exactly those that start with a symbol lexicographically smaller than 
$\alpha$, including the row starting with the $\$$ symbol. There are exactly 
$C(\alpha)$ of these --- one for each occurrence in the string of a symbol 
smaller than $\alpha$ (counting $\$$) --- and hence, the first row of the BWT 
matrix that starts with $\alpha$ is the row indexed $C(\alpha)+1$.

\subsubsection*{(ii)}

We proceed by induction on the length of the string $W$. For the base case, 
take $W$ to be the empty string, so that $L(W) = 1$ (the first row) and $U(W)$ 
is the index of the last row of the matrix. Then $F(a,L(W)-1) = F(a,0) = 0$, 
and hence $L(a) = C(a)+1$, which is exactly what was verified in the previous 
part. At the same time, $F(a,U(W))$ is the number of occurrences of $a$ in all 
of $B$, so the last row starting with $a$ is $L(a) + F(a,U(W)) - 1$, which is 
exactly $C(a) + F(a,U(W))$. Thus, we have verified that $U(a) = C(a) + 
F(a,U(W))$. It remains to show the inductive step. Assume that the equations 
hold true for all $W$ of some fixed length $n$.

Rows $L(W)$ through $U(W)$ of the BWT matrix are exactly the rows that start 
with $W$, and for each such row, the symbol of $B$ in that row is the letter 
that precedes that occurrence of $W$ in $X$. The rows starting with $aW$ 
therefore correspond exactly to the occurrences of $a$ in rows $L(W)$ through 
$U(W)$ of $B$. Now, $C(a)+1$ is the first row starting with $a$, and the rows 
starting with $a$ appear in the same relative order as the corresponding 
occurrences of $a$ in $B$. The occurrences of $a$ in $B$ \emph{above} row 
$L(W)$ precede suffixes lexicographically smaller than $W$, and $F(a,L(W)-1)$ 
counts exactly these; they must be skipped over when computing $L(aW)$, which 
is why they are added to $C(a)+1$. Thus, the index $C(a)+1+F(a,L(W)-1)$ is the 
first row starting with $aW$, which is equal to $L(aW)$.

For $U(aW)$, note similarly that $F(a,U(W))$ counts all occurrences of $a$ in 
the first $U(W)$ rows of $B$, i.e., all occurrences of $a$ that precede a 
suffix lexicographically no larger than the suffixes starting with $W$. The 
last of these occurrences corresponds to the last row starting with $aW$. 
Starting from $C(a)+1$, the first row starting with $a$, and advancing past 
these $F(a,U(W))$ occurrences lands on row $C(a)+1+F(a,U(W))-1 = 
C(a)+F(a,U(W))$, which is exactly the value $U(aW)$. This completes the 
inductive step for both $L$ and $U$.

\subsubsection*{(iv)}

We can use a dynamic programming algorithm here to compute the functions 
$L(W)$ and $U(W)$. To do this, if $W = w_1 w_2 \cdots w_n$, we compute 
$L(w_n)$ and $U(w_n)$, then $L(w_{n-1} w_n)$ and $U(w_{n-1} w_n)$, and so on 
(using the recursive equations given in the previous part). This operation 
takes time $O(|W|)$, since it is linear in the length of the string $W$, given 
that $B$, $S$, $C$, and $F$ have been precomputed.

Now, to obtain the indices of the occurrences of $W$, we iterate over the rows 
$L(W)$ through $U(W)$ and use the suffix array to determine the location of 
each in $X$, since the suffix array provides a direct translation between the 
rows of the BWT matrix and the indices of $X$.
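
A sketch of the whole lookup on the Problem 5(a) example, instantiating the 
recurrences for $L$ and $U$ directly; the quadratic-time preprocessing here is 
for clarity only, and all names are ours:

```python
def fm_search(x, W):
    """Backward search for pattern W in text x (x must end in '$').
    Returns the 1-indexed positions of W in x, via the suffix array.
    C(a) = number of symbols in x smaller than a; F(a, i) = number of
    occurrences of a among the first i symbols of the BWT string B."""
    n = len(x)
    order = sorted(range(n), key=lambda i: x[i:] + x[:i])  # sorted rotations
    sa = [i + 1 for i in order]                            # suffix array
    B = "".join(x[(i - 1) % n] for i in order)             # last column

    def C(a):
        return sum(1 for c in x if c < a)

    def F(a, i):
        return B[:i].count(a)

    lo, hi = 1, n              # L and U for the empty string: all rows
    for a in reversed(W):      # extend the match one letter at a time
        lo = C(a) + 1 + F(a, lo - 1)
        hi = C(a) + F(a, hi)
        if lo > hi:
            return []          # W does not occur in x
    return sorted(sa[r - 1] for r in range(lo, hi + 1))
```

For instance, searching for SS in ASSASSIN$\$$ reports the occurrences at 
positions $2$ and $5$.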

\end{document}
