\documentclass[final]{jpaper_micro_2012}

\usepackage[normalem]{ulem}

\begin{document}

\title{Long k-mer: to be or not to be?} 
\author{
  Hongyi Xin \quad Onur Mutlu\\
  Carnegie Mellon University\\
  \authemail{\{hxin,onur\}@cmu.edu}
}
\date{}
\maketitle

\thispagestyle{empty}

\begin{abstract}

% Do not use inserted blank lines (ie \\) until main body of text.
%\paragraph*{Background} 
With the introduction of next-generation sequencing (NGS) technologies, we are
facing an exponential increase in the amount of genomic sequence data.  The
success of all medical and genetic applications of next-generation sequencing
critically depends on the existence of computational techniques that can
process and analyze the enormous amount of sequence data quickly and
accurately.  Unfortunately, the current read mapping algorithms have
difficulties in coping with the massive amounts of data generated by NGS.

%\paragraph*{Results}
We discuss the possibility of using longer k-mers to index the hash table, along
with the corresponding advantages and disadvantages. We also propose two new data
structures, the hybrid hash table and the virtualized hybrid hash table, which
efficiently choose the k-mer length at run time in order to combine the strengths
of both short and long k-mers. Both data structures are generic and are
compatible with all seed-and-extend class read mapping algorithms.

%\paragraph*{Conclusion} 
We simulated the behavior of both data structures and observed up to a 60\%
reduction in computation with no storage overhead.

\end{abstract}

\section{Introduction} \label{sec:introduction}

Massively parallel sequencing, or so-called next-generation sequencing
(NGS), technologies have substantially changed the way biological research is
performed since 2000~\cite{Brenner2000}. With these new DNA sequencing platforms, we
can now investigate human genome diversity between
populations~\cite{1000GP}, find genomic variants that are likely to cause
diseases~\cite{Antonacci2009, Antonacci2010, Bailey2006, Bailey2002, Bailey2008,
Bailey2001}, and investigate the genomes of the great ape species~\cite{Bailey2002a,
Bailey2004a, Marques-Bonet2009, Rozen2003, Scally2012, Ventura2011} and even
ancient hominids~\cite{Green2010, Reich2010} to understand our own evolution.
Despite all the revolutionary power these new sequencing platforms offer, they
also present difficult computational challenges due to 1) the massive amount of
data produced, 2) shorter read lengths, which result in more mapping locations,
and 3) higher sequencing error rates compared to traditional
capillary-based sequencing.

With NGS platforms, such as the popular Illumina platform, billions of raw
short reads are generated at a fast speed. Each short read represents a
contiguous DNA fragment (e.g., 100 base-pairs (bp)) from the sequencing
subject. After the short reads are generated, the first step is to {\it map}
(i.e., align) the reads to a known reference genome. The mapping process is
computationally very expensive since the reference genome is very large (e.g.,
the human genome has 3.2 gigabase-pairs). The software performing the mapping,
called the mapper, has to search (query) a very large reference genome database
to map millions of short reads. Even worse, each short read may contain {\it
edits} (base-pairs different from the reference fragment, including mismatches,
insertions and deletions), which require expensive approximate searching. In
addition, the ubiquitous common repeats and segmental duplications within the
human genome complicate the task since a short read from such a genome segment
corresponds to a large number of mapping locations in the reference genome.

To simplify searching a large database such as the human genome, previous work
has developed several algorithms that fall into one of the two categories: {\it
seed-and-extend} heuristic methods and {\it suffix-array} mapping methods.

The {\it seed-and-extend} heuristic is developed based on the observation that
for a correct mapping, the short query read and its corresponding reference
fragment, which is the piece of the reference genome that the query read should
map to, must share some brief regions (usually 10-100 base-pairs long) of exact
or inexact matches. These shorter shared regions, which indicate high
similarity between the query read and the reference fragment, are called seeds.
By identifying the seeds of a query read, the mapper narrows down the searching
range from the whole genome to only the neighborhood region of each seed. Seeds
are generated by preprocessing the reference genome and storing the locations
of their occurrences in the reference genome in a separate data structure.
During mapping, a seed-and-extend mapper first analyzes the query read to
identify the seeds. Then, the mapper tries to extend the read at each of the
seed locations via dynamic programming algorithms such as the
Smith-Waterman~\cite{sw} or Needleman-Wunsch~\cite{nw} algorithm.

On the other hand, the {\it suffix-array} mapping methods analyze the reference
genome and transform it into a suffix-array data structure,
which mimics a suffix-tree of the reference genome. Each edge of this
suffix-tree is labeled with one of the four base-pair types, and each node
contains all occurrence locations of a suffix. Walking through the tree from
the root to a leaf while concatenating all the base-pairs on the edges along the
path forms a unique suffix of the reference genome. Every leaf node of
the tree stores all mapping locations of this unique suffix in the reference
genome.  Searching for a query read is equivalent to walking through the
reference suffix-tree from the root to a leaf node following the query read's
sequence.  If there exists a path from the root to a leaf such that the
corresponding suffix of the path matches the query read, then all the locations
stored in the leaf node are returned as mapping locations. Suffix-array methods
use the Burrows-Wheeler Transform~\cite{Burrows94ablock-sorting} and the
Ferragina-Manzini index~\cite{Ferragina07compressedrepresentations} to mimic
the suffix-tree traversal process with a much smaller memory footprint.
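As a minimal, hedged sketch of this search process, the following code looks up exact occurrences of a read in an explicit suffix trie (real mappers use the compressed BWT-FM index instead of this $O(n^2)$-memory structure; all names here are ours, not any tool's API):

```python
# Minimal sketch of suffix-tree search using an explicit suffix trie
# (illustration only; production mappers use the BWT-FM index).

def build_suffix_trie(ref):
    """Insert every suffix of `ref`; each node records the start positions
    of the suffixes passing through it (i.e., occurrences of its path)."""
    root = {}
    for start in range(len(ref)):
        node = root
        for base in ref[start:]:
            node = node.setdefault(base, {})
            node.setdefault("$", []).append(start)
    return root

def find_locations(trie, read):
    """Walk the trie along `read`; return all exact occurrence positions."""
    node = trie
    for base in read:
        if base not in node:
            return []              # no path => read does not occur exactly
        node = node[base]
    return sorted(node["$"])

ref = "ACGTACGA"
trie = build_suffix_trie(ref)
print(find_locations(trie, "ACG"))  # ACG occurs at positions [0, 4]
```

Walking the trie stops as soon as a base has no outgoing edge, which is what gives suffix-based methods their strong filtering power.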

Several mappers have been developed over the past few years. These mappers can
be classified into two categories based on their mapping algorithms: 1) hash
table based, seed-and-extend mappers (hash table based mappers) similar to the
popular BLAST~\cite{blast} method, such as mrFAST/mrsFAST~\cite{Alkan2009,
Hach2010}, MAQ~\cite{Li2009a}, SHRiMP~\cite{shrimp}, Hobbes~\cite{hobbes},
drFAST~\cite{Hormozdiari2011} and RazerS~\cite{razers}; and 2) suffix-array and
genome compression based mappers that utilize the Burrows-Wheeler Transform and
the Ferragina-Manzini index (BWT-FM) such as BWA~\cite{bwa},
Bowtie~\cite{Langmead2009}, and SOAP2~\cite{soap2}, which evolved from
MUMmer~\cite{MUMmer}. Both types of read mapping
algorithms have different strengths and weaknesses. To measure the performance
of different mappers, three general metrics are introduced: {\it speed} in
performing the mapping, {\it sensitivity} in mapping reads in the presence of
multiple edits (including mismatches, insertions and deletions) and {\it
comprehensiveness} in searching for all mapping locations across the reference
genome. The hash table based mappers are much slower, albeit more sensitive,
more comprehensive and more robust to sequence errors and genomic diversity
than suffix-array based mappers. For these reasons, hash table based mappers
are typically more suitable when comparing the genomes of different species,
such as mapping reads generated from a gorilla genome to the human reference
genome, or when mapping reads to highly repetitive genomic regions where
structural variants are more likely to occur~\cite{Alkan2011nrgreview,
Schuster2010, mills2011nature1000genomes}. On the contrary, suffix-array based
mappers (with the BWT-FM optimization) offer very high mapping speed (up to
30-fold faster than hash table based mappers), but their mapping sensitivity
and comprehensiveness suffer when the edit distance between the read and the
reference fragment is high or when the diversity of the read increases (e.g.,
when mapping reads from other species). Their fast speed makes the suffix-array
based mappers the first choice in single nucleotide polymorphism (SNP)
discovery studies where sensitivity is less important. In this work, we focus
on increasing the speed of hash table based mappers while preserving their high
sensitivity and comprehensiveness.

The relatively slow speed of hash table based mappers is due to their high
sensitivity and comprehensiveness. Such mappers first index {\it fixed-length
seeds} (also called {\it k-mers}), typically 10-13 base-pair-long DNA fragments
from the reference genome, into a hash table or a similar data structure. Next,
they divide each query read into smaller fixed-length seeds to query the hash
table for their associated {\it seed locations}.  Finally, they try to {\it
extend} the read at each of the seed locations by aligning the read to the
reference fragment at the seed location via dynamic programming algorithms such
as Needleman-Wunsch~\cite{nw} and Smith-Waterman~\cite{sw}, or simple Hamming
distance calculation for greater speed at the cost of missing potential
mappings that contain insertions/deletions (indels).  For simplicity, the rest
of the paper will use the term ``k-mer" to represent the term ``fixed-length
seed". We will also use the terms ``location" and ``seed location"
interchangeably.
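As a concrete illustration of the indexing step, the following sketch builds such a k-mer hash table (a plain dict here; names and layout are illustrative, not any particular mapper's internals):

```python
# Minimal sketch of the indexing stage: store every k-mer's occurrence
# locations in a hash table (k=12 is a typical default, e.g., in
# mrFAST/mrsFAST; a tiny k and reference are used here for clarity).
from collections import defaultdict

def build_kmer_table(ref, k):
    table = defaultdict(list)
    for pos in range(len(ref) - k + 1):
        table[ref[pos:pos + k]].append(pos)   # every occurrence location
    return table

ref = "ACGTACGTACGT"
table = build_kmer_table(ref, k=4)
print(table["ACGT"])  # the seed ACGT occurs at locations [0, 4, 8]
```

Each query k-mer from a read then retrieves its location list in a single lookup, and the extend step verifies the read at each of those locations.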

Using real data generated with the NGS platforms, we observed that most of the
{\it locations} fail to provide correct alignments. This is because the
k-mers that form the hash table's indices are typically very short (e.g.,
12 bp by default for mrFAST/mrsFAST). These short k-mers appear in the
reference genome much more frequently than the undivided, hundreds of
base-pair-long query read. As a result, only a few of the locations of a
k-mer, if any, provide correct alignments. Naively extending (aligning the read
to the reference genome) at {\it all} of the locations of {\it all} k-mers only
introduces unnecessary computation. In this paper, we define the seed locations
that the read cannot align to as ``false" locations. Reducing the large number
of false locations is the key to improving hash table based mappers' speed.

In this paper, we propose two new data structures, hybrid hash table and
virtualized hybrid hash table, that drastically reduce the number of false
locations with moderate or no storage overhead. Our initial simulations show
up to a 60\% reduction in computation on top of the state-of-the-art
algorithms. We also prove that these data structures do not decrease the
sensitivity of the mapper.
%In this paper, we propose a new algorithm, FastHASH, that dramatically improves
%the speed of hash table based algorithms while maintaining their sensitivity and
%comprehensiveness. We introduce two key ideas for this purpose. First, we
%drastically reduce the potential locations considered for the extend step while
%still preserving comprehensiveness. We call this method {\it Cheap K-mer
%Selection}. Second, we quickly eliminate most of the false locations without
%invoking the extend step in the early stages of mapping. This method is called
%{\it Adjacency Filtering}.  We tested FastHASH by incorporating it into the
%mrFAST~\cite{Alkan2009} codebase.  Our initial CPU implementation of FastHASH
%provides up to 19-fold speedup over mrFAST, while still preserving
%comprehensiveness.
%
%In the next section, we describe the basics and the characteristics of Cheap
%K-mer Selection and Adjacency Filtering. In the Mechanisms section, we present
%the mechanism of FastHASH in detail.  In the Results section, we present the
%performance of mrFAST with FastHASH compared to the baseline mrFAST and several
%other read mapping tools. We then present more analysis in the Analysis section
%and draw concludsions in the Conclusion and Discussion section.
%

The ``Background" section covers the history of DNA mappers and related work.
The ``Observation" section analyzes the trade-offs of using a longer k-mer to
index the hash table. In the ``Insight" section, we reason that a hybrid
multi-level hash table is the best solution. We further describe our mechanism
in the ``Mechanism" section, and present results in the ``Results and Analysis"
section. We discuss future improvements in the ``Discussion" section and
finally conclude in the ``Conclusion" section.

\section{Background}\label{sec:bg}

Before hash table based mappers and suffix tree based mappers were introduced,
Smith-Waterman- and Needleman-Wunsch-like algorithms were widely used in DNA
sequence mapping. These algorithms find similar regions between
two long DNA strands, a result also called \emph{the best mapping}. The basic
idea behind these algorithms is that by assigning a similarity score to all
possible mappings between the two DNA strands, we can simply pick the mapping
with the highest score as the best mapping. However, over time,
there are orders of magnitude more DNA strands subject to analysis, with each
DNA strand being orders of magnitude longer. Calculating a similarity score
for all possible mappings is simply computationally infeasible for everyday
applications. To reduce the number of mappings subject to comparison, hash table
based mappers and suffix tree based mappers were later invented.

Even though suffix tree based mappers and hash table based mappers are
drastically different in their algorithms, they share the same core idea,
which distinguishes them from the early Smith-Waterman- and Needleman-Wunsch-like
algorithms. The idea is simple: narrow down the scope of the search by only
considering mappings for which we can easily identify common sub-strands between
the query DNA and the reference DNA.  Instead of evaluating the similarity score
for all possible mappings, both hash table based mappers and suffix tree based
mappers pre-process the reference DNA into special data structures, which later
help identify the potential mapping candidates. Given a cheap and fast
algorithm that is capable of searching for such common sub-strands, the total
amount of computation is drastically reduced. The main difference between the
hash table based mappers and the suffix tree based mappers is the number and the
length of the required common sub-strands. Hash table based mappers typically
require multiple exactly matching short DNA sub-strands (each one being
k base-pairs long, with k commonly preset to 12) called \emph{k-mers}, while
suffix tree based mappers usually require a single long perfect match (usually
more than 50 base-pairs long) called a \emph{MEM} (Maximal Exact Match). Each
class of algorithms pre-processes the reference DNA into a data structure that
is fast for its own search.

The main trade-off between the hash table based mappers and the suffix tree
based mappers, namely speed against sensitivity, is actually determined by the
required common sub-strand length. A longer sub-strand usually appears less
frequently in the reference DNA than a shorter one, which means a longer
sub-strand achieves a higher filtering rate. With a longer mandatory common
sub-strand between the query DNA and reference DNA, suffix tree based mappers
have a higher filtering rate and consequently less comparison computation.
However, also because of the required long exactly matching sub-strand, suffix
tree based mappers lose many potential correct mappings (low sensitivity) where
only a few differences are scattered between the query DNA and the
reference DNA but no single long common sub-strand exists (as the similar
sub-strands are cut apart by the minor differences stemming from the diversity
of species). Hash table based mappers do not suffer from low sensitivity against
such differences; however, since they require multiple very short common
sub-strands, their filtering mechanism is far less effective than that of the
suffix tree based mappers.

There is also another side effect associated with the required common sub-strand
length---the time-space trade-off. Both hash table based mappers and suffix tree
based mappers index the reference DNA into a data structure which maps
patterns of long sub-strands (as keys) to all of the associated potential mapping
locations (as contents). With longer required common sub-strands, which require
indexing the reference DNA with a longer key, there are more unique
patterns demanding more storage, although the longer key achieves a higher
filtering rate.

In general, a shorter required common sub-strand between the query DNA and the
reference DNA achieves higher sensitivity and less storage overhead at the cost
of longer execution time, and vice versa.

This situation has worsened with the NGS platforms, where the query reads
are now less than a hundred base-pairs long and billions of query reads must be
processed. At such a short read length, a query read can map to multiple
locations in the reference DNA with very few differences, and it is very hard to
select the best mapping among all the others without the background knowledge
possessed by the researcher. As a result, a mapper is expected to search for all
possible mappings under a given diversity allowance (e.g., a few differing
base-pairs), instead of only the best one.

To cope with this situation, several efforts have been made. They mainly fall
into three categories: they either 1) increase the sensitivity of suffix tree
based mappers, 2) increase the speed of hash table based mappers, or 3) reduce
the storage of the data structure. BWA~\cite{bwa}, Bowtie~\cite{Langmead2009}
and MUMmer2~\cite{MUMmer} use the Burrows-Wheeler transform to mimic the suffix
tree traversal with a much smaller memory footprint. CUSHAW,
BWA-SW~\cite{Li2010a}, Bowtie2~\cite{bowtie2} and MUMmer3 require a shorter MEM
for higher sensitivity. mrFAST~\cite{Alkan2009} and mrsFAST~\cite{Hach2010}
increase the mapping speed by exploiting hardware cache efficiency. LSH and
PatternHunter~\cite{patternhunter} reduce
the storage cost of a longer hash table key by using only selected base-pairs
from the long k-mers to form a shorter key, while preserving some of the benefit
of the long key. FastHASH and Hobbes~\cite{hobbes} reduce the computation of
hash table based mappers by imposing a cheap second-level filter to further
reduce the comparison computation.

Given that NGS calls for searching for all possible mappings, and given the
exponential increase in complexity of improving sensitivity for a long MEM as
shown by BWA and Bowtie, we choose the direction of further increasing
the speed of hash table based mappers.

\begin{figure*}[h]
  \includegraphics[width=\textwidth]{./figures/MRFAST.pdf}
    \caption{
      The flow chart of hash table based mappers. 1) Divide the input read into smaller
      k-mers. 2) Search each k-mer in the
      hash table previously generated from the reference genome. 3) Probe
      location lists. 4) Retrieve the reference sequence starting at the seed
      location. 5) Align the read against the reference
      sequence. 6) Move to the next location and redo steps 4 and 5.
    }
  \label{fig:hash-mapper}
\end{figure*}

To understand why hash table based mappers are slow and what the main cause of
the slowness is, it is necessary to have an overview of how hash table based
mappers work.
Figure~\ref{fig:hash-mapper} shows the flow chart of a typical hash table based
mapper during the mapping stage. The mapper follows six steps to map a query
read to the reference genome. In step~1, the mapper divides the query read into
smaller k-mers, with each k-mer of equal length as the hash table keys. In
step~2, several of these k-mers are selected as query k-mers. Query k-mers are
then fed to the hash table as inputs. The hash table returns the location lists
for each query k-mer.  The location list stores all the occurrence locations of
the query k-mer in the reference genome. In step~3, the mapper probes the
location lists of all k-mers belonging to the query read. For each location,
the mapper accesses the reference genome and, in step~4, retrieves the
reference fragment from the reference genome at the seed location's
neighborhood. In step~5, the mapper aligns the query read against the reference
fragment using the Hamming distance or more complicated dynamic programming
algorithms such as the edit distance~\cite{levenshtein1966}, Needleman-Wunsch,
or Smith-Waterman, to verify if the number of edits between the query read and
the reference fragment exceeds the user-set edit distance $e$. This step is
also called the ``verification step". One can think of this step as a
complicated fuzzy string matching procedure that tries to match the base-pairs
between the query read and the reference fragment, with some edits permitted.
We will use the term ``alignment" or ``verification" to refer to this step for
the rest of the paper.  Finally in step~6, the mapper processes the next
location in the location list and repeats step~4 and step~5 until all the
locations of the k-mer are processed. This entire process (from step~2 to
step~6) is performed for each k-mer in the query read.
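The six-step loop above can be sketched as follows, with a Hamming-distance check as the verification step (the dict-based `table` and all names are illustrative, not any particular mapper's API; `e` is the user-set edit distance):

```python
# Sketch of the six-step mapping loop with Hamming-distance verification.

def hamming(a, b):
    """Number of mismatching base-pairs between two equal-length strings."""
    return sum(x != y for x, y in zip(a, b))

def map_read(read, ref, table, k, e):
    """Return all reference positions where `read` maps within `e` mismatches."""
    hits = set()
    # Step 1: divide the read into non-overlapping k-mers.
    for i in range(0, len(read) - k + 1, k):
        kmer = read[i:i + k]
        # Steps 2-3: query the hash table and probe the location list.
        for loc in table.get(kmer, []):
            start = loc - i                      # candidate start of the read
            if start < 0 or start + len(read) > len(ref):
                continue
            # Step 4: retrieve the reference fragment at the seed location.
            fragment = ref[start:start + len(read)]
            # Step 5: verify (align) the read against the fragment.
            if hamming(read, fragment) <= e:
                hits.add(start)
            # Step 6: the loop then moves on to the next location.
    return sorted(hits)

ref = "ACGTTTACGATTAC"
table = {}
for p in range(len(ref) - 4 + 1):
    table.setdefault(ref[p:p + 4], []).append(p)
print(map_read("ACGATTAC", ref, table, k=4, e=1))  # [0, 6]
```

Note that with $e = 1$ the mapping at position 0 is found through the second k-mer even though the first k-mer's seed is destroyed by the mismatch, which is exactly why multiple k-mers per read are needed.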

As described above, and also shown in previous work, the main cause of the
slowness is the poor filtering effect of short k-mers. Since the k-mers are too
short, they have too many occurrences in the reference DNA. As a result, the
location list returned in step~2 may contain thousands to tens of thousands of
locations. However, among all these locations, very few provide a valid
mapping for a long query DNA.

FastHASH, our previous work, addresses this problem in two ways. First, we
drastically reduce the potential locations considered for the extend step while
still preserving comprehensiveness. We call this method {\it Cheap K-mer
Selection (CKS)}. Second, we quickly eliminate most of the false locations
without invoking the extend step in the early stages of mapping. This method is
called {\it Adjacency Filtering (AF)}. By identifying and rejecting obviously
non-mapping locations at an early stage, without accessing the memory in step~4
or executing the computationally expensive step~5, FastHASH achieves up to 15x
performance improvement. However, even with Cheap K-mer Selection and Adjacency
Filtering, FastHASH still needs a long time to process millions of reads. The
reason is that even though AF identifies non-mapping locations with minimal
computation, some calculation still remains. With a large number of non-mapping
locations, AF eventually becomes the bottleneck.
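Our illustrative rendition of the AF idea (not FastHASH's actual code): for a candidate location suggested by one k-mer, check whether the read's other non-overlapping k-mers occur at their expected offsets; with at most $e$ edits, at most $e$ of them may be missing. All names are hypothetical.

```python
# Sketch of Adjacency Filtering: location lists are kept sorted, so each
# adjacency check is a binary search rather than a verification pass.
from bisect import bisect_left

def occurs_at(loc_list, pos):
    """Binary-search a sorted location list for an exact position."""
    i = bisect_left(loc_list, pos)
    return i < len(loc_list) and loc_list[i] == pos

def passes_af(table, kmers, k, candidate_start, e):
    """Keep a candidate only if at most `e` of the read's non-overlapping
    k-mers are missing from their expected offsets."""
    misses = 0
    for i, kmer in enumerate(kmers):
        if not occurs_at(table.get(kmer, []), candidate_start + i * k):
            misses += 1
            if misses > e:
                return False   # reject cheaply, without step 4 or step 5
    return True

ref = "ACGTACGT"
table = {}
for p in range(len(ref) - 4 + 1):
    table.setdefault(ref[p:p + 4], []).append(p)
kmers = ["ACGT", "ACGT"]       # the read's non-overlapping 4-mers
print(passes_af(table, kmers, 4, 0, 0))  # True: both k-mers line up
print(passes_af(table, kmers, 4, 1, 0))  # False: rejected early
```

Even this cheap check costs a few list probes per candidate, which is why AF itself becomes the bottleneck when candidates are numerous.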

The key to further increasing the speed of hash table based mappers lies in
further reducing the number of non-mapping locations.

\section{Observation}\label{sec:ob}

In this section, we will describe our main observation: longer k-mers reduce
the location list lengths of the hash table, which reduces the number of
non-mapping locations that are fed to AF.

\begin{figure}[h]
   \includegraphics[width=\linewidth]{./figures/HashtableUneven.pdf}
   \caption{A snapshot of the hash table. Some k-mers have very long location
   lists, while others have shorter ones. For example, AAAAAAAAAAAA
   has over 1 million entries whereas TGAACGTAACAA only has 2.}
\label{fig:HT}
\end{figure}

The main observation from our previous work, FastHASH, is that the hash table is
highly unbalanced. The hash table maps a short fixed-length pattern (k-mer) to
all the occurrences of that pattern in the reference DNA.
Figure~\ref{fig:HT} presents this imbalance. The hash
table follows a power-law degree distribution; that is, most of the patterns or
keys have a very short location list, which also means they appear
infrequently in the reference DNA.  However, there are also a very few patterns
that have very long location lists, corresponding to their frequent occurrence
in the reference DNA. Probing large location lists (step~3 in
Figure~\ref{fig:hash-mapper}) burdens the mapper since it has to verify a large
number of locations; thus, we call patterns with long location lists {\it
expensive k-mers}. On the other hand, patterns with short location lists are
denoted as {\it cheap k-mers}.  Expensive k-mers drastically slow down the
mapper: not only do they have a long location list to probe, but
they are also so frequent in the reference DNA that a random read from the
reference DNA has a very high chance of encountering one.  In fact, even
though there are just a few patterns of such expensive k-mers, they have a great
number of instances in absolute terms in an individual's DNA.  When translated
to computational cost, this means that if we are doing shotgun sequencing, which
performs many random reads on the individual's DNA sample, many of the
resulting query DNAs contain at least one expensive k-mer.

\begin{figure}[h]
  \includegraphics[width=\linewidth]{./figures/SingleFreq.pdf}
    \caption{The frequency analysis of the first human chromosome, obtained by
sweeping through it at a 12 base-pair k-mer granularity. The redder a pixel is,
the more repetitive the k-mer at that location is throughout the chromosome.
	}
  \label{fig:singlefreq}
\end{figure}

\begin{figure}[h]
  \includegraphics[width=\linewidth]{./figures/fraction.pdf}
    \caption{The composition of the hash table, broken down by location list
length, at various indexing k-mer lengths.
	}
  \label{fig:htfraction}
\end{figure}

To show this pictorially, we swept the first chromosome of the human genome. In
Figure~\ref{fig:singlefreq} we present the frequency information of the
chromosome with 12 base-pair k-mers. Each pixel represents the frequency of the
k-mer at that location, with the whole bar representing the entire chromosome.
The redder the pixel, the more frequently the k-mer at that location appears in
the chromosome, and the more computationally expensive mapping a query read
taken from that location becomes; the greener, the opposite. Even though there
are only very few expensive k-mer patterns, the chromosome is nearly completely
red instead of mostly green. This is simply because a few types of
expensive k-mers show up so frequently in the chromosome that everywhere
we encounter k-mers having thousands of locations distributed
throughout the chromosome. Figure~\ref{fig:htfraction} shows the composition of
the hash table at different k-mer lengths. As we can see, with a longer k-mer
length ($>$14), a larger fraction of the k-mers have location lists with fewer
than ten locations, whereas with a short k-mer length ($=$10), most of the
k-mers have hundreds of locations stored in their lists.
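This composition trend is easy to reproduce on synthetic data: as the indexing k-mer grows, far more patterns end up with short location lists. In this sketch a random sequence stands in for a real chromosome, so the absolute numbers are illustrative only.

```python
# Measure how the location-list-length distribution shifts with k.
import random
from collections import Counter

random.seed(0)
ref = "".join(random.choice("ACGT") for _ in range(100_000))

frac_short = {}
for k in (6, 10):
    counts = Counter(ref[i:i + k] for i in range(len(ref) - k + 1))
    short = sum(1 for c in counts.values() if c < 10)
    frac_short[k] = short / len(counts)
    print(f"k={k}: {len(counts)} distinct patterns, "
          f"{100 * frac_short[k]:.1f}% with fewer than 10 locations")
```

With k=6 there are only $4^6 = 4096$ possible patterns sharing 100,000 positions, so almost every pattern is expensive; with k=10 the pattern space dwarfs the sequence length and nearly every location list is short.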

\begin{figure}[h]
  \includegraphics[width=\linewidth]{./figures/MultipleFreq.pdf}
    \caption{The frequency analysis of the first human chromosome, with
multiple sweeps at different k-mer lengths. As the k-mer length grows, the
chromosome gets less repetitive.
	}
	}
  \label{fig:multifreq}
\end{figure}

\begin{figure}[h]
  \includegraphics[width=\linewidth]{./figures/AverageKmerCost.pdf}
    \caption{The average repetitive frequency of all the k-mers at each
location throughout the chromosome, at different k-mer lengths.
	}
	}
  \label{fig:avelength}
\end{figure}

Do longer k-mers alleviate the high frequency cost imposed by expensive
k-mers? The answer is consistent with our assumption: they do.
Figure~\ref{fig:multifreq}
shows the reduction of chromosome frequency with longer k-mers (longer patterns
used to index the hash table). From the figure, we observe that
with longer k-mers (the k-mer length is labeled to the right of each frequency
bar), the chromosome gets greener. This is not only because with a longer
indexing k-mer the frequency of each k-mer decreases (the location list
of each k-mer is shorter), but also because the few expensive k-mers now show
up less frequently in the chromosome, which reduces their chance of emerging
in a shotgun sequencing run. Figure~\ref{fig:avelength} gives the average
location list length (frequency) over all k-mers across the entire chromosome
at different k-mer lengths. With longer k-mers, the average location list
length drops accordingly.

\begin{figure}[h]
  \includegraphics[width=\linewidth]{./figures/HTSizeSweep.pdf}
    \caption{The size and composition of the hash table at various indexing
k-mer lengths.
	}
  \label{fig:htcomposition}
\end{figure}

So far it seems as if a longer indexing k-mer always provides more benefit.
However, a longer indexing k-mer also requires more storage, following the
classic time-storage trade-off: fewer frequent patterns means more distinct
infrequent patterns, which require more storage to bookkeep.
Figure~\ref{fig:htcomposition} shows the number of unique patterns at different
k-mer lengths. The total number of unique patterns nearly quadruples when
increasing the k-mer length from 10 to 13, doubles when increasing from 13 to
15, and increases by around 10\% from there on, saturating at around 25. In
other words, the number of unique patterns saturates when the k-mer reaches 25
base-pairs.

\begin{figure}[h]
  \center
  \includegraphics[width=0.7\linewidth]{./figures/HT2Array.pdf}
    \caption{The internal data structure of the hash table, which consists of
	two levels of arrays.
	}
  \label{fig:ht2array}
\end{figure}

In terms of real implementations, few mappers support a k-mer length beyond 15.
The few mappers that do support a longer index are usually only
available for servers~\cite{ZhangZ2004}. This is because for a hash table based
mapper, the performance of the tool largely depends on the access latency of
the hash table. As a result, designers usually choose to index the hash table
with all permutations of the fixed-length key, following lexicographical order,
from AAAA...AAA to TTTT...TTT.  Accessing the hash table then becomes a
trivial lookup, with the index simply being the query k-mer itself. No extra
calculation is needed. To better visualize the data structure,
Figure~\ref{fig:ht2array}
provides the detailed organization of the hash table. In
Figure~\ref{fig:ht2array}, the
hash table is decomposed into a two-level-array structure: the first-level
array contains all the permutations of the fixed-length k-mer, with each entry
storing a pointer to the second-level array and a location list length; the
second-level array concatenates the location lists of all the k-mers,
again in lexicographical order, into a one-dimensional array. A hash
table lookup simply retrieves the pointer and the length for the query k-mer
from the first-level array and uses this information to read the location list
from the second-level array. For patterns that are absent from the chromosome
(since not all patterns show up in a finite-length reference DNA), the
first-level array simply stores a NULL pointer with a zero location list length.
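The two-level layout can be sketched as follows (a hedged illustration of the organization in Figure~\ref{fig:ht2array}; field names and the `(None, 0)` encoding of absent patterns are ours). The first-level array is indexed directly by the k-mer's rank in lexicographic order, so no hashing is needed:

```python
# Sketch of the two-level-array hash table layout.
CODE = {"A": 0, "C": 1, "G": 2, "T": 3}

def encode(kmer):
    """Map a k-mer to its rank in lexicographic order (the array index)."""
    idx = 0
    for base in kmer:
        idx = idx * 4 + CODE[base]
    return idx

def build_two_level(ref, k):
    buckets = [[] for _ in range(4 ** k)]
    for pos in range(len(ref) - k + 1):
        buckets[encode(ref[pos:pos + k])].append(pos)
    second = []     # all location lists concatenated, in lexicographic order
    first = []      # (offset, length) per pattern; (None, 0) if absent
    for locs in buckets:
        first.append((len(second), len(locs)) if locs else (None, 0))
        second.extend(locs)
    return first, second

def lookup(first, second, kmer):
    off, n = first[encode(kmer)]    # trivial array access, no hashing
    return second[off:off + n] if n else []

ref = "ACGTACGA"
first, second = build_two_level(ref, 3)
print(lookup(first, second, "ACG"))  # positions [0, 4]
```

The offset/length pair in this sketch plays the role of the pointer/length pair in the figure.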

The structure of the hash table answers the question of why few mappers support
a k-mer length longer than 15. With a k-mer length of 15, there are $4^{15}$
unique patterns, which translates into $4^{15}$ elements in the first-level
array. With an 8-byte pointer and an 8-byte location list length per entry,
the total size of the first-level array alone would be 16GB.
Further increasing the k-mer length generates a hash table that is too big to
fit into the main memory of a modern desktop computer. The only way to use a
hash table with a longer k-mer is to store only the k-mers that have a non-NULL
pointer, instead of all permutations. However, this also increases the
latency of accessing the hash table, since now we have to do a non-trivial
search rather than a simple array lookup.
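The 16GB figure follows directly from the entry size, as this back-of-the-envelope check shows:

```python
# First-level-array size: 4^k entries, each holding an 8-byte pointer
# plus an 8-byte location list length (16 bytes per entry).
def first_level_bytes(k, entry_bytes=16):
    return 4 ** k * entry_bytes

for k in (12, 15, 16):
    print(f"k={k}: first level alone needs "
          f"{first_level_bytes(k) / 2**30:.2f} GiB")
```

Each additional base-pair of key length quadruples the first-level array, which is why the jump from 15 to 16 already pushes the table to 64GiB.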

\section{Insight}\label{sec:ins}

In this section, we describe our insight, which is to adaptively choose the k-mer
length at runtime. The basic idea is that instead of using a single k-mer length
to generate the hash table, we can combine the benefits of a long k-mer hash
table and a short k-mer hash table by using a multi-level hash table that
contains both, and only extend a short k-mer into long k-mers when it
becomes too expensive (i.e., its location list is longer than a threshold).

From our observations in section~\ref{sec:ob}, one might conclude that indexing
the hash table with longer k-mers reduces the computation of FastHASH at
the cost of a larger memory footprint. In reality, however, the computation of
FastHASH does not strictly decrease with longer indexing k-mers. In fact, the
benefit of reduced AF computation saturates at a k-mer length of 15.

\begin{figure}[h]
  \includegraphics[width=\linewidth]{./figures/AFSweep.pdf}
    \caption{The number of potential locations sent to AF at various k-mer
lengths and error numbers $e$.
	}
  \label{fig:afsweep}
\end{figure}

Figure~\ref{fig:afsweep} shows the number of AF function calls made by FastHASH at
various k-mer lengths. From the figure, we see that the number of AF calls
strictly decreases as the k-mer length increases from 10 to 15. Beyond 15,
however, the number stays roughly constant, and even spikes at 23 and 26. To
understand why this happens, it is helpful to know in detail how the other
mechanism of FastHASH, namely Cheap K-mer Selection (CKS), works.

Cheap K-mer Selection is based on the q-gram lemma (a q-gram \emph{is} a
k-mer) and the pigeonhole principle~\cite{Rasmussen2006}. The q-gram lemma states that
two aligned sequences $S_{1}$ and $S_{2}$ with an edit distance of $e$
(differences) share at least $t$ q-grams (k-mers), where $t = \max(|S_{1}|,
|S_{2}|) - q + 1 - q \times e$. For non-overlapping k-mers, one error can
destroy only one k-mer, so $e$ errors can destroy at most $e$ k-mers; this is
simply the pigeonhole principle. Thus, by selecting $e + 1$ non-overlapping
k-mers to query the hash table, we guarantee that the union of the
location lists of the $e + 1$ query k-mers includes every mapping
location where the query read and the reference DNA differ by at most $e$
edits.

Given this guarantee that only $e + 1$ k-mers are required, FastHASH divides the
query read into several non-overlapping k-mers, sorts the k-mers by their
location list lengths and picks only the $e + 1$ k-mers with the shortest
location lists. With this location-list-length-aware selection, FastHASH
minimizes the number of potential mapping locations that are passed to AF for
further evaluation.
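The selection step can be sketched in a few lines of Python (a simplified illustration; the hash table is abstracted as a `location_list_len` function, and both names are ours, not FastHASH's):

```python
def cheap_kmer_selection(read, k, e, location_list_len):
    """Sketch of Cheap K-mer Selection: split the read into
    non-overlapping k-mers, sort them by location list length, and
    keep the e + 1 cheapest ones (the pigeonhole guarantee)."""
    # Non-overlapping k-mers with their starting positions in the read.
    kmers = [(read[i:i + k], i)
             for i in range(0, len(read) - k + 1, k)]
    # Cheapest first: fewer locations means less work for AF.
    kmers.sort(key=lambda kmer_pos: location_list_len(kmer_pos[0]))
    return kmers[:e + 1]
```

With a toy frequency table and $e = 1$, the two k-mers with the shortest location lists are kept and the rest are discarded.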

The effectiveness of CKS is proportional to the number of candidates we
select from. For instance, consider two query reads $S_{1}$ and $S_{2}$, with
$S_{1}$ being 180 base-pairs long and $S_{2}$ being 120 base-pairs long, where
$S_{2}$ is the leading 120-base-pair substrand of $S_{1}$---$S_{2} = S_{1}[1:120]$.
When mapping both reads at $e = 3$ with a 12 base-pair k-mer hash table, $S_{1}$
will always have equally cheap or cheaper k-mers to query the hash table. This is
because with 180 base-pairs, we can divide $S_{1}$ into 15 k-mers, whereas we can
divide $S_{2}$ into only 10 k-mers. When selecting the cheapest
k-mers, $S_{1}$ gets a better selection because it has a larger pool to choose
from.

Similarly, longer indexing k-mers reduce the number of candidates
we can choose from. For example, a 180 base-pair read can be divided
into either twelve 15-base-pair k-mers or six 30-base-pair
k-mers. Although 30-base-pair k-mers are on average cheaper than
15-base-pair k-mers, when we can actively select the cheapest k-mers, the
cheapest four of the twelve 15-base-pair k-mers may well be cheaper
than the cheapest four of the six 30-base-pair k-mers. In short, with longer
k-mers, the benefit of cheaper average k-mers may be offset by
less effective Cheap K-mer Selection.

To sum up, simply adopting longer k-mers not only incurs larger storage and
longer hash table access latency, but also decreases the effectiveness of
CKS. However, longer k-mers do reduce location list lengths. Given these
trade-offs, what we really need is a hybrid multi-level hash table, which uses a
short k-mer hash table as the first level for its fast access, low storage
overhead and more effective Cheap K-mer Selection, and falls back to a longer
k-mer hash table in situations where the query k-mers after CKS are still very
expensive and the only way to reduce the number of potential mapping locations
is to increase the k-mer length.

\begin{figure}[h]
  \center
  \includegraphics[width=0.7\linewidth]{./figures/2lvHT.pdf}
    \caption{The data structure of a two level hash table. The first level maps
	all permutations of the short k-mer to the reference DNA. Whenever a k-mer
	in the first level hash table becomes expensive, it is extended into many
	long k-mers, and the second level hash table stores all of the extended
	long k-mers with their locations. To save storage, the second level
	hash table does not store k-mers with zero location list length.
	}
  \label{fig:2lvht}
\end{figure}

In this paper, we use a two level hash table data structure, with the second
level k-mers twice as long as the first level k-mers. The framework of the
two level hash table is shown in
Figure~\ref{fig:2lvht}. The first level hash table is a full-size hash table. The
second level hash table is a partial hash table that stores only the extended
long k-mers of those first level short k-mers whose location
list length exceeds a preset threshold $L$. The idea behind this organization is
that for k-mers that are already cheap in the first level hash table (location
list length at most $L$), we do not want to
extend them into longer k-mers, as this would provide little reduction of their
already short location lists. In the example in the figure, where $L$ is preset
to 32, the k-mer "AAAAAAAAAAAA" has a location list length greater than 32 and is
therefore extended to "AAAAAAAAAAAAAAAAAAAAAAAA", "AAAAAAAAAAAAAAAAAAAAAAAG", etc.
On the other hand, the k-mer "AAAAAAAAAACA" is not expensive and is not extended at
all. The first level hash table stores all permutations of the patterns for fast
access, whereas the second level hash table stores only the patterns with a
non-zero location list for storage efficiency.
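As a concrete illustration, this construction can be sketched as follows (a simplified Python sketch of ours; dictionaries stand in for the array layouts of Figure~\ref{fig:2lvht}):

```python
from collections import defaultdict

def build_two_level(reference, k, L):
    """Sketch of the hybrid construction: build the full short k-mer
    level, then extend only the expensive short k-mers (location list
    longer than L) into 2k-mers stored in a sparse second level."""
    level1 = defaultdict(list)
    for i in range(len(reference) - k + 1):
        level1[reference[i:i + k]].append(i)

    level2 = defaultdict(list)
    for kmer, locs in level1.items():
        if len(locs) > L:                    # expensive: extend to 2k
            for i in locs:
                long_kmer = reference[i:i + 2 * k]
                if len(long_kmer) == 2 * k:  # skip truncated tails
                    level2[long_kmer].append(i)
    return dict(level1), dict(level2)
```

Cheap short k-mers remain first-level-only entries; only the expensive ones pay the storage cost of a second-level extension.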

\begin{figure}[h]
  \includegraphics[width=\linewidth]{./figures/CompareFreq.pdf}
    \caption{The frequency analysis of the first human chromosome, swept with
two two-level hash table configurations. With two level hash tables, the
k-mer frequencies of the chromosome are reduced to a level similar to a single
level 24 base-pair hash table.
	}
  \label{fig:comparefreq}
\end{figure}

\begin{figure}[h]
  \includegraphics[width=\linewidth]{./figures/AverageKmerCost2Lv.pdf}
    \caption{The average repetitive frequency of all k-mers at each location
throughout the chromosome, generated with two level hash tables. With the two
level hash table, the average repetitive frequency of all k-mers is reduced
to a level similar to a single level 24 base-pair hash table.
	}
  \label{fig:avelength2lv}
\end{figure}

\begin{figure}[h]
  \includegraphics[width=\linewidth]{./figures/HTSizeCompare.pdf}
    \caption{The number of unique patterns throughout the chromosome. More
unique patterns require a larger hash table storage. The two level hash
table reduces the storage overhead of a single level 24 base-pair hash table.
	}
  \label{fig:htsizecomp}
\end{figure}

Figure~\ref{fig:comparefreq} shows the reduced chromosome-level k-mer frequencies
obtained with two-level hash tables. The frequency bar labeled "12\_24\_32" shows
the frequency of all k-mers in the first human chromosome evaluated with a
two level hash table that has a 12 base-pair k-mer first
level and a 24 base-pair k-mer second level. The hash
table extends from the first level to the second level whenever the location
list length of a first level k-mer exceeds a preset threshold of 32. The
other frequency bar, labeled "12\_24\_64", has a similar configuration but
a larger threshold of 64. The graph also shows two chromosome
frequency bars generated with single level hash tables: a single 12
base-pair k-mer hash table and a single 24 base-pair k-mer hash table, labeled
"12" and "24" respectively. When sweeping the chromosome with the two level
hash table, at each location we check whether the short k-mer at that location is a
cheap k-mer. If it is, we mark the pixel at that location with the short k-mer's
location list length. Otherwise, we check the long k-mer at the location and
mark the pixel with the long k-mer's location list length. This mimics the
effect of extending a short k-mer only when it has a long location list. As
the figure shows, both configurations produce frequency results similar to
the single level 24 base-pair k-mer hash table, as their color schemes are
nearly identical. Figure~\ref{fig:avelength2lv} shows the average frequency (location
list length) of all k-mers across the entire chromosome. From the figure we can
see that the two level hash table reduces the average k-mer frequency
drastically compared to the single level 12 base-pair k-mer hash table,
although not quite to the level of a single level 24 base-pair k-mer hash
table. However, as
Figure~\ref{fig:htsizecomp} shows, the hybrid hash table also contains fewer unique
patterns than the single level 24 base-pair k-mer hash table, requiring less
storage.

\section{Mechanism}\label{sec:mech}

In this section, we describe our two main mechanisms to implement the hybrid
hash table. The first mechanism is straightforward: it simply adopts the
data structure from Figure~\ref{fig:2lvht} and extends a k-mer when the situation
permits. The second mechanism overcomes the storage overhead of the
second level hash table by virtualizing it with
information from the first level hash table alone.

\subsection{Two level hash table}

With the hash table data structure shown in Figure~\ref{fig:2lvht}, we can
adaptively choose the k-mer length at runtime in order to verify a minimal number of
potential locations. When mapping a query read, we first divide
the read into short k-mers and pick the cheapest ones through CKS as query
k-mers.  If all the query k-mers' location list lengths are smaller than the
preset threshold $L$, meaning the k-mers are all indeed cheap, we
keep the short query k-mers and proceed to AF. However, whenever some of
the short query k-mers turn out to be expensive, with location list lengths
larger than $L$, we try to merge each expensive query k-mer with a surrounding
short k-mer and use the merged longer k-mer to query the second level hash
table. The intuition is that by dividing the query read into short
k-mers first, we maximize the effect of CKS (which benefits from having
more candidates, as described in section~\ref{sec:ins}) and use the second
level hash table only as a fail-safe. In the expectation that CKS seldom fails, we
preserve the speed of the common case, where all the query k-mers after CKS are
cheap, while drastically increasing the speed of the worst case, where
expensive k-mers remain even after CKS.
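The runtime decision for one selected k-mer can be sketched as follows (a simplified sketch of ours; `level1`/`level2` are dicts from k-mers to location lists, mirroring the two level table above):

```python
def adaptive_lookup(read, i, k, L, level1, level2):
    """For the short k-mer starting at read position i: keep it if its
    location list is cheap (length <= L); otherwise merge it with the
    following k-mer and query the second (2k-mer) level instead."""
    short = read[i:i + k]
    locs = level1.get(short, [])
    if len(locs) <= L:
        return locs                          # common case: stay short
    long_kmer = read[i:i + 2 * k]
    if len(long_kmer) == 2 * k and long_kmer in level2:
        return level2[long_kmer]             # fail-safe: extend
    return locs                              # cannot extend: fall back
```

The final fallback mirrors the paper's policy of doing nothing when no extension is possible.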

\begin{figure*}[h]
  \center
  \includegraphics[width=0.8\textwidth]{./figures/EX1.pdf}
    \caption{An example of using the two level hybrid hash table. After CKS, one of
	the selected k-mers' location list lengths still exceeds the preset threshold
	$L=32$, which triggers an extension. After merging with the surrounding k-mer into
	a longer k-mer, the location list length is reduced to 1.
	}
  \label{fig:ex1}
\end{figure*}

Figure~\ref{fig:ex1} gives an example of mapping a read using a two level hybrid
hash table. The first level hash table is indexed with k-mers of length 12 and
the second level hash table with k-mers of length 24. The preset
threshold $L$ is 32. In this example, we map a query read with an error
allowance of $e = 3$; the read is 72 base-pairs long and can be divided
into six short k-mers.  By the pigeonhole principle, we have to select $e + 1 =
4$ query k-mers from the six candidates. After CKS, we
select the four cheapest k-mers based on their location list lengths (lightly
shaded in the figure).  Unfortunately, one of the four cheapest k-mers
has a location list length greater than the preset threshold $L = 32$.
Under such circumstances, we can extend the expensive k-mer by merging it with
an adjacent short k-mer to form a 24 base-pair longer k-mer, and by
querying the second level hash table with this longer k-mer, we reduce the total
number of potential locations.

\begin{figure*}[h]
  \center
  \includegraphics[width=0.8\textwidth]{./figures/EX2.pdf}
    \caption{Another example, where the real two level hybrid hash table cannot help
	but the virtualized hybrid hash table can. In this case the real hybrid hash
	table cannot merge the expensive short k-mer with surrounding short k-mers, since
	the surrounding short k-mers are all preserved as query k-mers. The virtualized
	hybrid hash table, on the other hand, does not require the two short k-mers to
	be merged as contiguous k-mers. Notice that for the virtualized hybrid hash table,
	since it allows a gap in the middle of a merged longer k-mer, the required
	distance between the two short k-mers is a range instead of a fixed number.
	}
  \label{fig:ex2}
\end{figure*}

Unfortunately, there are situations where this mechanism does not work. Take
another example from Figure~\ref{fig:ex2}, where the selected expensive k-mer is
surrounded by two other selected cheap k-mers. In this case, we cannot simply
merge the expensive k-mer with a surrounding k-mer, since the surrounding k-mers are
also selected as query k-mers; merging them would reduce the number of
query k-mers below the required four. Notice that the two unselected
k-mers are even more expensive than the expensive selected k-mer. For simplicity,
we choose to do nothing in such situations and fall back to the original four
k-mers returned by CKS.

\subsection{Virtualized two level hash table}

To cope with this limitation of the two level hash table, we
developed the virtualized two level hash table, which uses a single level hash
table but can mimic a two level or even multi-level hash table. The basic idea
is the following: for a merged long k-mer, its locations must lie in the
intersection of the two short k-mers' location lists (with the appropriate
offset). In other words,
if two short k-mers can be merged into a single longer k-mer, then at every
location where the long k-mer is present in the reference DNA, the two short
k-mers must also be present, one immediately following the other.
As a result, instead of maintaining a second level hash table with longer k-mers, we
can simulate the long k-mer's location list by searching for such joint locations
in the two short k-mers' location lists.
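For two contiguous short k-mers, this virtualization reduces to an offset intersection of their location lists, for example:

```python
def virtual_long_list(locs_first, locs_second, k):
    """Virtualize the merged 2k-mer's location list: a location l is
    kept iff the first short k-mer occurs at l and the second occurs
    exactly one k-mer length later, at l + k."""
    second = set(locs_second)                # O(1) membership tests
    return [l for l in locs_first if l + k in second]
```

No second-level table is stored; the long k-mer's list is computed on demand from the first-level lists.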

With the algorithm described above, we can not only simulate a longer k-mer by
merging two contiguous short k-mers, but also simulate merging two
k-mers that are apart. Take the same example from above: if we
can merge the expensive k-mer with one of the unselected k-mers to form a long
k-mer with a gap (base-pairs that we do not require to match) in the middle,
we overcome the limitation of the real two level hash table. Merging
two separated short k-mers is very similar to merging two contiguous k-mers.
The only difference is that the required joint locations are no longer one
k-mer length apart in the two location lists, but a multiple of the k-mer
length apart, depending on how far apart the two k-mers are.

Back to the same example presented in Figure~\ref{fig:ex2}. In the figure, we merge the
expensive selected k-mer (darkly shaded) with one of the unselected k-mers (blank)
to form a longer gapped k-mer as described above. The unselected k-mer starts
two k-mer lengths away from the selected k-mer's starting location. Thus, in
this case, to virtualize the long gapped k-mer's location list, we search for
locations that are 24 ($2 \times 12$) base-pairs apart from each other, which
yields location 393300.

There is one concern associated with long gapped k-mers. Unlike merging two
contiguous short k-mers into a single continuous long k-mer, where the two short
k-mers must be exactly one k-mer length apart, merging two
non-neighboring short k-mers into a gapped long k-mer does not insist on a
contiguous placement of the short k-mers. As a direct result, there can be
insertions and deletions in the gapped area, which may shift the relative position
of the two k-mers slightly. Based on the q-gram lemma, given $e$ total edit
distances, which can be mismatches, insertions or deletions, the two k-mers
are not required to be strictly a multiple of the k-mer length apart. The
required gap distance should instead be a range, drifting up to $e$ base-pairs
from the estimated distance. Back to the example above: when merging the
two k-mers, instead of searching for locations that are exactly 24 base-pairs
apart, we search within the range [24-$e$, 24+$e$], which is [21, 27] in
this case, as $e=3$. With this relaxed criterion, location 520031 is also
considered a valid location for the merged long gapped k-mer.
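With this drift tolerance, the gapped virtualization becomes a range search over the second (sorted) location list rather than an exact offset test (a sketch of ours, with hypothetical names):

```python
import bisect

def virtual_gapped_list(locs_first, locs_second, distance, e):
    """Virtualize a gapped long k-mer with indel tolerance: a location
    of the first k-mer is accepted if the second k-mer occurs anywhere
    in [distance - e, distance + e] downstream of it.
    `locs_second` must be sorted."""
    result = []
    for l in locs_first:
        lo = bisect.bisect_left(locs_second, l + distance - e)
        hi = bisect.bisect_right(locs_second, l + distance + e)
        if lo < hi:              # at least one occurrence in range
            result.append(l)
    return result
```

Each candidate costs two binary searches in the second list, so the relaxed criterion adds little work over the exact-offset check.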

\section{Methodology}\label{sec:meth}

To evaluate the benefit of both the real two level hash table and the virtualized
two level hash table, we count how many potential locations are sent to AF under
each configuration. We have not yet integrated the algorithm into any
mainstream mapper, so we cannot evaluate real-time performance.
However, as AF is the bottleneck of the program, a reduction of AF calculation
directly translates into a performance gain.

We evaluate the algorithms by simulating the mapping of 1 million synthetic 180
base-pair reads to the first human chromosome. The synthetic reads are
generated by randomly extracting 500,000 180 base-pair short reads from the first 20
human chromosomes, with up to 3 random errors (mismatches, insertions and
deletions) added to each.

\section{Results and Analysis}

In this section, we present preliminary results of using the two level hash
table and the virtualized two level hash table.

\begin{figure}[h]
  \includegraphics[width=\linewidth]{./figures/SpeedUp.pdf}
    \caption{The number of potential locations sent to AF with different hash
table configurations.
	}
  \label{fig:speedup}
\end{figure}

The results are shown in Figure~\ref{fig:speedup}. The x-axis is the user-set allowable
error number $e$, while the y-axis is the number of potential locations after CKS
that are passed to AF. Based on the discussion above, neither hash table
configuration should alter the sensitivity of the mapper; we expect to verify this
claim after integrating the data structures into mainstream software. For now, we
assume the sensitivity is unchanged, so the reduction
in the number of potential locations directly translates into performance
improvement. We divide the x-axis into several bins, with each bin
holding the results of a group of hash table configurations under the same
error number. Within each bin, from left to right, we have: a single level 12
base-pair hash table, a two level 12 and 24 base-pair hybrid hash
table with extension threshold $L=64$, a virtualized two level hash table with
extension threshold $L=64$, a two level 12 and 24 base-pair
hybrid hash table with extension threshold $L=32$, a virtualized two level hash
table with extension threshold $L=32$, and finally a single level 24
base-pair hash table.

As Figure~\ref{fig:speedup} shows, both the real hybrid hash table and the virtualized
hybrid hash table reduce the number of potential locations passed to AF compared
to the single level 12 base-pair hash table. The reduction rate of both
hybrid hash tables increases with the allowable error number $e$. This is
expected because with higher $e$, more k-mers are selected as query
k-mers by CKS (by the pigeonhole principle, we have to select $e+1$ query
k-mers). With more selected query k-mers, there is a higher chance of selecting
expensive k-mers (k-mers with a location list longer than the extension
threshold $L$), which triggers the extension mechanism to construct a longer
k-mer.

Both hybrid hash table data structures achieve a higher potential
location reduction rate with a smaller extension threshold $L$. This is because
with a smaller $L$, the extension of a short k-mer is invoked more frequently,
since a k-mer is more likely to be classified as expensive. With the same
extension threshold $L$, the virtualized hybrid hash table has a
better reduction rate. This is because the real hybrid hash table cannot extend
a short k-mer surrounded by other selected query
short k-mers, whereas the virtualized hash table can handle all situations with
gapped long k-mers.

However, on this set of synthetic reads, neither hybrid hash table reduces
potential locations as much as a single level 24 base-pair hash table does. We
suspect this is because the reads are long (180 base-pairs can be divided into
seven long k-mers, which is still a sizable pool that benefits from CKS). We
believe this situation will change with a shorter read set; however, this
hypothesis is yet to be verified.

\begin{figure}[h]
  \includegraphics[width=\linewidth]{./figures/StorageCompare.pdf}
    \caption{The number of unique patterns under each hash table configuration,
which also determines the required storage.
	}
  \label{fig:sizecomp}
\end{figure}

Figure~\ref{fig:sizecomp} provides the storage requirements of the different hash
table data structures. From the figure, we can see one of the main trade-offs
described in section~\ref{sec:ins}: with a lower extension threshold $L$, there
are more unique patterns, which requires more storage. On the other hand, the
virtualized hybrid hash table does not require any extra storage, since it
simulates a two level hash table with a single level 12 base-pair hash table.
As a result, the virtualized hybrid hash table occupies the same space as a
single level 12 base-pair hash table.

\section{Discussion}

For the real two level hybrid hash table, there is a clear time-storage trade-off
across different extension thresholds $L$. For the virtualized hybrid hash
table, however, the effect of the extension threshold $L$ is unclear. After
integrating the data structures into mainstream software, we expect to gain a
clearer insight into it.

\section{Conclusion}

In this paper, we discussed the background of hash table based mappers. We
analyzed the benefits and overheads of using different k-mer lengths to index the
hash table and provided two mechanisms that obtain the benefits of hash
tables with various k-mer lengths while avoiding a large storage
overhead.

We look forward to integrating our mechanisms into a mainstream hash table
based mapper for further analysis.

\bstctlcite{bstctl:etal, bstctl:nodash, bstctl:simpurl}
\bibliographystyle{IEEEtranS}
\bibliography{references}

\end{document}

