\section*{Introduction}

In the past decade, the emergence of new sequencing technologies has triggered a
revolution in the field of genomics. These massively parallel sequencing
technologies, commonly known as high-throughput sequencing (HTS) platforms, can
sequence a mammalian-sized genome in a few days, enabling applications
such as investigating human genome diversity between populations~\cite{1000GP},
finding genomic variants that are likely to cause diseases~\cite{Antonacci2009,
Antonacci2010, Bailey2006, Bailey2002, Bailey2008, Bailey2001}, and characterizing
the genomes of the great ape species~\cite{Bailey2002a, Bailey2004a,
Marques-Bonet2009, Rozen2003, Scally2012, Ventura2011} and even ancient
hominids~\cite{Green2010, Reich2010} to understand human evolution.  Despite
the advantages these new sequencing platforms offer, they
increase the computational burden of sequence mapping, which is one of the main
post-processing steps in reconstructing a genome from the raw output
of a sequencing platform. Specifically, HTS platforms impose three
computational challenges: 1) the vast quantity of sequenced data subjected to
mapping, which increases the workload of mapping; 2) shorter read lengths, which
carry less information per read and increase the difficulty of mapping; and
3) higher sequencing error rates compared to traditional capillary-based
sequencing, which decrease the precision of mapping.

%With HTS platforms, such as the popular Illumina sequencing platform, billions
%of short reads, which are small fragments (e.g., around 100 bases in length) of
%contiguous DNA from the sequencing subject, are captured in a few days. These
%short reads are sent to a computer to reconstruct the subject's genome or to
%search for differences from a reference genome. Other sequencing platforms
%generate data with different properties, but the main computational problems
%are similar. One common practice in genome analysis is read mapping---mapping
%the short reads to a known reference genome of the same species or to a closely
%related species.  This is feasible because genomes of the individuals of the
%same species are usually highly identical. % ``highly identical'' -> genomics lingo 
%As a result, if a DNA fragment matches to a sequence in the reference
%genome, it is very likely that the fragment is located at the same place in the
%subject's own genome. 

%In order to keep up with the throughput of the sequencing machine, we need
%to map each read very quickly. This is challenging for two reasons: 1) the reads
%are very short while the reference genomes are usually very long. 
% Mapping a read to a reference genome is similar to finding a needle in an
% ocean. 
%2) A short read may map to multiple locations with the same properties, due to
%the repetitive nature of most genomes. Among these mappings there is only one
%of them presenting real genomic variants captured by the read.
%Due to the repetitive nature of most genomes, a short read often finds
%multiple mappings throughout the reference while each mapping contains a
%different set of variants.
%It is very difficult to determine with the mappers alone which mapping reflects the
%true origin of the read from the subject's genome. A read perfectly mapped to
%one place in the reference might originate from a completely different location
%in the subject's genome and it might just be mapped inexactly due to small
%factors that cause errors such as sequencer errors or genome diversity. Hence
%it is very desirable to have a mapper that searches for as many potential
%mappings of a read while tolerating as many errors within each mapping as
%possible. Designing such a comprehensive mapper is the challenge we target in
%this work.

%Since the reference sequence is long, it is prohibitive to examine all locations
%across the entire reference genome for every read. Rather, mappers usually
%index the reference into a lookup table which stores multiple common sequence
%patterns in the reference, called ``seeds", as well as their locations of
%appearances. These seeds serve as a shortcut to the reference genome: if a read
%possesses one or more seeds while assuming there is no error in the seed, then
%the mapper can examine the vicinities of these seeds in the reference and only
%search for potential mappings around them, rather than searching within the
%entire genome. This latter searching is also known as verification or local
%alignment, which is a read-long edit-distance calculation between the read and
%the reference. This so-called ``seed-and-extend" heuristic was first introduced
%in BLAST\cite{??} 
%Altschul1990/1 something 
%and it was further extended by subsequent mappers.

%Due to limited system memory and computing capability at the time, when the
%``seed-and-extend" heuristic was first proposed, only a few short seeds
%(10$\sim$12 bases) in the reference were indexed and the lookup
%process was a simple hash lookup\cite{??}. Later with faster computer systems,
%more advanced mappers were introduced which indexed more seeds, used longer
%seeds and employed more complex data structures.

%Today, 
%with faster processor and more system memory, 
%many popular mappers are
%able to index the entire reference genome in to seeds with different lengths. 
We categorize the current state-of-the-art mappers into two classes:
\emph{suffix-array based mappers} and \emph{hash-based mappers}.
%Hash-based mappers typically use a hash table data structure to store short
%seeds ($\sim$12-14 bases), which enables fast seed lookup. 
Suffix-array based mappers (built on the Burrows-Wheeler Transform and the
FM-index~\cite{Burrows94ablock-sorting,Ferragina07compressedrepresentations})
provide superior mapping speed, but they typically slow down drastically as the
number of errors to tolerate increases. Hash-based mappers, on the other hand,
are very resilient to errors and are capable of finding all possible mappings of a
read within a certain number of errors~\cite{Xin2013}, but they are also very slow
because short seeds tend to appear at many locations in the reference
genome. Not surprisingly, the efficiency and power of hash-based mappers directly
depend on the length of the selected seeds.
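To make the effect of seed length concrete, the following minimal sketch (a toy
reference string and helper names of our own invention, not data or code from this
paper) builds a hash-based seed index and compares the number of candidate
locations a mapper would have to verify for a short seed versus its longer
extension:

```python
from collections import defaultdict

def build_seed_index(reference, seed_len):
    """Map every length-seed_len substring (seed) of the reference
    to the list of positions where it occurs."""
    index = defaultdict(list)
    for pos in range(len(reference) - seed_len + 1):
        index[reference[pos:pos + seed_len]].append(pos)
    return index

# Toy reference with a repetitive region; real genomes are far more repetitive.
reference = "ACGTACGTACGTTTGCA"
short_index = build_seed_index(reference, 4)
long_index = build_seed_index(reference, 8)

# A frequent short seed forces more verifications than its longer extension.
print(len(short_index["ACGT"]))      # candidate locations for the 4-base seed
print(len(long_index["ACGTACGT"]))   # candidate locations for the 8-base seed
```

Each candidate location triggers a costly verification (local alignment), so a
seed that is frequent in the reference directly inflates the mapper's workload.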

%index the reference genome into a suffix-array which uses
%Burrows-Wheeler Transform~\cite{} and Ferragina-Manzini
%index
%to mimic a suffix-tree.
% A seed
%lookup in a suffix-array based mapper involves non-trivial computation based on
%binary search.  
%Luckily, suffix-array based mappers use much longer seeds (can
%be as long as the entire read) which returns only a few locations to verify and
%hence are very fast to search for ``perfect mappings"---mappings that match
%exactly to the reference sequence. Error tolerance is provided through
%backtracing non-mappable suffix of the read with read-base changes (to guess an
%error and try to correct it). However, when configured to search for all
%possible mappings and/or with the potential large quantity of errors,
%suffix-array based mappers suffer from slowing down exponentially, making them
%infeasible when high error-tolerance is required.



%While previous works focused on the differences of data structures and the
%underlying query algorithms of the lookup table of different mappers, we
%demonstrate that a major difference among these mappers that is not examined
%exclusively is the seed length. When seed length is short, the seeds are
%frequent in the reference hence providing more locations to verify which slows
%down the mapper drastically. However, short seeds consume less system memory
%because there are fewer seeds and they provide higher error-tolerance because
%short seeds assumes a shorter error-free region in the read. To the contrary,
%long seeds speed up the mapper but consume more memory and are less error
%tolerant because they are less frequent in the reference genome but are more in
%number and assume a longer error-free region.

In this paper, we provide a detailed analysis of how different seed lengths
affect the speed, the memory usage and the error-tolerance of a mapper. We
propose a new metric, ``mapping cost'', to estimate the amount of computation
that different seed lengths incur. We observe that a few expensive short seeds,
i.e., seeds that are very frequent in the reference genome, are responsible for
the large mapping cost of short seeds. We then demonstrate that by extending
only the expensive short seeds into longer seeds, each extended just until it
becomes cheap, the mapper achieves the maximum reduction in mapping cost
at the minimum increase in memory overhead. Hence, we conclude that it is optimal
to assign different lengths to different seeds according to their frequencies in the
reference genome in order to achieve high mapping speed, low memory
overhead and high error-tolerance at the same time. We call this concept
``Heterogeneous Seeds''. We also propose a new data structure, the ``Heterogeneous
Lookup Table'', and a novel seed dividing algorithm, ``Jigsaw Seeds and
Overlapping Seeds'', to implement the ``Heterogeneous Seeds'' concept.

%In the following sections, we first analyze the effects of different seed lengths and
%describe the metric ``mapping cost" in the ``Dilemma: Long Seeds or Short Seeds"
%section. We briefly summarize previous works in ``Related Works"
%section (Appendix). We demonstrate the need for ``Heterogeneous Seeds" and
%describe the ``Heterogeneous Lookup Table" and ``Jigsaw Seeds and Overlapping
%Seeds" concepts in the ``Methods" section. Finally, we present our results in
%``Results" section and conclude in ``Conclusion" section. 
Our experimental
evaluations show that using heterogeneous seeds, we can approximate the benefits
of both short seeds and long seeds: 1) low mapping cost (similar to long seeds),
2) high error-tolerance (similar to short seeds), and 3) low storage cost
(similar to short seeds).

