\section*{Related works}

The seed length dilemma has concerned researchers for a long time. Multiple data
structures and mapping mechanisms have attempted to resolve the dilemma, yet they
all fall short in certain aspects: none provides an efficient data structure
and mechanism which occupies a small amount of memory, provides fast accesses,
generates very few locations to verify, and at the same time tolerates
many potential errors.

\subsection*{mrFAST and FastHASH: use short seeds for higher error tolerance}
mrFAST~\cite{Xin2013} is a typical hash-based mapper which uses short seeds
(11--13 bases) for higher error tolerance. One of mrFAST's unique features is
that it guarantees finding all possible mappings of the read with up to
8\% of the read length in errors, while keeping a relatively small memory
footprint (around 4~GB). As with other short-seed mappers, mrFAST suffers from
verifying too many locations provided by the short seeds, which greatly reduces
the speed of the mapper. FastHASH alleviates the burden of mrFAST by
selecting the seeds with fewer locations when querying the lookup table, in
order to reduce the number of locations to verify. While this technique works
well when the number of errors is small, it does not work as well when the
number of errors is large: as the Pigeonhole Principle suggests, most of the
seeds must then be selected anyway, so the selection of seeds becomes
ineffective.
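The seed-selection idea can be illustrated with a short sketch (the helper names and location counts below are hypothetical, not FastHASH's actual code): since tolerating $e$ errors only requires verifying $e+1$ non-overlapping seeds, the mapper may keep just the $e+1$ seeds with the fewest locations.

```python
# Sketch of seed selection in the FastHASH style (hypothetical helpers).
# By the Pigeonhole Principle, tolerating e errors requires verifying the
# locations of only e + 1 non-overlapping seeds, so the mapper can keep
# the e + 1 cheapest (fewest-location) seeds.

def select_cheapest_seeds(read, seed_len, num_errors, location_count):
    """Pick the e + 1 non-overlapping seeds with the fewest locations.

    location_count: a callable mapping a seed string to its number of
    occurrences in the reference (i.e., a lookup-table query).
    """
    # Divide the read into consecutive non-overlapping seeds.
    seeds = [read[i:i + seed_len]
             for i in range(0, len(read) - seed_len + 1, seed_len)]
    # Rank seeds by how many candidate locations each would generate.
    ranked = sorted(seeds, key=location_count)
    # Keep only the cheapest e + 1 seeds for verification.
    return ranked[:num_errors + 1]

# Mock location counts for an 8-seed read. With e = 2, only 3 seeds are
# verified; with e = 6, 7 of the 8 seeds must be kept, so selection
# barely helps -- the limitation described above.
counts = {"AAAA": 5, "CCCC": 900, "GGGG": 2, "TTTT": 40,
          "ACGT": 7, "TGCA": 300, "GATC": 1, "CTAG": 60}
read = "AAAACCCCGGGGTTTTACGTTGCAGATCCTAG"
few = select_cheapest_seeds(read, 4, 2, lambda s: counts[s])
many = select_cheapest_seeds(read, 4, 6, lambda s: counts[s])
```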

\subsection*{BWA: use long seeds for fast perfect mapping}

As its name suggests, BWA~\cite{Li2010a} uses the Burrows-Wheeler Transform
and is a typical BWT-FM mapper. Like all BWT-FM mappers, BWA uses long
seeds (as long as the entire read), has a relatively small memory footprint
(4~GB), and finds perfect mappings very quickly. If there are errors in the
read, BWA tries to fix them by artificially altering the read base by base
in a brute-force manner until it finds an acceptable mapping. Although BWA
finds a perfect mapping very quickly, when it is configured to find all possible
mappings of the read it slows down exponentially, because it has to
search for all possible alterations of the read.
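The combinatorics behind this slowdown can be made concrete with a simplified sketch (this is an illustration of the blow-up, not BWA's actual backward-search code): to catch up to $e$ mismatches, the search must effectively cover every read obtained by substituting up to $e$ bases, and the number of such reads is $\sum_k \binom{L}{k} 3^k$.

```python
# Illustration (not BWA's code) of why error-tolerant search blows up:
# enumerate every read reachable by substituting up to max_errors bases.
# The count grows as sum over k of C(L, k) * 3^k.

from itertools import combinations, product

BASES = "ACGT"

def altered_reads(read, max_errors):
    """Enumerate all reads obtained by substituting up to max_errors bases."""
    results = {read}
    for k in range(1, max_errors + 1):
        for positions in combinations(range(len(read)), k):
            # Each chosen position may become any of the 3 other bases.
            choices = [[b for b in BASES if b != read[p]] for p in positions]
            for subs in product(*choices):
                altered = list(read)
                for p, b in zip(positions, subs):
                    altered[p] = b
                results.add("".join(altered))
    return results

# Even a 10-base read with 2 allowed mismatches already yields
# 1 + 10*3 + C(10,2)*3^2 = 436 candidate reads to search for.
n = len(altered_reads("ACGTACGTAC", 2))
```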

\subsection*{SNAP: use longer seeds for faster lookup}

SNAP~\cite{Matei2011} is another hash-based mapper, one which uses longer seeds
(20 bases). Unlike BWA, SNAP does not use BWT-FM or any complex lookup
mechanism; rather, it uses a ``sparse'' permutation array for fast access, with
a hashing function which reduces the number of ``empty seeds'' in the table,
saving memory space. In addition, SNAP filters out any seed having more than
one location, which further simplifies the data structure from a ``seed
$\rightarrow$ locations'' map to a ``location[seed]'' array. With this simple
lookup function and the long seeds, SNAP achieves a speed much faster than BWA.
Nonetheless, the fast speed of SNAP comes at a price. First, SNAP still
consumes a lot of main memory: to map against the human reference genome, SNAP
requires 64~GB of main memory. Second, the long seeds and the heuristic
filtering reduce the error tolerance of SNAP and void the guarantee of finding
all possible mappings provided by the Pigeonhole Principle.
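The effect of SNAP's filtering can be sketched as follows (a hypothetical dictionary-based toy, not SNAP's permutation array): seeds seen at more than one reference location are dropped, so each surviving seed maps to exactly one location.

```python
# Sketch of SNAP-style filtering (hypothetical, not SNAP's actual code):
# seeds occurring at more than one reference location are filtered out,
# collapsing a "seed -> list of locations" map into a
# "seed -> single location" table.

def build_unique_seed_table(reference, seed_len):
    """Map each seed to its location, dropping seeds seen more than once."""
    table = {}
    multi = set()  # seeds already seen at two or more locations
    for pos in range(len(reference) - seed_len + 1):
        seed = reference[pos:pos + seed_len]
        if seed in multi:
            continue
        if seed in table:
            # Second occurrence: filter the seed out entirely.
            del table[seed]
            multi.add(seed)
        else:
            table[seed] = pos
    return table

# "ACGT" occurs at positions 0 and 4, so it is dropped; every remaining
# seed keeps its single location.
table = build_unique_seed_table("ACGTACGTTT", 4)
```

Note how the filtering loses information: a read whose only informative seed is a repeated one (like ``ACGT'' above) can no longer be mapped through the table, which is exactly why the Pigeonhole guarantee is forfeited.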

\subsection*{Hobbes: find the best seed placement}

Instead of changing the seed length, Hobbes~\cite{hobbes} proposes to search
for the best seed placement, the one which yields the fewest locations. By the
Pigeonhole Principle, to tolerate $e$ errors, a mapper only needs to verify the
locations of $e+1$ non-overlapping seeds. The only requirement on the placement
of the seeds is that they do not overlap each other; there is no requirement on
where the seeds should be located in the read. In fact, they can be anywhere:
contiguous or scattered. Hobbes takes advantage of this property, listing all
possible seed placements and selecting the placement that yields the fewest
locations to verify. However, the computation to evaluate all seed placements
is non-trivial, since there are roughly $O\bigl(\binom{L}{e+1}\bigr)$ possible
placements (where $L$ is the length of the read and $e$ is the number of
allowed errors). Furthermore, in order to calculate the number of locations
provided by all potential placements, Hobbes needs to query the lookup table
many times to get the number of locations for every possible seed. Last but not
least, in some cases a read contains a long repeating segment (more than 20
bases) which appears in many places in the genome. If that is the case, merely
changing the placement of the seeds does not help much, as the numbers of
locations of two close-by seeds may be very similar.
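The placement search can be sketched with a brute-force toy (hypothetical helper names and mock location counts; Hobbes uses a more engineered search): enumerate every set of $e+1$ non-overlapping contiguous seeds and keep the cheapest one.

```python
# Sketch of Hobbes-style placement search (hypothetical, brute force):
# enumerate every placement of e + 1 non-overlapping contiguous seeds and
# keep the placement whose seeds yield the fewest total locations.

from itertools import combinations

def best_placement(read, seed_len, num_errors, location_count):
    """Return the start offsets of the cheapest non-overlapping placement."""
    k = num_errors + 1
    starts = range(len(read) - seed_len + 1)
    best, best_cost = None, float("inf")
    # Roughly O(C(L, e+1)) candidate placements are examined.
    for placement in combinations(starts, k):
        # Seeds must not overlap: consecutive starts >= seed_len apart.
        if any(b - a < seed_len for a, b in zip(placement, placement[1:])):
            continue
        # Each seed costs one lookup-table query per candidate placement.
        cost = sum(location_count(read[s:s + seed_len]) for s in placement)
        if cost < best_cost:
            best, best_cost = placement, cost
    return best, best_cost

# Mock location counts for the 3-base seeds of a 9-base read, e = 1.
counts = {"AAA": 50, "AAC": 5, "ACC": 8, "CCC": 100,
          "CCG": 2, "CGG": 9, "GGG": 70}
best, best_cost = best_placement("AAACCCGGG", 3, 1, lambda s: counts[s])
```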

\subsection*{PatternHunter: mimic long seeds---redistribute the locations among the seeds for fewer verifications}

So far we have assumed that seeds consist of contiguous bases (e.g.,
ACTCATTACATC). However, this is not a requirement---seeds can also be made up
of intermittent bases (e.g., A\_T\_AT\_A\_AT\_C\_TACG from the contiguous
sequence ACTCATTACATCCATACG). As long as seeds do not overlap or interleave
each other, we can still apply the Pigeonhole Principle to intermittent seeds
(if two intermittent seeds interleave without overlapping, a single
insertion or deletion can destroy both of them, violating the ``one error
destroys one seed'' principle). Essentially, seeds with intermittent bases can
be thought of as using fewer bases (as few as a short seed) to mimic a long
seed, because they are treated exactly like long seeds throughout the mapping
process except that they occupy less memory space. For example, during read
division, a read is first divided into ``long seeds''; afterwards, only when
the mapper uses a ``long seed'' to query the lookup table does it extract the
intermittent bases from the ``long seed'' (ACTCATTACATCCATACG) to form the
intermittent seed (A\_T\_AT\_A\_AT\_C\_TACG).

Intermittent seeds provide many benefits. One of the main benefits is a more
balanced location distribution among all seeds. With the freedom to select
which bases to take from the long seed, PatternHunter finds the pattern
which redistributes the locations most evenly among the seeds (e.g., for the
human genome, the pattern for taking 12 bases from a 20-base long seed would be
11101001010011011101, where ``1'' means the base is taken and ``0'' means it is
skipped). Compared to short seeds, PatternHunter does not increase the memory
size, as it uses as many bases as the short seeds to form the permutation
array.
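The extraction step is mechanical; the sketch below applies the 20-base pattern quoted above to a long seed (the function name and the sample seed are illustrative, not PatternHunter's code).

```python
# Sketch of spaced-seed extraction in the PatternHunter style: a "1" in the
# pattern means the base is taken, a "0" means it is skipped. The pattern
# below is the 12-of-20 pattern quoted in the text for the human genome.

PATTERN = "11101001010011011101"

def extract_spaced_seed(long_seed, pattern=PATTERN):
    """Keep only the bases at the pattern's '1' positions."""
    assert len(long_seed) == len(pattern)
    return "".join(base for base, p in zip(long_seed, pattern) if p == "1")

# A 20-base "long seed" collapses to the 12 bases used to index the table.
spaced = extract_spaced_seed("ACGTACGTACGTACGTACGT")
```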

Still, PatternHunter has several limitations. First of all, PatternHunter
divides the read into long seeds, which implies that its error tolerance will
be weaker than that of short-seed mappers. More importantly, its power of
redistribution is limited. PatternHunter only provides a universal pattern that
is applied to all of the seeds. However, based on our observation, the seeds
that contribute the most to the mapping cost are the expensive seeds.
PatternHunter does not focus on reducing the mapping cost of the expensive
seeds; rather, it selects the ``best pattern'' which reduces the total mapping
cost over all seeds, in the hope that this will also reduce the mapping cost of
the expensive seeds, which is not guaranteed. Sometimes a pattern that reduces
the mapping cost of some expensive seeds will increase the mapping cost of
other expensive seeds, limiting its effectiveness.
