/**
 * <p>Sequence Alignment.</p>
 *
 * <p>The sequence alignment problem is to measure the similarity between sequences. Needleman and Wunsch introduced
 * the idea of scoring matches, mismatches and gaps. For example, given the two sequences ATCTA and ATTTTTA, there are
 * many ways to align them, each with a different score:</p>
 * <pre>
 *     AT-C-TA                      ATC---TA                        ATC--TA
 *     ATTTTTA                      AT-TTTTA                        ATTTTTA
 * </pre>
 *
 * <p>Needleman-Wunsch's algorithm is a simple dynamic programming algorithm with $O(mn)$ time and $O(mn)$ space. Given two
 * sequences $A (a_1a_2...a_m)$ and $B (b_1b_2...b_n)$, let $g$ be the cost of a gap and $s(p,q)$ be the similarity between
 * character $p$ and character $q$. Let $M[i,j]$ be the optimal score of aligning the prefixes $a_1...a_i$ and $b_1...b_j$.</p>
 *
 * $$\table M[0,0] ,=, 0;
 *          M[0,j] ,=, jg;
 *          M[i,0] ,=, ig;
 *          M[i,j] ,=, \max \{ \table M[i-1,j-1] + s(a_i,b_j);
 *                                    M[i-1,j] + g;
 *                                    M[i,j-1] + g$$
 *
 * <p>For example, let $g = -2$, let all matches have the same score 1 (i.e. $s(p,p)=1 ∀ p ∈ Σ$) and all
 * mismatches have the same score -1 (i.e. $s(p,q) = -1 ∀ p≠q$). The alignments of ATCTA and ATTTTTA then score:</p>
 *
 * <pre>
 *     AT-C-TA                      ATC---TA                        ATC--TA
 *     ATTTTTA                      AT-TTTTA                        ATTTTTA
 *     Score: -1                    Score: -4                       Score: -1
 * </pre>
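 *
 * <p>The recurrence above can be sketched directly in Java. This is an illustrative implementation (the class and
 * method names are not part of any published API), assuming the example scoring: match 1, mismatch -1, gap -2.</p>
 *
```java
// Global alignment score via Needleman-Wunsch dynamic programming.
// Scoring assumed from the example: match +1, mismatch -1, gap -2.
public class NeedlemanWunsch {
    static final int GAP = -2;

    static int s(char p, char q) { return p == q ? 1 : -1; }

    public static int score(String a, String b) {
        int m = a.length(), n = b.length();
        int[][] M = new int[m + 1][n + 1];
        for (int i = 1; i <= m; i++) M[i][0] = i * GAP; // prefix of A vs nothing
        for (int j = 1; j <= n; j++) M[0][j] = j * GAP; // prefix of B vs nothing
        for (int i = 1; i <= m; i++) {
            for (int j = 1; j <= n; j++) {
                M[i][j] = Math.max(
                    M[i - 1][j - 1] + s(a.charAt(i - 1), b.charAt(j - 1)), // match/mismatch
                    Math.max(M[i - 1][j] + GAP, M[i][j - 1] + GAP));       // gap in B or in A
            }
        }
        return M[m][n];
    }
}
```
 *
 * <p>With this scoring, <code>score("ATCTA", "ATTTTTA")</code> returns -1, matching the best alignments above.</p>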
 *
 * <p>In reality, different matches may have different scores, and likewise for mismatches. For example,
 * BLOSUM40 provides a score matrix for each pair of 1-letter amino acid codes.</p>
 *
 * <p>The dynamic programming approach is similar to finding the highest-scoring path from node $(0,0)$ to node $(m,n)$
 * in this matrix:</p>
 *
 * <pre>
 *      -   A   T   T   T   T   T   A             -   A   T   T   T   T   T   A
 *    ┌───────────────────────────────┐         ┌───┬───┬───┬───┬───┬───┬───┬───┐
 *  - │ ◦ → ◦ → ◦ → ◦ → ◦ → ◦ → ◦ → ◦ │       - │ 0 │-2 │-4 │-6 │-8 │-10│-12│-14│
 *    │ ↓ ↘ ↓ ↘ ↓ ↘ ↓ ↘ ↓ ↘ ↓ ↘ ↓ ↘ ↓ │         ├───┼───┼───┼───┼───┼───┼───┼───┤
 *  A │ ◦ → ◦ → ◦ → ◦ → ◦ → ◦ → ◦ → ◦ │       A │-2 │ 1 │-1 │   │   │   │   │   │
 *    │ ↓ ↘ ↓ ↘ ↓ ↘ ↓ ↘ ↓ ↘ ↓ ↘ ↓ ↘ ↓ │         ├───┼───┼───┼───┼───┼───┼───┼───┤
 *  T │ ◦ → ◦ → ◦ → ◦ → ◦ → ◦ → ◦ → ◦ │       T │-4 │-1 │ 2 │   │   │   │   │   │
 *    │ ↓ ↘ ↓ ↘ ↓ ↘ ↓ ↘ ↓ ↘ ↓ ↘ ↓ ↘ ↓ │         ├───┼───┼───┼───┼───┼───┼───┼───┤
 *  C │ ◦ → ◦ → ◦ → ◦ → ◦ → ◦ → ◦ → ◦ │       C │-6 │   │   │   │   │   │   │   │
 *    │ ↓ ↘ ↓ ↘ ↓ ↘ ↓ ↘ ↓ ↘ ↓ ↘ ↓ ↘ ↓ │         ├───┼───┼───┼───┼───┼───┼───┼───┤
 *  T │ ◦ → ◦ → ◦ → ◦ → ◦ → ◦ → ◦ → ◦ │       T │-8 │   │   │   │   │   │   │   │
 *    │ ↓ ↘ ↓ ↘ ↓ ↘ ↓ ↘ ↓ ↘ ↓ ↘ ↓ ↘ ↓ │         ├───┼───┼───┼───┼───┼───┼───┼───┤
 *  A │ ◦ → ◦ → ◦ → ◦ → ◦ → ◦ → ◦ → ◦ │       A │-10│   │   │   │   │   │   │   │
 *    └───────────────────────────────┘         └───┴───┴───┴───┴───┴───┴───┴───┘
 * </pre>
 * <p>where the cost of → and ↓ is $g$ and the cost of ↘ is $s(a_i,b_j)$:</p>
 * <pre>
 *                   ┆
 *      (i-1,j-1)
 *          ◎        ◎
 *            ↘      │
 *          s(i,j)   g
 *                ↘  │
 *    ...   ◎──╴g╶─→ ◎(i,j)
 * </pre>
 *
 * <p>Needleman-Wunsch's algorithm finds the best global alignment of two sequences. Smith-Waterman's algorithm instead
 * finds the best local alignment of the two sequences by zeroing out negative scores:</p>
 *
 * $$\table M[0,0] ,=, 0;
 *          M[0,j] ,=, 0;
 *          M[i,0] ,=, 0;
 *          M[i,j] ,=, \max \{ \table 0;
 *                                    M[i-1,j-1] + s(a_ib_j);
 *                                    M[i-1,j] + g;
 *                                    M[i,j-1] + g$$
 *
 * <p>The only differences from before are the zero boundary values and the extra 0 in the max.</p>
 *
 * <p>To find the local alignment, find the cell with the maximum score, then trace back until reaching a zero.</p>
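 *
 * <p>A minimal Smith-Waterman sketch in the same illustrative style (hypothetical class name, same example scoring):
 * the matrix is floored at zero and the answer is the maximum cell anywhere, not the bottom-right corner.</p>
 *
```java
// Local alignment score via Smith-Waterman: like Needleman-Wunsch, but
// negative scores are zeroed and the best cell anywhere is the answer.
// Scoring assumed from the example: match +1, mismatch -1, gap -2.
public class SmithWaterman {
    static final int GAP = -2;

    static int s(char p, char q) { return p == q ? 1 : -1; }

    public static int score(String a, String b) {
        int m = a.length(), n = b.length(), best = 0;
        int[][] M = new int[m + 1][n + 1]; // row 0 and column 0 stay 0
        for (int i = 1; i <= m; i++) {
            for (int j = 1; j <= n; j++) {
                int v = Math.max(0, M[i - 1][j - 1] + s(a.charAt(i - 1), b.charAt(j - 1)));
                v = Math.max(v, Math.max(M[i - 1][j] + GAP, M[i][j - 1] + GAP));
                M[i][j] = v;
                best = Math.max(best, v); // remember the global maximum cell
            }
        }
        return best;
    }
}
```
 *
 * <p>For ATCTA vs ATTTTTA the best local score is 2 (the common prefix AT); for two strings with no common character
 * the local score is 0.</p>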
 *
 * <h3>Gap score</h3>
 * <p>Both Needleman-Wunsch and Smith-Waterman simplify the gap cost: one gap of length 3 costs the same as three gaps
 * of length 1. However, gaps differ in nature; a few long gaps are better than many short gaps, so let $g(k)$ denote
 * the cost of a gap of length $k$. Since scores here are negative penalties, normally $g(k) &gt; kg(1)$, i.e. one gap
 * of length $k$ costs less than $k$ gaps of length 1.</p>
 *
 * <p>There are many gap models studied:</p>
 * <div class="markdown">
 *  |Model             |Function Form      |      Time       |
 *  |:-----------------|:------------------|:---------------:|
 *  |General           |                   |\$O(mn^2+m^2n)\$ |
 *  |Affine            |\$g(k)=α+βk\$      |\$O(mn)\$        |
 *  |Logarithmic       |\$g(k)=α+β\log k \$|\$O(mn)\$?       |
 *  |Concave Downwards |                   |\$O(nm\log n) \$ |
 *  |Piecewise linear  |$s$ linear pieces  |\$O(smn)\$       |
 * </div>
 * <p>Only general and affine models are presented here.</p>
 *
 * <h4>General gap penalty:</h4>
 * <p>In the general case, $g(k)$ can be an arbitrary function. Waterman-Smith-Beyer's algorithm is a modified dynamic
 * programming that handles this general model:</p>
 *
 * <pre>
 *                                                  (i-k,j)
 *                                                      ◎───────┐
 *                                                            g(k)
 *                                                              │
 *                                                      ┆       │
 *                                                              │
 *                                                              │
 *                                                      ◎─────┐ │
 *                                                          g(3)│
 *                                                            │ │
 *                                                            │ │
 *                                                      ◎───┐ │ │
 *                                                        g(2)│ │
 *                                                          │ │ │
 *                                         (i-1,j-1)        │ │ │
 *                                             ◎        ◎   │ │ │
 *                                               ↘    g(1)  │ │ │
 *                                             s(i,j)   │   │ │ │
 *        (i,j-k)                                    ↘  │   │ │ │
 *          ◎      ...       ◎        ◎        ◎╴g(1)─→ ◎ ←─┴←┴←┘
 *          │                │        └─╴g(2)──────────→┤(i,j)
 *          │                └─╴g(3)───────────────────→┤
 *          └──╴g(k)───────────────────────────────────→┘
 * </pre>
 *
 * $$\table M[0,0] ,=, 0;
 *          M[0,j] ,=, g(j);
 *          M[i,0] ,=, g(i);
 *          M[i,j] ,=, \max \{ \table M[i-1,j-1] + s(a_i,b_j);
 *                                    \max ↙{k=1}↖i \{ M[i-k,j] + g(k) \};
 *                                    \max ↙{k=1}↖j \{ M[i,j-k] + g(k) \}$$
 *
 * <p>Another way of solving the above dynamic programming is to use three matrices. The three-matrix approach is later
 * applied to the affine gap penalty in Gotoh's algorithm.</p>
 *
 * <p>Let $M[i,j]$ be the best alignment of $A_i$ and $B_j$ ending with a character-character match or mismatch (i.e.
 * the best score entering the node diagonally from the top-left). Let
 * $H[i,j]$ be the best alignment of $A_i$ and $B_j$ ending with a space in $A$ (i.e. entering the node from the left).
 * Let $V[i,j]$ be the best alignment of $A_i$ and $B_j$ ending with a space in $B$ (i.e. entering the node from above).
 * It follows that the best score at node $(i,j)$ is $\max \{ M[i,j],H[i,j],V[i,j] \}$.</p>
 *
 * $$\table M[i,j] ,=, {s(a_i, b_j) + \max \{ \table M[i-1,j-1];
 *                                                   H[i-1,j-1];
 *                                                   V[i-1,j-1]};
 *          H[i,j] ,=, {\max ↙{k=1}↖j \{\table M[i,j-k] + g(k);
 *                                             V[i,j-k] + g(k)};
 *          V[i,j] ,=, {\max ↙{k=1}↖i \{\table M[i-k,j] + g(k);
 *                                             H[i-k,j] + g(k)}$$
 *
 *
 * <p>The modified dynamic programming for local maximum alignment:</p>
 *
 * $$\table M[0,0] ,=, 0;
 *          M[0,j] ,=, 0;
 *          M[i,0] ,=, 0;
 *          M[i,j] ,=, \max \{ \table 0;
 *                                    M[i-1,j-1] + s(a_i,b_j);
 *                                    \max ↙{k=1}↖i \{ M[i-k,j] + g(k) \};
 *                                    \max ↙{k=1}↖j \{ M[i,j-k] + g(k) \}$$
 *
 *
 * <p>The above algorithm runs in $O(mn^2 + m^2n)$ time and uses $O(mn)$ space.</p>
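 *
 * <p>The Waterman-Smith-Beyer recurrence for the general global model can be sketched as below; the gap function
 * $g(k)$ is passed in as a parameter. Names are illustrative, and the scoring (match 1, mismatch -1) follows the
 * earlier example.</p>
 *
```java
import java.util.function.IntUnaryOperator;

// Global alignment with an arbitrary gap cost function g(k),
// per Waterman-Smith-Beyer: O(mn^2 + m^2 n) time, O(mn) space.
public class WatermanSmithBeyer {
    static int s(char p, char q) { return p == q ? 1 : -1; }

    public static int score(String a, String b, IntUnaryOperator g) {
        int m = a.length(), n = b.length();
        int[][] M = new int[m + 1][n + 1];
        for (int i = 1; i <= m; i++) M[i][0] = g.applyAsInt(i); // one gap of length i
        for (int j = 1; j <= n; j++) M[0][j] = g.applyAsInt(j); // one gap of length j
        for (int i = 1; i <= m; i++) {
            for (int j = 1; j <= n; j++) {
                int v = M[i - 1][j - 1] + s(a.charAt(i - 1), b.charAt(j - 1));
                for (int k = 1; k <= i; k++) // close a vertical gap of any length k
                    v = Math.max(v, M[i - k][j] + g.applyAsInt(k));
                for (int k = 1; k <= j; k++) // close a horizontal gap of any length k
                    v = Math.max(v, M[i][j - k] + g.applyAsInt(k));
                M[i][j] = v;
            }
        }
        return M[m][n];
    }
}
```
 *
 * <p>With a linear gap $g(k) = -2k$ this reduces to Needleman-Wunsch; with an affine $g(k) = -2 - k$ it produces
 * the same optimum as Gotoh's algorithm below.</p>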
 *
 * <h4>Affine gap penalty:</h4>
 *
 * <p>The $O(mn^2 + m^2n)$ running time for the general gap penalty is expensive. By restricting the gap penalty
 * function $g(k)$ to specific forms, the running time can be reduced. The most common gap penalty function is affine:
 * $g(k) = α+βk, α,β &lt; 0$. Basically, α is the penalty for opening a gap and β is the penalty for extending
 * a gap by one more space, so the cost of one gap of length 3 $(α+3β)$ is cheaper than the cost of
 * 3 gaps of length 1 $(3α+3β)$. Gotoh's algorithm uses 3 matrices: $M[i,j]$ is the best alignment of $A_i$ and $B_j$;
 * $H[i,j]$ is the best score entering node $(i,j)$ from the left; $V[i,j]$ is the best score entering from above:</p>
 *
 * $$\table H[i,j] ,=, {\max \{\table H[i,j-1] + β;
 *                                    M[i,j-1] + α + β};
 *          V[i,j] ,=, {\max \{\table V[i-1,j] + β;
 *                                    M[i-1,j] + α + β};
 *          M[i,j] ,=, {\max \{ \table M[i-1,j-1] + s(a_i,b_j);
 *                                    H[i,j];
 *                                    V[i,j]}$$
 *
 *
 * <p>The running time is $O(mn)$ and space is $O(mn)$.</p>
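 *
 * <p>Gotoh's three-matrix recurrence can be sketched as follows (illustrative names; match 1, mismatch -1 as before;
 * a large negative sentinel stands in for -∞ on the boundaries):</p>
 *
```java
// Affine gap alignment per Gotoh: g(k) = alpha + beta*k, O(mn) time.
// H = best score entering from the left, V = from above, M = overall best.
public class Gotoh {
    static final int NEG = Integer.MIN_VALUE / 2; // "-infinity" without overflow

    static int s(char p, char q) { return p == q ? 1 : -1; }

    public static int score(String a, String b, int alpha, int beta) {
        int m = a.length(), n = b.length();
        int[][] M = new int[m + 1][n + 1];
        int[][] H = new int[m + 1][n + 1];
        int[][] V = new int[m + 1][n + 1];
        for (int j = 1; j <= n; j++) { H[0][j] = alpha + beta * j; M[0][j] = H[0][j]; V[0][j] = NEG; }
        for (int i = 1; i <= m; i++) { V[i][0] = alpha + beta * i; M[i][0] = V[i][0]; H[i][0] = NEG; }
        for (int i = 1; i <= m; i++) {
            for (int j = 1; j <= n; j++) {
                H[i][j] = Math.max(H[i][j - 1] + beta,          // extend a horizontal gap
                                   M[i][j - 1] + alpha + beta); // or open a new one
                V[i][j] = Math.max(V[i - 1][j] + beta,          // extend a vertical gap
                                   M[i - 1][j] + alpha + beta); // or open a new one
                M[i][j] = Math.max(M[i - 1][j - 1] + s(a.charAt(i - 1), b.charAt(j - 1)),
                                   Math.max(H[i][j], V[i][j]));
            }
        }
        return M[m][n];
    }
}
```
 *
 * <p>For example, with $α = -2, β = -1$, aligning AAAA with AA uses one gap of length 2: two matches plus $(α + 2β)$
 * gives -2, cheaper than the -4 of two unit gaps.</p>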
 *
 * <h4>Linear space</h4>
 *
 * <p>All of the above algorithms use $O(mn)$ space, which is prohibitive if the sequences are large. Myers and Miller
 * applied Hirschberg's method to reduce the quadratic $O(mn)$ space to $O(m+n)$. Their paper applies the method to
 * Gotoh's algorithm; however, for simplicity, the linear-space method is applied here to Needleman-Wunsch's algorithm.</p>
 *
 * <p>A quick observation is that if we only need the optimal score, without reconstructing the actual alignment,
 * linear space suffices. Recall Needleman-Wunsch's recurrence:</p>
 *
 * $$\table M[0,0] ,=, 0;
 *          M[0,j] ,=, jg;
 *          M[i,0] ,=, ig;
 *          M[i,j] ,=, \max \{ \table M[i-1,j-1] + s(a_i,b_j);
 *                                    M[i-1,j] + g;
 *                                    M[i,j-1] + g$$
 *
 * <p>This dynamic programming approach builds the array $M$ of size $m × n$ to store the optimal scores.
 * Notice that to calculate row $i$ of $M$, we only need the previous row $i-1$ of $M$.
 * Therefore, we only need $2n$ storage to calculate the final optimal score $M[m,n]$.</p>
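 *
 * <p>The two-row observation can be sketched like so (illustrative names, same example scoring): only the previous
 * and current rows are kept, so the space is $O(n)$.</p>
 *
```java
// Needleman-Wunsch score in O(n) space: keep only two rows of the matrix.
// Scoring assumed from the example: match +1, mismatch -1, gap -2.
public class LinearSpaceScore {
    static final int GAP = -2;

    static int s(char p, char q) { return p == q ? 1 : -1; }

    public static int score(String a, String b) {
        int m = a.length(), n = b.length();
        int[] prev = new int[n + 1], cur = new int[n + 1];
        for (int j = 0; j <= n; j++) prev[j] = j * GAP; // row 0
        for (int i = 1; i <= m; i++) {
            cur[0] = i * GAP;
            for (int j = 1; j <= n; j++)
                cur[j] = Math.max(prev[j - 1] + s(a.charAt(i - 1), b.charAt(j - 1)),
                         Math.max(prev[j] + GAP, cur[j - 1] + GAP));
            int[] t = prev; prev = cur; cur = t; // reuse the two row buffers
        }
        return prev[n]; // after the final swap, prev holds row m
    }
}
```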
 *
 * <p>However, two rows are not enough information to trace back the actual alignment.</p>
 *
 * <p>Myers and Miller's method uses divide and conquer to combine the above observation with storing the optimal
 * alignment in linear space:</p>
 *
 * <pre>
 *        -   1   2    ...    k    ...   n-1  n
 *      ┌───┬───┬───┬─     ─┬───┬─     ─┬───┬───┐
 *    - │   │   │   │       │   │       │   │   │
 *      ├───┼───┼───┼─     ─┼───┼─     ─┼───┼───┤
 *    1 │   │   │   │       │   │       │   │   │
 *      ├───┼───┼───┼─     ─┼───┼─     ─┼───┼───┤
 *    2 │   │   │   │       │   │       │   │   │
 *      ├───┼───┼───┼─     ─┼───┼─     ─┼───┼───┤
 *    .
 *    .
 *      ├───┼───┼───┼─     ─┼───┼─     ─┼───┼───┤
 *  m/2 │ ✕ │ ✕ │ ✕ │       │ ✓ │       │ ✕ │ ✕ │
 *      ├───┼───┼───┼─     ─┼───┼─     ─┼───┼───┤
 *    .
 *    .
 *      ├───┼───┼───┼─     ─┼───┼─     ─┼───┼───┤
 *  m-1 │   │   │   │       │   │       │   │   │
 *      ├───┼───┼───┼─     ─┼───┼─     ─┼───┼───┤
 *    m │   │   │   │       │   │       │   │   │
 *      └───┴───┴───┴─     ─┴───┴─     ─┴───┴───┘
 * </pre>
 *
 * <p>$M[i,j]$ is the optimal score of a path from node $(0,0)$ to node $(i,j)$. Let $G[i,j]$ be the optimal score of a
 * path from node $(i,j)$ to node $(m,n)$. Using the above observation, calculate the row $m/2$ values $M[m/2,i], 0≤i≤n$.
 * Using the same strategy on the reverse problem, calculate the row $m/2$ values of $G$: $G[m/2,i], 0≤i≤n$. Let $k$ be
 * the index maximizing $M[m/2,i] + G[m/2,i]$. Then an optimal path from $(0,0)$ to $(m,n)$ must go through $(m/2, k)$.
 * So far, we have used $O(n)$ space and $O(mn)$ time.</p>
 *
 * <p>Now recurse on the two regions $(0,0)-(m/2,k)$ and $(m/2,k)-(m,n)$.</p>
 *
 * <p>In total, we need to use $O(n)$ space to calculate scores and $O(m)$ space to store the optimal alignment nodes.
 * So the total space is $O(m+n)$.</p>
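 *
 * <p>The divide step, computing the crossing column $k$ on row $m/2$, can be sketched as below. The helper computes
 * the last row of the score matrix in linear space; running it on the reversed strings yields the suffix scores
 * $G[m/2,·]$. Names are illustrative and the example scoring is assumed.</p>
 *
```java
// Hirschberg-style divide step applied to Needleman-Wunsch:
// find the column k where an optimal path crosses row m/2.
public class SplitPoint {
    static final int GAP = -2;

    static int s(char p, char q) { return p == q ? 1 : -1; }

    // Last row of the Needleman-Wunsch score matrix for a vs b, in O(n) space.
    static int[] lastRow(String a, String b) {
        int m = a.length(), n = b.length();
        int[] prev = new int[n + 1], cur = new int[n + 1];
        for (int j = 0; j <= n; j++) prev[j] = j * GAP;
        for (int i = 1; i <= m; i++) {
            cur[0] = i * GAP;
            for (int j = 1; j <= n; j++)
                cur[j] = Math.max(prev[j - 1] + s(a.charAt(i - 1), b.charAt(j - 1)),
                         Math.max(prev[j] + GAP, cur[j - 1] + GAP));
            int[] t = prev; prev = cur; cur = t;
        }
        return prev;
    }

    public static int split(String a, String b) {
        int mid = a.length() / 2, n = b.length();
        int[] fwd = lastRow(a.substring(0, mid), b); // M[mid][j]: prefix scores
        int[] bwd = lastRow(new StringBuilder(a.substring(mid)).reverse().toString(),
                            new StringBuilder(b).reverse().toString()); // suffix scores, reversed
        int best = Integer.MIN_VALUE, k = 0;
        for (int j = 0; j <= n; j++) {
            int v = fwd[j] + bwd[n - j]; // G[mid][j] = bwd[n - j]
            if (v > best) { best = v; k = j; }
        }
        return k;
    }
}
```
 *
 * <p>For ATCTA vs ATTTTTA, the first maximizing crossing point on row 2 is column 2, and the maximized sum equals the
 * total optimal score -1.</p>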
 *
 * <h4>Circular sequences</h4>
 *
 * <p>Some sequences in nature, such as plasmid and mitochondrial DNA, are circular.</p>
 *
 * <p>Given two circular sequences $A(a_1a_2...a_m)$ and $B(b_1b_2...b_n)$, the naive algorithm aligns every pair of
 * straightened (rotated) forms of $A$ and $B$. That takes $O(m^2n^2)$ time, because there are $mn$ rotation pairs and
 * each alignment costs $O(mn)$.</p>
 *
 * <p>The running time can easily be reduced to $O(m^2n)$: fix one straightened form of $B$ and align it against all
 * $m$ straightened forms of $A$.</p>
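 *
 * <p>The $O(m^2n)$ approach can be sketched as: fix $B$, rotate $A$, and keep the best global score. Names are
 * illustrative and the example scoring is assumed.</p>
 *
```java
// Cyclic alignment, O(m^2 n): align every rotation of A against one fixed
// straightened form of B and keep the best global score.
public class CircularAlign {
    static final int GAP = -2;

    static int s(char p, char q) { return p == q ? 1 : -1; }

    // Plain Needleman-Wunsch global score (match +1, mismatch -1, gap -2).
    static int nw(String a, String b) {
        int m = a.length(), n = b.length();
        int[][] M = new int[m + 1][n + 1];
        for (int i = 1; i <= m; i++) M[i][0] = i * GAP;
        for (int j = 1; j <= n; j++) M[0][j] = j * GAP;
        for (int i = 1; i <= m; i++)
            for (int j = 1; j <= n; j++)
                M[i][j] = Math.max(M[i - 1][j - 1] + s(a.charAt(i - 1), b.charAt(j - 1)),
                          Math.max(M[i - 1][j] + GAP, M[i][j - 1] + GAP));
        return M[m][n];
    }

    public static int score(String a, String b) {
        int best = Integer.MIN_VALUE;
        for (int r = 0; r < a.length(); r++) // every rotation of A
            best = Math.max(best, nw(a.substring(r) + a.substring(0, r), b));
        return best;
    }
}
```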
 *
 * <p>To improve the running time further, one trick is to align $A$ against $B$ concatenated with itself, so that
 * every straightened form of $B$ appears as a contiguous window of length $n$:</p>
 *
 * <pre>
 *
 *         -   b1   b2   ...     bn   b1   b2   ...    bn
 *     -   ↘                      ↓
 *     a1      →                      →
 *     a2           ↓                      ↓
 *     .                →                     ↘
 *     .                   ↘                    ↘
 *     .                      ↓                    ↘
 *     am                        ↘                     ↘
 * </pre>
 *
 *
 * @author Trung Phan
 */
package net.tp.algo.seqalign;