{
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "# Exploring Rabin-Karp-style Min-Hash Fingerprinting\n",
        "\n",
        "This notebook compares the numeric types one can use to implement a Rabin-Karp-style min-hash fingerprinting algorithm.\n",
        "It addresses two questions:\n",
        "\n",
        "- How can floating-point numbers be used for a traditionally integer-based task like hashing?\n",
        "- How should many such hash functions be composed to maximize the quality of the fingerprints?"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## Rabin-Karp Rolling Hashing\n",
        "\n",
        "The Rabin-Karp algorithm is a polynomial rolling hash built around modular arithmetic.\n",
        "As the hashing window rolls forward, the leftmost character is dropped and a new rightmost character is added.\n",
        "Thus, given the previous window's hash, computing each slice's hash costs only $O(1)$.\n",
        "\n",
        "Since many such rolling hashes will be used later, the algorithm takes a few parameters:\n",
        "- `window_width` - the length of the substring to hash;\n",
        "- `multiplier` - the multiplier for the polynomial hash;\n",
        "- `modulo` - the modulo to use for the hash, generally a prime;\n",
        "- `alphabet_size` - the size of the alphabet used in the string, e.g. 256 for ASCII;\n",
        "- `salt` - an optional salt to add to each character's ordinal value, usually 1 to avoid adding zeroes;\n",
        "- `seed` - an optional seed for the first hash, can be 0."
      ]
    },
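    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The recurrence behind the rolling update is worth verifying with plain integer arithmetic before building the full generator. In this sketch (the helper names are illustrative, not part of the API below), hashing `'bcd'` directly must equal the result of rolling the window forward from `'abc'`:\n",
        "\n",
        "```python\n",
        "# Rolling-hash recurrence, checked with plain Python ints:\n",
        "# h(s) = sum((ord(c) + salt) * m**(w-1-i)) mod p\n",
        "m, p, salt, w = 31, 1_000_000_007, 1, 3\n",
        "\n",
        "def direct_hash(s: str) -> int:\n",
        "    h = 0\n",
        "    for c in s:\n",
        "        h = (h * m + (ord(c) + salt)) % p\n",
        "    return h\n",
        "\n",
        "h_abc = direct_hash('abc')\n",
        "# Drop 'a' (weighted by m**(w-1)), then shift and append 'd'\n",
        "h_rolled = ((h_abc - (ord('a') + salt) * pow(m, w - 1, p)) * m + (ord('d') + salt)) % p\n",
        "assert h_rolled == direct_hash('bcd')\n",
        "```"
      ]
    },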
    {
      "cell_type": "code",
      "execution_count": 1,
      "metadata": {},
      "outputs": [],
      "source": [
        "from typing import Generator\n",
        "\n",
        "\n",
        "def rabin_karp_ints(\n",
        "    s: str,\n",
        "    window_width: int,\n",
        "    multiplier: int,\n",
        "    modulo: int,\n",
        "    alphabet_size: int = 256,\n",
        "    salt: int = 1,\n",
        ") -> Generator[int, None, None]:\n",
        "    \"\"\"Return the rolling polynomial hashes of every length-`window_width` substring of `s`\"\"\"\n",
        "\n",
        "    assert window_width > 0, \"Window width must be positive\"\n",
        "    assert multiplier > 0, \"Multiplier must be positive\"\n",
        "    assert modulo > 0, \"Modulo must be positive\"\n",
        "    assert multiplier < modulo, \"Multiplier must be less than modulo\"\n",
        "\n",
        "    if len(s) < window_width:\n",
        "        return\n",
        "\n",
        "    current_hash: int = 0\n",
        "    for char in s[:window_width]:\n",
        "        new_term = ord(char) + salt\n",
        "        assert new_term < (alphabet_size + salt), \"Pass correct `alphabet_size`\"\n",
        "        current_hash = (current_hash * multiplier + new_term) % modulo\n",
        "    yield current_hash\n",
        "\n",
        "    discarding_multiplier: int = pow(multiplier, window_width - 1, modulo)\n",
        "    total_hashes = len(s) - window_width + 1\n",
        "    for i in range(1, total_hashes):  # First hash is already yielded\n",
        "        old_term = ord(s[i - 1]) + salt\n",
        "        new_term = ord(s[i + window_width - 1]) + salt\n",
        "\n",
        "        # Remove the leftmost char's contribution, then shift and add the new rightmost one.\n",
        "        # Python's unbounded integers can't overflow, so in this draft we simply\n",
        "        # reduce modulo `modulo` after each step.\n",
        "        current_hash = (current_hash - old_term * discarding_multiplier) % modulo\n",
        "        current_hash = (current_hash * multiplier + new_term) % modulo\n",
        "        yield current_hash\n",
        "\n",
        "\n",
        "# Quick sanity-check\n",
        "assert list(rabin_karp_ints(\"abcd\", 3, 31, 1_000_000_007)) == [\n",
        "    next(rabin_karp_ints(\"abc\", 3, 31, 1_000_000_007)),\n",
        "    next(rabin_karp_ints(\"bcd\", 3, 31, 1_000_000_007)),\n",
        "]\n",
        "assert list(rabin_karp_ints(\"abcdefdhijklmnopqr\", 17, 31, 65521)) == [\n",
        "    next(rabin_karp_ints(\"abcdefdhijklmnopq\", 17, 31, 65521)),\n",
        "    next(rabin_karp_ints(\"bcdefdhijklmnopqr\", 17, 31, 65521)),\n",
        "]"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## Rabin-Karp Rolling Hashing via Floats\n",
        "\n",
        "Python's `int` type is unbounded, so it can implement the Rabin-Karp rolling hash without any overflow concerns.\n",
        "It is, however, expensive to operate on and leaves no room for the optimizations we want to explore.\n",
        "The `float` type, on the other hand, is a double-precision IEEE 754 number, which represents every 52-bit integer exactly!\n",
        "Thus, we can convert our arithmetic to `float`s, provided we guarantee that no intermediate result exceeds that limit."
      ]
    },
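    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The exactness boundary is easy to check empirically: IEEE 754 doubles represent every integer up to $2^{53}$ exactly, and right above that, consecutive integers start collapsing onto the same value. A quick sketch:\n",
        "\n",
        "```python\n",
        "# Every integer up to 2**53 has an exact double representation...\n",
        "assert float(2**53) == 2**53\n",
        "assert float(2**52 - 1) == 2**52 - 1  # `LARGEST_INTEGRAL_FLOAT` is safely inside\n",
        "# ...but beyond that, consecutive integers collide\n",
        "assert float(2**53 + 1) == float(2**53)\n",
        "```"
      ]
    },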
    {
      "cell_type": "code",
      "execution_count": 2,
      "metadata": {},
      "outputs": [],
      "source": [
        "from typing import Generator\n",
        "\n",
        "LARGEST_INTEGRAL_FLOAT: float = 4503599627370495.0\n",
        "\n",
        "\n",
        "def rabin_karp_floats(\n",
        "    s: str,\n",
        "    window_width: int,\n",
        "    multiplier: int,\n",
        "    modulo: int,\n",
        "    alphabet_size: int = 256,\n",
        "    salt: int = 1,\n",
        ") -> Generator[int, None, None]:\n",
        "    \"\"\"Return the rolling polynomial hashes of every length-`window_width` substring of `s`\"\"\"\n",
        "\n",
        "    assert window_width > 0, \"Window width must be positive\"\n",
        "    assert multiplier > 0, \"Multiplier must be positive\"\n",
        "    assert modulo > 0, \"Modulo must be positive\"\n",
        "    assert multiplier < modulo, \"Multiplier must be less than modulo\"\n",
        "\n",
        "    if len(s) < window_width:\n",
        "        return\n",
        "\n",
        "    multiplier = float(multiplier)\n",
        "    modulo = float(modulo)\n",
        "    assert (\n",
        "        modulo < LARGEST_INTEGRAL_FLOAT\n",
        "    ), \"Modulo can't exceed the largest integral float value\"\n",
        "\n",
        "    # Ensure we won't overflow the floating-point representation\n",
        "    largest_post_modulo = modulo - 1\n",
        "    max_possible_term = alphabet_size - 1 + salt\n",
        "    assert (\n",
        "        largest_post_modulo * multiplier + max_possible_term <= LARGEST_INTEGRAL_FLOAT\n",
        "    ), \"Will overflow\"\n",
        "\n",
        "    # All of the operations will happen with a modulo:\n",
        "    def mul_mod(a: float, b: float) -> float:\n",
        "        return (a * b) % modulo\n",
        "\n",
        "    def add_mod(a: float, b: float) -> float:\n",
        "        return (a + b) % modulo\n",
        "\n",
        "    def sub_mod(a: float, b: float) -> float:\n",
        "        return (a - b) % modulo\n",
        "\n",
        "    # Precompute the discarding multiplier\n",
        "    discarding_multiplier: float = 1.0\n",
        "    for _ in range(window_width - 1):\n",
        "        discarding_multiplier = mul_mod(discarding_multiplier, multiplier)\n",
        "\n",
        "    # Handle the first window - without dropping any characters\n",
        "    current_hash: float = 0.0\n",
        "    for char in s[:window_width]:\n",
        "        new_term = float(ord(char) + salt)\n",
        "        assert new_term < (alphabet_size + salt), \"Pass correct `alphabet_size`\"\n",
        "        current_hash = add_mod(mul_mod(current_hash, multiplier), new_term)\n",
        "    yield int(current_hash)\n",
        "\n",
        "    # Roll through the rest of the string\n",
        "    total_hashes = len(s) - window_width + 1\n",
        "    for i in range(1, total_hashes):  # First hash is already yielded\n",
        "        old_term = float(ord(s[i - 1]) + salt)\n",
        "        new_term = float(ord(s[i + window_width - 1]) + salt)\n",
        "\n",
        "        # Remove leftmost char and add the new rightmost one.\n",
        "        current_hash = sub_mod(current_hash, mul_mod(old_term, discarding_multiplier))\n",
        "        current_hash = add_mod(mul_mod(current_hash, multiplier), new_term)\n",
        "        yield int(current_hash)\n",
        "\n",
        "\n",
        "# Quick sanity-check\n",
        "assert list(rabin_karp_floats(\"abcd\", 3, 31, 1_000_000_007)) == [\n",
        "    next(rabin_karp_floats(\"abc\", 3, 31, 1_000_000_007)),\n",
        "    next(rabin_karp_floats(\"bcd\", 3, 31, 1_000_000_007)),\n",
        "]\n",
        "assert list(rabin_karp_floats(\"abcdefdhijklmnopqr\", 17, 31, 65521)) == [\n",
        "    next(rabin_karp_floats(\"abcdefdhijklmnopq\", 17, 31, 65521)),\n",
        "    next(rabin_karp_floats(\"bcdefdhijklmnopqr\", 17, 31, 65521)),\n",
        "]"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Let's load some data and ensure that the outputs are identical between the `int` and `float` implementations."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 3,
      "metadata": {},
      "outputs": [],
      "source": [
        "from pathlib import Path\n",
        "\n",
        "dataset_directory = Path(\"..\")"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 4,
      "metadata": {},
      "outputs": [],
      "source": [
        "textual_dataset_path = dataset_directory / \"leipzig1M.txt\"\n",
        "textual_dataset = open(textual_dataset_path, \"r\").read().strip()"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 5,
      "metadata": {},
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "Loaded 1,000,000 lines of mean length 128.64 characters\n"
          ]
        }
      ],
      "source": [
        "textual_lines = textual_dataset.split(\"\\n\")\n",
        "print(\n",
        "    f\"Loaded {len(textual_lines):,} lines of mean length {sum(len(line) for line in textual_lines) / len(textual_lines):.2f} characters\"\n",
        ")"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 6,
      "metadata": {},
      "outputs": [],
      "source": [
        "def compare_hashes(line, make_baseline_generator, make_test_generator):\n",
        "    int_hashes = list(make_baseline_generator(line))\n",
        "    float_hashes = list(make_test_generator(line))\n",
        "    if int_hashes != float_hashes:\n",
        "        print(f\"Int Hashes:   {int_hashes}\")\n",
        "        print(f\"Float Hashes: {float_hashes}\")\n",
        "\n",
        "\n",
        "for line in textual_lines[:2]:\n",
        "    compare_hashes(\n",
        "        line,\n",
        "        lambda l: rabin_karp_ints(l, 17, 31, 65521),\n",
        "        lambda l: rabin_karp_floats(l, 17, 31, 65521),\n",
        "    )"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "A bigger question: will the same hold if we use much larger modulo values?"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 7,
      "metadata": {},
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "Passed for window width: 3!\n",
            "Passed for window width: 17!\n",
            "Passed for window width: 64!\n"
          ]
        }
      ],
      "source": [
        "LARGEST_SAFE_MODULO = 4503599626977\n",
        "\n",
        "for window_width in [3, 17, 64]:\n",
        "    for line in textual_lines[:50]:\n",
        "        compare_hashes(\n",
        "            line,\n",
        "            lambda l: rabin_karp_ints(\n",
        "                l, window_width=window_width, multiplier=257, modulo=LARGEST_SAFE_MODULO\n",
        "            ),\n",
        "            lambda l: rabin_karp_floats(\n",
        "                l, window_width=window_width, multiplier=257, modulo=LARGEST_SAFE_MODULO\n",
        "            ),\n",
        "        )\n",
        "    print(f\"Passed for window width: {window_width}!\")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## Rabin-Karp Rolling Hashing via FMAs\n",
        "\n",
        "- How aggressively can we use **FMA** (Fused Multiply-Add) operations to optimize the algorithm?\n",
        "- How many of the modulo operations can we avoid?\n",
        "- How can we simplify the `%` modulo operation?"
      ]
    },
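    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Before wiring Barrett reduction into the hash loop, the core identity is worth sketching in isolation: $x \\bmod m = x - \\lfloor x \\cdot m^{-1} \\rfloor \\cdot m$, where the quotient comes from a single multiplication by a precomputed reciprocal and may be off by one due to rounding. A minimal standalone sketch (the names are illustrative):\n",
        "\n",
        "```python\n",
        "import math\n",
        "import random\n",
        "\n",
        "def barrett_mod_sketch(x: float, m: float, inv_m: float) -> float:\n",
        "    q = math.floor(x * inv_m)  # approximate quotient via the reciprocal\n",
        "    r = x - q * m\n",
        "    if r >= m:  # quotient was rounded down one too far\n",
        "        r -= m\n",
        "    elif r < 0:  # quotient was rounded up\n",
        "        r += m\n",
        "    return r\n",
        "\n",
        "m = 65521.0\n",
        "inv_m = 1.0 / m\n",
        "random.seed(0)\n",
        "for _ in range(10_000):\n",
        "    x = float(random.randrange(0, 2**32))\n",
        "    assert barrett_mod_sketch(x, m, inv_m) == x % m\n",
        "```"
      ]
    },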
    {
      "cell_type": "code",
      "execution_count": 8,
      "metadata": {},
      "outputs": [],
      "source": [
        "import math\n",
        "from typing import Generator\n",
        "\n",
        "LARGEST_INTEGRAL_FLOAT: float = 4503599627370495.0\n",
        "\n",
        "\n",
        "def rabin_karp_fma(\n",
        "    s: str,\n",
        "    window_width: int,\n",
        "    multiplier: int,\n",
        "    modulo: int,\n",
        "    alphabet_size: int = 256,\n",
        "    salt: int = 1,\n",
        ") -> Generator[int, None, None]:\n",
        "    \"\"\"Return the rolling polynomial hashes of every length-`window_width` substring of `s`\n",
        "    using Fused-Multiply-Add (FMA) operations & Barrett reduction for performance.\"\"\"\n",
        "\n",
        "    assert window_width > 0, \"Window width must be positive\"\n",
        "    assert multiplier > 0, \"Multiplier must be positive\"\n",
        "    assert modulo > 0, \"Modulo must be positive\"\n",
        "    assert multiplier < modulo, \"Multiplier must be less than modulo\"\n",
        "\n",
        "    if len(s) < window_width:\n",
        "        return\n",
        "\n",
        "    multiplier = float(multiplier)\n",
        "    modulo = float(modulo)\n",
        "    assert (\n",
        "        modulo < LARGEST_INTEGRAL_FLOAT\n",
        "    ), \"Modulo can't exceed the largest integral float value\"\n",
        "\n",
        "    # Ensure we won't overflow the floating-point representation\n",
        "    largest_post_modulo = modulo - 1\n",
        "    max_possible_term = alphabet_size - 1 + salt\n",
        "    assert (\n",
        "        largest_post_modulo * multiplier + max_possible_term <= LARGEST_INTEGRAL_FLOAT\n",
        "    ), \"Will overflow\"\n",
        "\n",
        "    inverse_modulo: float = 1.0 / modulo\n",
        "\n",
        "    # Barrett reduction function\n",
        "    # It will be used to reduce the intermediate results to the modulo range\n",
        "    def barrett_mod(x: float) -> float:\n",
        "        q = math.floor(x * inverse_modulo)\n",
        "        result = x - q * modulo\n",
        "        # Handle potential off-by-one errors\n",
        "        if result >= modulo:\n",
        "            result -= modulo\n",
        "        elif result < 0:\n",
        "            result += modulo\n",
        "        assert int(result) == int(x % modulo), \"Barrett reduction failed\"\n",
        "        return result\n",
        "\n",
        "    # All of the operations will happen with a modulo:\n",
        "    def fma_mod(a: float, b: float, c: float) -> float:\n",
        "        intermediate = a * b + c\n",
        "        assert intermediate <= LARGEST_INTEGRAL_FLOAT, \"FMA did exceed integral range\"\n",
        "        return barrett_mod(intermediate)\n",
        "\n",
        "    # Precompute the discarding multiplier\n",
        "    negative_discarding_multiplier: float = 1.0\n",
        "    for _ in range(window_width - 1):\n",
        "        negative_discarding_multiplier = fma_mod(\n",
        "            negative_discarding_multiplier, multiplier, 0.0\n",
        "        )\n",
        "    negative_discarding_multiplier = (\n",
        "        -negative_discarding_multiplier\n",
        "    )  # Negate for FMA compatibility\n",
        "\n",
        "    # Handle the first window - without dropping any characters\n",
        "    current_hash: float = 0.0\n",
        "    for char in s[:window_width]:\n",
        "        new_term = float(ord(char) + salt)\n",
        "        assert new_term < (alphabet_size + salt), \"Pass correct `alphabet_size`\"\n",
        "        current_hash = fma_mod(current_hash, multiplier, new_term)\n",
        "    yield int(current_hash)\n",
        "\n",
        "    # Roll through the rest of the string\n",
        "    total_hashes = len(s) - window_width + 1\n",
        "    for i in range(1, total_hashes):  # First hash is already yielded\n",
        "        old_term = float(ord(s[i - 1]) + salt)\n",
        "        new_term = float(ord(s[i + window_width - 1]) + salt)\n",
        "\n",
        "        # Remove leftmost char and add the new rightmost one.\n",
        "        current_hash = fma_mod(old_term, negative_discarding_multiplier, current_hash)\n",
        "        assert (\n",
        "            current_hash >= -modulo\n",
        "        ), \"Intermediate hash may be negative, but within modulo range\"\n",
        "        current_hash = fma_mod(current_hash, multiplier, new_term)\n",
        "        assert current_hash >= 0, \"Current hash should not be negative\"\n",
        "        yield int(current_hash)\n",
        "\n",
        "\n",
        "# Quick sanity-check\n",
        "assert list(rabin_karp_fma(\"abcd\", 3, 31, 1_000_000_007)) == [\n",
        "    next(rabin_karp_fma(\"abc\", 3, 31, 1_000_000_007)),\n",
        "    next(rabin_karp_fma(\"bcd\", 3, 31, 1_000_000_007)),\n",
        "]\n",
        "assert list(rabin_karp_fma(\"abcdefdhijklmnopqr\", 17, 31, 65521)) == [\n",
        "    next(rabin_karp_fma(\"abcdefdhijklmnopq\", 17, 31, 65521)),\n",
        "    next(rabin_karp_fma(\"bcdefdhijklmnopqr\", 17, 31, 65521)),\n",
        "]"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 9,
      "metadata": {},
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "Passed for window width: 3!\n",
            "Passed for window width: 17!\n",
            "Passed for window width: 64!\n"
          ]
        }
      ],
      "source": [
        "LARGEST_SAFE_MODULO = 4503599626977\n",
        "\n",
        "for window_width in [3, 17, 64]:\n",
        "    for line in textual_lines[:50]:\n",
        "        compare_hashes(\n",
        "            line,\n",
        "            lambda l: rabin_karp_ints(\n",
        "                l, window_width=window_width, multiplier=257, modulo=LARGEST_SAFE_MODULO\n",
        "            ),\n",
        "            lambda l: rabin_karp_fma(\n",
        "                l, window_width=window_width, multiplier=257, modulo=LARGEST_SAFE_MODULO\n",
        "            ),\n",
        "        )\n",
        "    print(f\"Passed for window width: {window_width}!\")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Since typical texts are handled correctly, let's try several tricky inputs that sit on the brink of an overflow. Some uncomfortable character values are: `\\x00`, `\\x01`, `\\x7F`, `\\xFF`. To really stress-test, let's pick the largest prime below `LARGEST_INTEGRAL_FLOAT` that can be used safely for a given alphabet size."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 10,
      "metadata": {},
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "4,503,599,627,370,449\n"
          ]
        }
      ],
      "source": [
        "from typing import Final, List, Generator\n",
        "\n",
        "# Fixed witnesses that make Miller-Rabin exact for n < 2**64\n",
        "MR_BASES: Final[List[int]] = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37]\n",
        "\n",
        "\n",
        "def _is_prime_64(n: int) -> bool:\n",
        "    \"\"\"Exact primality for 0 < n < 2**64.\"\"\"\n",
        "    if n < 2:\n",
        "        return False\n",
        "    # Quick reject: small prime factors\n",
        "    for p in MR_BASES:  # covers all primes ≤ 37\n",
        "        if n == p:\n",
        "            return True\n",
        "        if n % p == 0:\n",
        "            return False\n",
        "\n",
        "    # Write n-1 = d · 2ˢ  with d odd\n",
        "    d, s = n - 1, 0\n",
        "    while d & 1 == 0:\n",
        "        d >>= 1\n",
        "        s += 1\n",
        "\n",
        "    # Strong-probable-prime test for each base\n",
        "    for a in MR_BASES:\n",
        "        x = pow(a, d, n)\n",
        "        if x in (1, n - 1):  # self-loop or −1 ⇒ may be prime\n",
        "            continue\n",
        "        for _ in range(s - 1):  # square until −1 or cycle\n",
        "            x = pow(x, 2, n)\n",
        "            if x == n - 1:\n",
        "                break\n",
        "        else:  # never hit −1 ⇒ composite\n",
        "            return False\n",
        "    return True\n",
        "\n",
        "\n",
        "def prev_primes(n: int) -> Generator[int, None, None]:\n",
        "    \"\"\"\n",
        "    Yield primes strictly less than n, in descending order (n must be > 2).\n",
        "    Expected candidates per prime: ~log n, since the average prime gap is ~log n.\n",
        "    \"\"\"\n",
        "    if n <= 2:\n",
        "        raise ValueError(\"Threshold must exceed 2.\")\n",
        "    n -= n % 2 == 0  # make n odd\n",
        "    while n > 2:\n",
        "        if _is_prime_64(n):\n",
        "            yield n\n",
        "        n -= 2\n",
        "\n",
        "\n",
        "def next_primes(n: int) -> Generator[int, None, None]:\n",
        "    \"\"\"\n",
        "    Yield primes strictly greater than n, in ascending order (n must be > 2).\n",
        "    Expected candidates per prime: ~log n, since the average prime gap is ~log n.\n",
        "    \"\"\"\n",
        "    if n <= 2:\n",
        "        raise ValueError(\"Threshold must exceed 2.\")\n",
        "    n += n % 2 == 0  # make n odd\n",
        "    while True:\n",
        "        if _is_prime_64(n):\n",
        "            yield n\n",
        "        n += 2\n",
        "\n",
        "\n",
        "LARGEST_INTEGRAL_FLOAT_PRIME = next(prev_primes(int(LARGEST_INTEGRAL_FLOAT)))\n",
        "print(f\"{LARGEST_INTEGRAL_FLOAT_PRIME:,}\")  # This will be used for stress-testing"
      ]
    },
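    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "As a cross-check, the Miller-Rabin-based search should agree with naive trial division on small inputs. A self-contained sketch (redefining a tiny checker instead of reusing the helpers above):\n",
        "\n",
        "```python\n",
        "def naive_is_prime(n: int) -> bool:\n",
        "    if n < 2:\n",
        "        return False\n",
        "    f = 2\n",
        "    while f * f <= n:\n",
        "        if n % f == 0:\n",
        "            return False\n",
        "        f += 1\n",
        "    return True\n",
        "\n",
        "def naive_prev_prime(n: int) -> int:\n",
        "    # Largest prime strictly below n (assumes n > 2)\n",
        "    k = n - 1\n",
        "    while not naive_is_prime(k):\n",
        "        k -= 1\n",
        "    return k\n",
        "\n",
        "# Should match `next(prev_primes(...))` for the same thresholds\n",
        "assert naive_prev_prime(10) == 7\n",
        "assert naive_prev_prime(100) == 97\n",
        "assert naive_prev_prime(65522) == 65521  # the modulo used earlier is prime\n",
        "```"
      ]
    },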
    {
      "cell_type": "code",
      "execution_count": 11,
      "metadata": {},
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "Passed for window width: 3, modulo: 17,523,733,958,369!\n",
            "Passed for window width: 17, modulo: 17,523,733,958,369!\n",
            "Passed for window width: 64, modulo: 17,523,733,958,369!\n",
            "Passed for window width: 707, modulo: 17,523,733,958,369!\n"
          ]
        }
      ],
      "source": [
        "import random\n",
        "\n",
        "all_0 = \"\\x00\" * 1_000\n",
        "all_1 = \"\\x01\" * 1_000\n",
        "all_127 = \"\\x7f\" * 1_000\n",
        "all_255 = \"\\xff\" * 1_000\n",
        "all_0_255 = \"\\x00\\xff\" * 500  # alternating 0 and 255 characters\n",
        "all_uncomfortable = \"\\x00\\x01\\x7f\\xfe\\xff\" * 250  # all uncomfortable characters\n",
        "\n",
        "long_random_strings = [\n",
        "    \"\".join(random.choices(\"\\x00\\x01\\x7f\\xfe\\xff\", k=10_000)) for _ in range(10)\n",
        "]  # 10 long random strings with uncomfortable characters\n",
        "\n",
        "alphabet_size = 256\n",
        "multiplier = 257\n",
        "largest_term = alphabet_size + 1  # conservative bound on `ord(c) + salt`; here it equals `multiplier`\n",
        "large_modulo = next(\n",
        "    prev_primes(int(LARGEST_INTEGRAL_FLOAT) // multiplier - largest_term)\n",
        ")\n",
        "\n",
        "for window_width in [3, 17, 64, 707]:\n",
        "    for line in [\n",
        "        all_0,\n",
        "        all_1,\n",
        "        all_127,\n",
        "        all_255,\n",
        "        all_0_255,\n",
        "        all_uncomfortable,\n",
        "        *long_random_strings,\n",
        "    ]:\n",
        "        compare_hashes(\n",
        "            line,\n",
        "            lambda l: rabin_karp_ints(\n",
        "                l,\n",
        "                window_width=window_width,\n",
        "                multiplier=multiplier,\n",
        "                modulo=large_modulo,\n",
        "                alphabet_size=alphabet_size,\n",
        "            ),\n",
        "            lambda l: rabin_karp_fma(\n",
        "                l,\n",
        "                window_width=window_width,\n",
        "                multiplier=multiplier,\n",
        "                modulo=large_modulo,\n",
        "                alphabet_size=alphabet_size,\n",
        "            ),\n",
        "        )\n",
        "    print(f\"Passed for window width: {window_width}, modulo: {large_modulo:,}!\")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## Min-Hash Fingerprinting\n",
        "\n",
        "Min-Hash fingerprints transform variable-length texts into **fixed-length vectors**, where each dimension stores the minimum value of a certain hash function across the whole document.\n",
        "They work well for large-scale information retrieval via Hamming distance or Jaccard similarity ($|A \\cap B| / |A \\cup B|$), or its weighted alternative.\n",
        "\n",
        "A potentially more informative alternative is the \"weighted Min-Hash\", which accounts for the frequency of each element in the document. This makes the fingerprints compatible with **TF-IDF**-like algorithms and makes the system more robust, especially for narrow rolling windows."
      ]
    },
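    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The retrieval claim can be illustrated with a toy estimator: under a random hash function, the probability that two documents share the same minimum equals the Jaccard similarity of their n-gram sets, so the fraction of matching signature components estimates it. A standalone sketch using seeded `blake2b` hashes (illustrative only, separate from the fingerprinting code below):\n",
        "\n",
        "```python\n",
        "from hashlib import blake2b\n",
        "\n",
        "def ngrams(text: str, width: int) -> set:\n",
        "    return {text[i : i + width] for i in range(len(text) - width + 1)}\n",
        "\n",
        "def seeded_hash(gram: str, seed: int) -> int:\n",
        "    # Stable 64-bit hash, keyed per signature dimension\n",
        "    digest = blake2b(gram.encode(), digest_size=8, key=seed.to_bytes(8, 'big')).digest()\n",
        "    return int.from_bytes(digest, 'big')\n",
        "\n",
        "def minhash_signature(text: str, width: int = 3, dims: int = 128) -> list:\n",
        "    grams = ngrams(text, width)\n",
        "    return [min(seeded_hash(g, seed) for g in grams) for seed in range(dims)]\n",
        "\n",
        "a = 'the quick brown fox jumps over the lazy dog'\n",
        "b = 'the quick brown fox jumped over the lazy dog'\n",
        "sig_a, sig_b = minhash_signature(a), minhash_signature(b)\n",
        "estimate = sum(x == y for x, y in zip(sig_a, sig_b)) / len(sig_a)\n",
        "exact = len(ngrams(a, 3) & ngrams(b, 3)) / len(ngrams(a, 3) | ngrams(b, 3))\n",
        "print(f'exact={exact:.2f} estimated={estimate:.2f}')\n",
        "```"
      ]
    },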
    {
      "cell_type": "code",
      "execution_count": 12,
      "metadata": {},
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "Defaulting to user installation because normal site-packages is not writeable\n",
            "Requirement already satisfied: tqdm in /home/ubuntu/.local/lib/python3.10/site-packages (4.67.1)\n",
            "Requirement already satisfied: numpy in /home/ubuntu/.local/lib/python3.10/site-packages (2.2.4)\n"
          ]
        }
      ],
      "source": [
        "!pip install tqdm numpy"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 13,
      "metadata": {},
      "outputs": [
        {
          "data": {
            "text/plain": [
              "(array([2256051662, 1712240109], dtype=uint32),\n",
              " array([1, 1], dtype=uint32),\n",
              " array(['abc', 'abcd'], dtype=StringDType()))"
            ]
          },
          "execution_count": 13,
          "metadata": {},
          "output_type": "execute_result"
        }
      ],
      "source": [
        "import numpy as np\n",
        "from numpy.dtypes import StringDType\n",
        "from typing import List, Tuple\n",
        "from stringzilla import hash as sz_hash\n",
        "\n",
        "\n",
        "def count_min_sketch(\n",
        "    text: str,\n",
        "    window_widths: List[int],\n",
        "    seeds: List[int],\n",
        "    hash_resolution: np.dtype = np.uint32,\n",
        ") -> Tuple[np.ndarray, np.ndarray, np.ndarray]:\n",
        "    \"\"\"\n",
        "    Produces a weighted Min-Hash fingerprint, also called a Count-Min Sketch.\n",
        "    Uses StringZilla's native hash function, as opposed to a Rabin-Karp rolling hash.\n",
        "\n",
        "    https://en.wikipedia.org/wiki/Count%E2%80%93min_sketch\n",
        "    \"\"\"\n",
        "\n",
        "    fingerprint_hashes = np.empty((len(window_widths),), dtype=hash_resolution)\n",
        "    fingerprint_weights = np.empty((len(window_widths),), dtype=np.uint32)\n",
        "    fingerprint_ngrams = np.empty((len(window_widths),), dtype=StringDType())\n",
        "\n",
        "    skipped_final_hash = np.iinfo(hash_resolution).max\n",
        "    skipped_u64_intermediary = np.iinfo(np.uint64).max\n",
        "\n",
        "    for i, (window_width, seed) in enumerate(zip(window_widths, seeds)):\n",
        "        assert window_width > 0, \"Window width must be positive\"\n",
        "\n",
        "        smallest_hash = skipped_u64_intermediary\n",
        "        smallest_count = 0\n",
        "        smallest_example = None\n",
        "\n",
        "        for j in range(len(text) - window_width + 1):\n",
        "            text_window = text[j : j + window_width]\n",
        "            rolling_intermediate_u64_hash = sz_hash(text_window, seed)\n",
        "            # Compare the rolling hash directly against the current minimum:\n",
        "            # taking `min(...)` first would misclassify larger hashes as ties\n",
        "            if rolling_intermediate_u64_hash < smallest_hash:\n",
        "                smallest_count = 1\n",
        "                smallest_hash = rolling_intermediate_u64_hash\n",
        "                smallest_example = text_window\n",
        "            elif rolling_intermediate_u64_hash == smallest_hash:\n",
        "                smallest_count += 1\n",
        "\n",
        "        smallest_hash &= skipped_final_hash  # Ensure we don't exceed the `uint32` range\n",
        "        fingerprint_hashes[i] = smallest_hash\n",
        "        fingerprint_weights[i] = smallest_count\n",
        "        fingerprint_ngrams[i] = smallest_example\n",
        "\n",
        "    return fingerprint_hashes, fingerprint_weights, fingerprint_ngrams\n",
        "\n",
        "\n",
        "count_min_sketch(\"abcde\", window_widths=[3, 4], seeds=[257, 258])"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 14,
      "metadata": {},
      "outputs": [
        {
          "data": {
            "text/plain": [
              "(array([   6498345, 1706860248], dtype=uint32),\n",
              " array([1, 1], dtype=uint32),\n",
              " array(['abc', 'abcd'], dtype=StringDType()))"
            ]
          },
          "execution_count": 14,
          "metadata": {},
          "output_type": "execute_result"
        }
      ],
      "source": [
        "import numpy as np\n",
        "from numpy.dtypes import StringDType\n",
        "from typing import List, Tuple\n",
        "\n",
        "\n",
        "def rolling_count_min_sketch(\n",
        "    text: str,\n",
        "    window_widths: List[int],\n",
        "    multipliers: List[int],\n",
        "    salts: List[int],\n",
        "    modulo: int,\n",
        "    hash_resolution: np.dtype = np.uint32,\n",
        ") -> Tuple[np.ndarray, np.ndarray, np.ndarray]:\n",
        "    \"\"\"\n",
        "    Produces a weighted Min-Hash fingerprint, also called a Count-Min Sketch.\n",
        "    Uses the Rabin-Karp rolling hash for algorithmic efficiency.\n",
        "\n",
        "    https://en.wikipedia.org/wiki/Count%E2%80%93min_sketch\n",
        "    \"\"\"\n",
        "\n",
        "    count_widths = len(window_widths)\n",
        "    count_multipliers = len(multipliers)\n",
        "    assert count_widths == count_multipliers, f\"{count_widths=} != {count_multipliers=}\"\n",
        "\n",
        "    fingerprint_hashes = np.empty((len(window_widths),), dtype=hash_resolution)\n",
        "    fingerprint_weights = np.empty((len(window_widths),), dtype=np.uint32)\n",
        "    fingerprint_ngrams = np.empty((len(window_widths),), dtype=StringDType())\n",
        "\n",
        "    skipped_final_hash = np.iinfo(hash_resolution).max\n",
        "    skipped_u64_intermediary = np.iinfo(np.uint64).max\n",
        "    hashers = [\n",
        "        rabin_karp_ints(\n",
        "            text,\n",
        "            window_width=width,\n",
        "            multiplier=multiplier,\n",
        "            modulo=modulo,\n",
        "            salt=salt,\n",
        "        )\n",
        "        for width, multiplier, salt in zip(window_widths, multipliers, salts)\n",
        "    ]\n",
        "\n",
        "    for i, hasher in enumerate(hashers):\n",
        "        smallest_hash = skipped_u64_intermediary\n",
        "        smallest_count = 0\n",
        "        smallest_example = None\n",
        "\n",
        "        for j, rolling_intermediate_u64_hash in enumerate(hasher):\n",
        "            if rolling_intermediate_u64_hash < smallest_hash:\n",
        "                smallest_hash = rolling_intermediate_u64_hash\n",
        "                smallest_count = 1\n",
        "                # Extract the N-gram from the position where the minimal hash occurred\n",
        "                smallest_example = text[j : j + window_widths[i]]\n",
        "            elif rolling_intermediate_u64_hash == smallest_hash:\n",
        "                # Only count exact re-occurrences of the minimal hash\n",
        "                smallest_count += 1\n",
        "\n",
        "        smallest_hash &= skipped_final_hash  # Mask down to the `hash_resolution` range\n",
        "        fingerprint_hashes[i] = smallest_hash\n",
        "        fingerprint_weights[i] = smallest_count\n",
        "        fingerprint_ngrams[i] = smallest_example\n",
        "\n",
        "    return fingerprint_hashes, fingerprint_weights, fingerprint_ngrams\n",
        "\n",
        "\n",
        "rolling_count_min_sketch(\n",
        "    \"abcde\",\n",
        "    window_widths=[3, 4],\n",
        "    multipliers=[257, 258],\n",
        "    salts=[1, 2],\n",
        "    modulo=4503599626977,\n",
        ")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "A good set of hyper-parameters for Min-Hashing binary text would be:\n",
        "\n",
        "- `window_widths`: $\\{3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 15, 18, 21, 24, 27, 30\\}$ - 16 widths\n",
        "- `alphabet_size`: $256$ for ASCII & binary UTF-8 content\n",
        "- `ndim`: $16...1024$; something like 192 should work well for short posts, e.g. on X/Twitter\n",
        "- `multipliers`: $\\{257, 258, 259, 260, 261, 262, ..., 1024 + 256\\}$\n",
        "\n",
        "When processing less common inputs, like DNA sequences, the parameters may differ, e.g.:\n",
        "\n",
        "- `window_widths`: $\\{3, 6, 9, 12, 15, 30, 60, 120\\}$\n",
        "- `alphabet_size`: $4$ for DNA sequences\n",
        "- `ndim`: should probably be proportional to $\\sqrt{n}$, where $n$ is the typical length of the sequences\n",
        "- `multipliers`: $\\{5, 6, 7, 8, 9, ..., 4 \\cdot n + 1\\}$\n",
        "\n",
        "In every case, the `modulo` should be co-prime with every multiplier.\n",
        "The easiest option is to use a large prime, which can be obtained via:\n",
        "\n",
        "```python\n",
        "largest_prime_below(int(LARGEST_INTEGRAL_FLOAT) // max(multipliers) - (alphabet_size + 1))\n",
        "```"
      ]
    },
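    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "As a quick sanity check of the modulo selection above: any prime modulo is automatically co-prime with every smaller multiplier.\n",
        "The `largest_prime_below_naive` helper below is a hypothetical trial-division stand-in for the `largest_prime_below` used elsewhere in this notebook:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "import math\n",
        "\n",
        "\n",
        "def largest_prime_below_naive(n: int) -> int:\n",
        "    \"\"\"Trial-division stand-in for `largest_prime_below`; fine for a one-off search.\"\"\"\n",
        "\n",
        "    def is_prime(k: int) -> bool:\n",
        "        return k >= 2 and all(k % d for d in range(2, math.isqrt(k) + 1))\n",
        "\n",
        "    candidate = n - 1\n",
        "    while not is_prime(candidate):\n",
        "        candidate -= 1\n",
        "    return candidate\n",
        "\n",
        "\n",
        "modulo = largest_prime_below_naive(1 << 20)\n",
        "# A prime shares no factors with any smaller multiplier, so every gcd is 1:\n",
        "assert all(math.gcd(modulo, m) == 1 for m in range(257, 1024 + 256 + 1))"
      ]
    },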
    {
      "cell_type": "code",
      "execution_count": 15,
      "metadata": {},
      "outputs": [],
      "source": [
        "import numpy as np\n",
        "from typing import Tuple\n",
        "\n",
        "\n",
        "def jaccard_similarity(a: np.ndarray, b: np.ndarray) -> float:\n",
        "    \"\"\"Estimates Jaccard similarity as the fraction of matching dimensions.\"\"\"\n",
        "    if a.shape != b.shape:\n",
        "        raise ValueError(\"Fingerprints must have identical length\")\n",
        "\n",
        "    return float(np.mean(a == b))\n",
        "\n",
        "\n",
        "def weighted_jaccard_similarity(\n",
        "    a: Tuple[np.ndarray, np.ndarray],\n",
        "    b: Tuple[np.ndarray, np.ndarray],\n",
        ") -> float:\n",
        "    hashes_a, weights_a = a\n",
        "    hashes_b, weights_b = b\n",
        "\n",
        "    if hashes_a.shape != hashes_b.shape or weights_a.shape != weights_b.shape:\n",
        "        raise ValueError(\"Both fingerprints must have identical dimensions\")\n",
        "\n",
        "    magnitude_i = (weights_a * weights_b)[hashes_a == hashes_b].sum()\n",
        "    magnitude_a = (weights_a * weights_a).sum()\n",
        "    magnitude_b = (weights_b * weights_b).sum()\n",
        "    magnitude_u = magnitude_a + magnitude_b - magnitude_i\n",
        "\n",
        "    return float(magnitude_i) / float(magnitude_u)"
      ]
    },
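    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "A quick worked example of both formulas on toy fingerprints; the hashes and weights below are arbitrary, chosen only for illustration:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "import numpy as np\n",
        "\n",
        "# Plain Min-Hash similarity: the fraction of dimensions where the hashes agree.\n",
        "toy_a = np.array([1, 2, 3, 4], dtype=np.uint32)\n",
        "toy_b = np.array([1, 2, 9, 4], dtype=np.uint32)\n",
        "plain_score = float(np.mean(toy_a == toy_b))  # 3 of 4 dimensions agree -> 0.75\n",
        "\n",
        "# Weighted variant: the intersection accumulates weight products where the\n",
        "# hashes agree; the union is the total squared weight minus the intersection.\n",
        "toy_hashes_a = np.array([1, 2, 3], dtype=np.uint32)\n",
        "toy_hashes_b = np.array([1, 2, 9], dtype=np.uint32)\n",
        "toy_weights_a = np.array([2, 1, 1], dtype=np.uint32)\n",
        "toy_weights_b = np.array([3, 1, 5], dtype=np.uint32)\n",
        "intersection = (toy_weights_a * toy_weights_b)[toy_hashes_a == toy_hashes_b].sum()  # 2*3 + 1*1 = 7\n",
        "union = (toy_weights_a**2).sum() + (toy_weights_b**2).sum() - intersection  # 6 + 35 - 7 = 34\n",
        "weighted_score = float(intersection) / float(union)\n",
        "plain_score, weighted_score"
      ]
    },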
    {
      "cell_type": "code",
      "execution_count": 16,
      "metadata": {},
      "outputs": [
        {
          "data": {
            "text/plain": [
              "165161"
            ]
          },
          "execution_count": 16,
          "metadata": {},
          "output_type": "execute_result"
        }
      ],
      "source": [
        "NDIM: int = 192\n",
        "window_widths = [3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 15, 18, 21, 24, 27, 30]\n",
        "window_widths *= NDIM // len(window_widths)\n",
        "\n",
        "# For Rabin-Karp rolling hashes let's take different prime multipliers,\n",
        "# with the smallest being a function of the window width and the largest easily representable integer:\n",
        "smallest_multiplier = int(pow(LARGEST_INTEGRAL_FLOAT, 1 / min(window_widths)))\n",
        "smallest_prime_multiplier = next(next_primes(smallest_multiplier))\n",
        "smallest_prime_multiplier"
      ]
    },
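    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "To build intuition for the choice of the smallest multiplier above, here is a small self-contained check.\n",
        "`LIMIT` is a stand-in for `LARGEST_INTEGRAL_FLOAT`, assumed here to be $2^{52}$; the notebook's actual constant may differ:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Hypothetical stand-in: with IEEE-754 doubles every integer up to 2**53 is\n",
        "# exactly representable, so 2**52 leaves a comfortable margin.\n",
        "LIMIT = 2**52\n",
        "min_width = 3  # the shortest window width used above\n",
        "\n",
        "candidate_multiplier = int(pow(LIMIT, 1 / min_width))\n",
        "# Raising the multiplier to the shortest window width nearly fills, but does\n",
        "# not exceed, the exactly-representable integer range:\n",
        "assert candidate_multiplier**min_width <= LIMIT\n",
        "assert (candidate_multiplier + 1) ** min_width > LIMIT"
      ]
    },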
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Let's compute the rolling fingerprints:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 17,
      "metadata": {},
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "Loaded 1,000,000 lines of mean length 128.64 characters\n"
          ]
        }
      ],
      "source": [
        "textual_dataset_path = dataset_directory / \"leipzig1M.txt\"\n",
        "textual_dataset = textual_dataset_path.read_text(encoding=\"utf-8\").casefold().strip()\n",
        "textual_lines = textual_dataset.split(\"\\n\")\n",
        "print(\n",
        "    f\"Loaded {len(textual_lines):,} lines of mean length {sum(len(line) for line in textual_lines) / len(textual_lines):.2f} characters\"\n",
        ")"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 18,
      "metadata": {},
      "outputs": [
        {
          "name": "stderr",
          "output_type": "stream",
          "text": [
            "Fingerprinting lines: 100%|██████████| 10000/10000 [01:35<00:00, 104.49line/s]\n"
          ]
        }
      ],
      "source": [
        "from tqdm import tqdm\n",
        "from itertools import islice\n",
        "import random\n",
        "\n",
        "\n",
        "def take_first_n(iterable, n):\n",
        "    return islice(iterable, n)\n",
        "\n",
        "\n",
        "def keep_each_nth(iterable, k):\n",
        "    return (x for i, x in enumerate(iterable, 1) if i % k == 0)\n",
        "\n",
        "\n",
        "prime_multipliers = list(\n",
        "    take_first_n(keep_each_nth(next_primes(smallest_prime_multiplier), 7), NDIM)\n",
        ")\n",
        "random_multipliers = [random.randint(257, 1024 * 1024 * 16) for _ in range(NDIM)]\n",
        "consecutive_multipliers = list(range(256, 256 + NDIM))\n",
        "\n",
        "salts = range(1, NDIM + 1)  # Use different salts for each window width\n",
        "alphabet_size = 256\n",
        "largest_term = alphabet_size + max(salts)\n",
        "LARGEST_SAFE_MODULO = next(\n",
        "    prev_primes(int(LARGEST_INTEGRAL_FLOAT) // max(prime_multipliers) - largest_term)\n",
        ")\n",
        "HASH_DTYPE = np.uint64\n",
        "\n",
        "fingerprint_hashes = []\n",
        "fingerprint_counts = []\n",
        "fingerprint_ngrams = []\n",
        "\n",
        "DATASET_SIZE_LIMIT = 10_000\n",
        "\n",
        "default_static_sketcher = lambda line: count_min_sketch(\n",
        "    text=line,\n",
        "    window_widths=window_widths,\n",
        "    seeds=prime_multipliers,\n",
        "    hash_resolution=HASH_DTYPE,\n",
        ")\n",
        "# For Rabin-Karp rolling hashes we pass more parameters:\n",
        "default_rolling_sketcher = lambda line: rolling_count_min_sketch(\n",
        "    text=line,\n",
        "    window_widths=window_widths,\n",
        "    multipliers=random_multipliers,\n",
        "    salts=salts,\n",
        "    modulo=LARGEST_SAFE_MODULO,\n",
        "    hash_resolution=HASH_DTYPE,\n",
        ")\n",
        "\n",
        "for line in tqdm(\n",
        "    textual_lines[:DATASET_SIZE_LIMIT],\n",
        "    desc=\"Fingerprinting lines\",\n",
        "    unit=\"line\",\n",
        "):\n",
        "    hashes, counts, ngrams = default_rolling_sketcher(line)\n",
        "    fingerprint_hashes.append(hashes)\n",
        "    fingerprint_counts.append(counts)\n",
        "    fingerprint_ngrams.append(ngrams)"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 19,
      "metadata": {},
      "outputs": [
        {
          "data": {
            "text/plain": [
              "24819100627"
            ]
          },
          "execution_count": 19,
          "metadata": {},
          "output_type": "execute_result"
        }
      ],
      "source": [
        "LARGEST_SAFE_MODULO"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Let's cross-reference the fingerprints, counting the number of hash collisions within our test set."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 20,
      "metadata": {},
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "Dimension 0: 492 unique hashes, 0 collisions\n",
            "Dimension 1: 1,552 unique hashes, 0 collisions\n",
            "Dimension 2: 2,663 unique hashes, 0 collisions\n",
            "Dimension 3: 2,445 unique hashes, 0 collisions\n",
            "Dimension 4: 4,936 unique hashes, 0 collisions\n",
            "Dimension 5: 6,149 unique hashes, 0 collisions\n",
            "Dimension 6: 7,266 unique hashes, 0 collisions\n",
            "Dimension 7: 8,090 unique hashes, 0 collisions\n",
            "Dimension 8: 8,556 unique hashes, 0 collisions\n",
            "Dimension 9: 9,095 unique hashes, 0 collisions\n",
            "Dimension 10: 9,654 unique hashes, 0 collisions\n",
            "Dimension 11: 9,859 unique hashes, 0 collisions\n",
            "Dimension 12: 9,927 unique hashes, 1 collisions\n",
            "Dimension 13: 9,943 unique hashes, 0 collisions\n",
            "Dimension 14: 9,923 unique hashes, 0 collisions\n",
            "Dimension 15: 9,879 unique hashes, 0 collisions\n",
            "Dimension 16: 470 unique hashes, 0 collisions\n",
            "Dimension 17: 1,492 unique hashes, 0 collisions\n",
            "Dimension 18: 2,358 unique hashes, 0 collisions\n",
            "Dimension 19: 3,769 unique hashes, 0 collisions\n",
            "Dimension 20: 5,362 unique hashes, 0 collisions\n",
            "Dimension 21: 6,214 unique hashes, 0 collisions\n",
            "Dimension 22: 7,165 unique hashes, 0 collisions\n",
            "Dimension 23: 7,778 unique hashes, 0 collisions\n",
            "Dimension 24: 8,727 unique hashes, 1 collisions\n",
            "Dimension 25: 9,035 unique hashes, 0 collisions\n",
            "Dimension 26: 9,689 unique hashes, 0 collisions\n",
            "Dimension 27: 9,861 unique hashes, 0 collisions\n",
            "Dimension 28: 9,902 unique hashes, 0 collisions\n",
            "Dimension 29: 9,936 unique hashes, 0 collisions\n",
            "Dimension 30: 9,926 unique hashes, 0 collisions\n",
            "Dimension 31: 9,867 unique hashes, 0 collisions\n",
            "Dimension 32: 419 unique hashes, 0 collisions\n",
            "Dimension 33: 1,392 unique hashes, 0 collisions\n",
            "Dimension 34: 2,428 unique hashes, 0 collisions\n",
            "Dimension 35: 4,020 unique hashes, 0 collisions\n",
            "Dimension 36: 4,429 unique hashes, 0 collisions\n",
            "Dimension 37: 6,583 unique hashes, 0 collisions\n",
            "Dimension 38: 7,302 unique hashes, 0 collisions\n",
            "Dimension 39: 8,100 unique hashes, 0 collisions\n",
            "Dimension 40: 8,722 unique hashes, 0 collisions\n",
            "Dimension 41: 9,033 unique hashes, 0 collisions\n",
            "Dimension 42: 9,689 unique hashes, 0 collisions\n",
            "Dimension 43: 9,873 unique hashes, 0 collisions\n",
            "Dimension 44: 9,922 unique hashes, 0 collisions\n",
            "Dimension 45: 9,928 unique hashes, 0 collisions\n",
            "Dimension 46: 9,923 unique hashes, 0 collisions\n",
            "Dimension 47: 9,863 unique hashes, 0 collisions\n",
            "Dimension 48: 355 unique hashes, 0 collisions\n",
            "Dimension 49: 1,016 unique hashes, 0 collisions\n",
            "Dimension 50: 2,388 unique hashes, 0 collisions\n",
            "Dimension 51: 3,898 unique hashes, 0 collisions\n",
            "Dimension 52: 5,295 unique hashes, 0 collisions\n",
            "Dimension 53: 6,398 unique hashes, 0 collisions\n",
            "Dimension 54: 7,403 unique hashes, 0 collisions\n",
            "Dimension 55: 7,948 unique hashes, 0 collisions\n",
            "Dimension 56: 8,646 unique hashes, 0 collisions\n",
            "Dimension 57: 8,946 unique hashes, 1 collisions\n",
            "Dimension 58: 9,648 unique hashes, 0 collisions\n",
            "Dimension 59: 9,849 unique hashes, 0 collisions\n",
            "Dimension 60: 9,909 unique hashes, 0 collisions\n",
            "Dimension 61: 9,928 unique hashes, 0 collisions\n",
            "Dimension 62: 9,915 unique hashes, 0 collisions\n",
            "Dimension 63: 9,863 unique hashes, 0 collisions\n",
            "Dimension 64: 607 unique hashes, 0 collisions\n",
            "Dimension 65: 809 unique hashes, 0 collisions\n",
            "Dimension 66: 2,237 unique hashes, 0 collisions\n",
            "Dimension 67: 3,450 unique hashes, 0 collisions\n",
            "Dimension 68: 4,635 unique hashes, 0 collisions\n",
            "Dimension 69: 6,308 unique hashes, 0 collisions\n",
            "Dimension 70: 7,594 unique hashes, 0 collisions\n",
            "Dimension 71: 8,234 unique hashes, 0 collisions\n",
            "Dimension 72: 8,535 unique hashes, 0 collisions\n",
            "Dimension 73: 8,981 unique hashes, 0 collisions\n",
            "Dimension 74: 9,643 unique hashes, 0 collisions\n",
            "Dimension 75: 9,879 unique hashes, 0 collisions\n",
            "Dimension 76: 9,921 unique hashes, 0 collisions\n",
            "Dimension 77: 9,932 unique hashes, 1 collisions\n",
            "Dimension 78: 9,922 unique hashes, 0 collisions\n",
            "Dimension 79: 9,867 unique hashes, 0 collisions\n",
            "Dimension 80: 394 unique hashes, 0 collisions\n",
            "Dimension 81: 1,296 unique hashes, 0 collisions\n",
            "Dimension 82: 2,071 unique hashes, 0 collisions\n",
            "Dimension 83: 3,764 unique hashes, 0 collisions\n",
            "Dimension 84: 4,886 unique hashes, 0 collisions\n",
            "Dimension 85: 6,365 unique hashes, 0 collisions\n",
            "Dimension 86: 7,446 unique hashes, 0 collisions\n",
            "Dimension 87: 7,888 unique hashes, 0 collisions\n",
            "Dimension 88: 8,647 unique hashes, 0 collisions\n",
            "Dimension 89: 8,906 unique hashes, 0 collisions\n",
            "Dimension 90: 9,622 unique hashes, 1 collisions\n",
            "Dimension 91: 9,874 unique hashes, 1 collisions\n",
            "Dimension 92: 9,932 unique hashes, 0 collisions\n",
            "Dimension 93: 9,919 unique hashes, 0 collisions\n",
            "Dimension 94: 9,906 unique hashes, 1 collisions\n",
            "Dimension 95: 9,870 unique hashes, 0 collisions\n",
            "Dimension 96: 681 unique hashes, 0 collisions\n",
            "Dimension 97: 1,241 unique hashes, 0 collisions\n",
            "Dimension 98: 2,499 unique hashes, 0 collisions\n",
            "Dimension 99: 3,866 unique hashes, 0 collisions\n",
            "Dimension 100: 4,805 unique hashes, 0 collisions\n",
            "Dimension 101: 6,347 unique hashes, 0 collisions\n",
            "Dimension 102: 7,291 unique hashes, 0 collisions\n",
            "Dimension 103: 7,909 unique hashes, 0 collisions\n",
            "Dimension 104: 8,601 unique hashes, 0 collisions\n",
            "Dimension 105: 8,999 unique hashes, 0 collisions\n",
            "Dimension 106: 9,680 unique hashes, 0 collisions\n",
            "Dimension 107: 9,861 unique hashes, 0 collisions\n",
            "Dimension 108: 9,938 unique hashes, 0 collisions\n",
            "Dimension 109: 9,927 unique hashes, 0 collisions\n",
            "Dimension 110: 9,913 unique hashes, 0 collisions\n",
            "Dimension 111: 9,871 unique hashes, 0 collisions\n",
            "Dimension 112: 423 unique hashes, 0 collisions\n",
            "Dimension 113: 684 unique hashes, 0 collisions\n",
            "Dimension 114: 2,054 unique hashes, 0 collisions\n",
            "Dimension 115: 3,796 unique hashes, 0 collisions\n",
            "Dimension 116: 5,107 unique hashes, 0 collisions\n",
            "Dimension 117: 5,942 unique hashes, 0 collisions\n",
            "Dimension 118: 7,303 unique hashes, 0 collisions\n",
            "Dimension 119: 7,934 unique hashes, 0 collisions\n",
            "Dimension 120: 8,373 unique hashes, 0 collisions\n",
            "Dimension 121: 9,024 unique hashes, 0 collisions\n",
            "Dimension 122: 9,657 unique hashes, 0 collisions\n",
            "Dimension 123: 9,874 unique hashes, 0 collisions\n",
            "Dimension 124: 9,907 unique hashes, 0 collisions\n",
            "Dimension 125: 9,927 unique hashes, 0 collisions\n",
            "Dimension 126: 9,915 unique hashes, 0 collisions\n",
            "Dimension 127: 9,872 unique hashes, 0 collisions\n",
            "Dimension 128: 510 unique hashes, 0 collisions\n",
            "Dimension 129: 1,205 unique hashes, 0 collisions\n",
            "Dimension 130: 2,864 unique hashes, 0 collisions\n",
            "Dimension 131: 2,888 unique hashes, 0 collisions\n",
            "Dimension 132: 5,104 unique hashes, 0 collisions\n",
            "Dimension 133: 6,185 unique hashes, 0 collisions\n",
            "Dimension 134: 7,591 unique hashes, 0 collisions\n",
            "Dimension 135: 7,889 unique hashes, 0 collisions\n",
            "Dimension 136: 8,516 unique hashes, 0 collisions\n",
            "Dimension 137: 8,822 unique hashes, 0 collisions\n",
            "Dimension 138: 9,675 unique hashes, 0 collisions\n",
            "Dimension 139: 9,882 unique hashes, 0 collisions\n",
            "Dimension 140: 9,909 unique hashes, 0 collisions\n",
            "Dimension 141: 9,927 unique hashes, 0 collisions\n",
            "Dimension 142: 9,916 unique hashes, 0 collisions\n",
            "Dimension 143: 9,867 unique hashes, 0 collisions\n",
            "Dimension 144: 691 unique hashes, 0 collisions\n",
            "Dimension 145: 1,401 unique hashes, 0 collisions\n",
            "Dimension 146: 2,549 unique hashes, 0 collisions\n",
            "Dimension 147: 4,065 unique hashes, 0 collisions\n",
            "Dimension 148: 4,908 unique hashes, 0 collisions\n",
            "Dimension 149: 6,141 unique hashes, 0 collisions\n",
            "Dimension 150: 7,289 unique hashes, 0 collisions\n",
            "Dimension 151: 7,991 unique hashes, 0 collisions\n",
            "Dimension 152: 8,693 unique hashes, 1 collisions\n",
            "Dimension 153: 8,986 unique hashes, 0 collisions\n",
            "Dimension 154: 9,641 unique hashes, 0 collisions\n",
            "Dimension 155: 9,822 unique hashes, 0 collisions\n",
            "Dimension 156: 9,916 unique hashes, 0 collisions\n",
            "Dimension 157: 9,936 unique hashes, 0 collisions\n",
            "Dimension 158: 9,920 unique hashes, 0 collisions\n",
            "Dimension 159: 9,863 unique hashes, 0 collisions\n",
            "Dimension 160: 530 unique hashes, 0 collisions\n",
            "Dimension 161: 1,749 unique hashes, 0 collisions\n",
            "Dimension 162: 2,507 unique hashes, 0 collisions\n",
            "Dimension 163: 4,411 unique hashes, 0 collisions\n",
            "Dimension 164: 5,242 unique hashes, 0 collisions\n",
            "Dimension 165: 6,319 unique hashes, 0 collisions\n",
            "Dimension 166: 7,316 unique hashes, 0 collisions\n",
            "Dimension 167: 7,856 unique hashes, 0 collisions\n",
            "Dimension 168: 8,560 unique hashes, 0 collisions\n",
            "Dimension 169: 8,924 unique hashes, 0 collisions\n",
            "Dimension 170: 9,681 unique hashes, 0 collisions\n",
            "Dimension 171: 9,845 unique hashes, 0 collisions\n",
            "Dimension 172: 9,935 unique hashes, 0 collisions\n",
            "Dimension 173: 9,942 unique hashes, 0 collisions\n",
            "Dimension 174: 9,917 unique hashes, 0 collisions\n",
            "Dimension 175: 9,876 unique hashes, 0 collisions\n",
            "Dimension 176: 604 unique hashes, 0 collisions\n",
            "Dimension 177: 1,569 unique hashes, 0 collisions\n",
            "Dimension 178: 2,690 unique hashes, 0 collisions\n",
            "Dimension 179: 3,509 unique hashes, 0 collisions\n",
            "Dimension 180: 5,181 unique hashes, 0 collisions\n",
            "Dimension 181: 6,340 unique hashes, 0 collisions\n",
            "Dimension 182: 7,285 unique hashes, 0 collisions\n",
            "Dimension 183: 8,280 unique hashes, 0 collisions\n",
            "Dimension 184: 8,628 unique hashes, 0 collisions\n",
            "Dimension 185: 9,028 unique hashes, 0 collisions\n",
            "Dimension 186: 9,631 unique hashes, 0 collisions\n",
            "Dimension 187: 9,872 unique hashes, 0 collisions\n",
            "Dimension 188: 9,910 unique hashes, 0 collisions\n",
            "Dimension 189: 9,926 unique hashes, 0 collisions\n",
            "Dimension 190: 9,918 unique hashes, 0 collisions\n",
            "Dimension 191: 9,875 unique hashes, 0 collisions\n"
          ]
        }
      ],
      "source": [
        "from typing import Dict, Set\n",
        "\n",
        "for dim in range(len(window_widths)):\n",
        "    hash_to_ngram: Dict[int, str] = {}\n",
        "    hash_collisions: Set[int] = set()\n",
        "    for hashes, ngrams in zip(fingerprint_hashes, fingerprint_ngrams):\n",
        "        hash_value = hashes[dim]\n",
        "        ngram_value = ngrams[dim]\n",
        "        if hash_value not in hash_to_ngram:\n",
        "            hash_to_ngram[hash_value] = ngram_value\n",
        "        elif hash_to_ngram[hash_value] != ngram_value:\n",
        "            hash_collisions.add(hash_value)\n",
        "\n",
        "    print(\n",
        "        f\"Dimension {dim}: {len(hash_to_ngram):,} unique hashes, {len(hash_collisions):,} collisions\"\n",
        "    )"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Let's estimate Recall @ 1. But before we do that, let's find a way to highlight N-gram matches between strings."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 21,
      "metadata": {},
      "outputs": [
        {
          "data": {
            "text/plain": [
              "(\"A short <span style='color:#ff8000'><span style='color:#ffff00'>s<span style='color:#00ff00'>trin</span></span>g</span> w<span style='color:#0080ff'><span style='color:#ff00ff'>ith </span></span>an <span style='color:#ff0000'>n<span style='color:#800080'><span style='color:#ff0000'>-gr</span></span>am</span>\",\n",
              " \"Longer <span style='color:#ff8000'><span style='color:#ffff00'>s<span style='color:#00ff00'>trin</span></span>g</span>s w<span style='color:#0080ff'><span style='color:#ff00ff'>ith </span></span>different <span style='color:#ff0000'>n<span style='color:#800080'><span style='color:#ff0000'>-gr</span></span>am</span>s\")"
            ]
          },
          "execution_count": 21,
          "metadata": {},
          "output_type": "execute_result"
        }
      ],
      "source": [
        "from typing import Tuple\n",
        "from IPython.display import HTML\n",
        "import numpy as np\n",
        "\n",
        "HTML_COLORS = [\n",
        "    \"#ff0000\",\n",
        "    \"#ff8000\",\n",
        "    \"#ffff00\",\n",
        "    \"#00ff00\",\n",
        "    \"#0080ff\",\n",
        "    \"#ff00ff\",\n",
        "    \"#800080\",\n",
        "]\n",
        "ASCII_COLORS = [\n",
        "    \"\\033[38;5;196m\",  # red\n",
        "    \"\\033[38;5;208m\",  # orange\n",
        "    \"\\033[38;5;226m\",  # yellow\n",
        "    \"\\033[38;5;082m\",  # green\n",
        "    \"\\033[38;5;039m\",  # blue\n",
        "    \"\\033[38;5;201m\",  # magenta\n",
        "    \"\\033[38;5;129m\",  # purple\n",
        "]\n",
        "\n",
        "\n",
        "def color_code_matches(\n",
        "    query_text: str,\n",
        "    document_text: str,\n",
        "    query_hashes: np.ndarray,\n",
        "    document_hashes: np.ndarray,\n",
        "    query_ngrams: np.ndarray,\n",
        "    document_ngrams: np.ndarray,\n",
        "    *,\n",
        "    html: bool = True,\n",
        ") -> Tuple[str, str]:\n",
        "    \"\"\"Highlight matching n-grams / hash-collisions in the two texts.\"\"\"\n",
        "\n",
        "    COLOR_ARRAY = (\n",
        "        [f\"<span style='color:{hex_}'>\" for hex_ in HTML_COLORS]\n",
        "        if html\n",
        "        else ASCII_COLORS\n",
        "    )\n",
        "    COLOR_COLLISION = (\n",
        "        \"<span style='color:#888888'>\" if html else \"\\033[38;5;244m\"\n",
        "    )  # grey\n",
        "    COLOR_RESET = \"</span>\" if html else \"\\033[0m\"\n",
        "\n",
        "    def number_of_matches_in_dimension(dim: int) -> int:\n",
        "        if len(query_ngrams[dim]) == 0 or len(document_ngrams[dim]) == 0:\n",
        "            return 0\n",
        "        return min(\n",
        "            query_text.count(query_ngrams[dim]),\n",
        "            document_text.count(document_ngrams[dim]),\n",
        "        )\n",
        "\n",
        "    def ngram_length_in_dimension(dim: int) -> int:\n",
        "        return len(query_ngrams[dim]) if dim < len(query_ngrams) else 0\n",
        "\n",
        "    all_dims = [\n",
        "        d for d in range(len(query_hashes)) if number_of_matches_in_dimension(d)\n",
        "    ]\n",
        "    all_dims.sort(key=ngram_length_in_dimension, reverse=True)\n",
        "\n",
        "    color_index = 0\n",
        "    for dim in all_dims:\n",
        "        if number_of_matches_in_dimension(dim) == 0:\n",
        "            continue\n",
        "\n",
        "        is_hash_eq = query_hashes[dim] == document_hashes[dim]\n",
        "        is_ngram_eq = query_ngrams[dim] == document_ngrams[dim]\n",
        "        token = query_ngrams[dim]\n",
        "        assert token, \"N-gram must not be empty\"\n",
        "\n",
        "        if is_ngram_eq:\n",
        "            color_tag = COLOR_ARRAY[color_index % len(COLOR_ARRAY)]\n",
        "            replacement = f\"{color_tag}{token}{COLOR_RESET}\"\n",
        "            color_index += 1\n",
        "        elif is_hash_eq:\n",
        "            replacement = f\"{COLOR_COLLISION}{token}{COLOR_RESET}\"\n",
        "        else:\n",
        "            continue\n",
        "\n",
        "        query_text = query_text.replace(token, replacement)\n",
        "        document_text = document_text.replace(token, replacement)\n",
        "\n",
        "    return query_text, document_text\n",
        "\n",
        "\n",
        "query_text = \"A short string with an n-gram\"\n",
        "document_text = \"Longer strings with different n-grams\"\n",
        "query_hashes, query_weights, query_ngrams = default_rolling_sketcher(query_text)\n",
        "document_hashes, document_weights, document_ngrams = default_rolling_sketcher(\n",
        "    document_text\n",
        ")\n",
        "color_code_matches(\n",
        "    query_text=query_text,\n",
        "    document_text=document_text,\n",
        "    query_hashes=query_hashes,\n",
        "    document_hashes=document_hashes,\n",
        "    query_ngrams=query_ngrams,\n",
        "    document_ngrams=document_ngrams,\n",
        ")"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 22,
      "metadata": {},
      "outputs": [
        {
          "name": "stderr",
          "output_type": "stream",
          "text": [
            "Searching: 100%|██████████| 10/10 [00:00<00:00, 26.40doc/s]\n"
          ]
        },
        {
          "data": {
            "text/html": [
              "<pre style='font-family:monospace'><b>Matched query 0 with document 1,559 with score 0.0625</b><br/>- a rebel statement sent to lisbon from jamba said 86 government soldiers and 13 gu<span style='color:#ff0000'>er<span style='color:#ff8000'>ril<span style='color:#ff00ff'>l<span style='color:#ff0000'>as w</span>er</span>e<span style='color:#0080ff'><span style='color:#800080'> <span style='color:#ff8000'>kil</span></span>led i</span>n</span> </span>the fighting that ended jan. 3. it said<span style='color:#ffff00'><span style='color:#00ff00'> the rebe</span>l</span> forces sill held mavinga.<br/>- hours later, six leftist gu<span style='color:#ff0000'>er<span style='color:#ff8000'>ril<span style='color:#ff00ff'>l<span style='color:#ff0000'>as w</span>er</span>e<span style='color:#0080ff'><span style='color:#800080'> <span style='color:#ff8000'>kil</span></span>led i</span>n</span> </span>a battle with a special army brigade created to fight<span style='color:#ffff00'><span style='color:#00ff00'> the rebe</span>l</span>s.<br/><b>Matched query 1 with document 3,483 with score 0.0417</b><br/>- authoriti<span style='color:#ff0000'>e<span style='color:#ff8000'>s <span style='color:#ffff00'><span style='color:#0080ff'>las</span>t we</span>e</span></span>k issued a vacate order for a club in manhattan a<span style='color:#00ff00'>nd </span>closed another in the bronx.<br/>- pric<span style='color:#ff0000'>e<span style='color:#ff8000'>s <span style='color:#ffff00'><span style='color:#0080ff'>las</span>t we</span>e</span></span>k were off 41% from six years ago a<span style='color:#00ff00'>nd </span>11% from <span style='color:#0080ff'>las</span>t year's sale.<br/><b>Matched query 2 with document 8,745 with score 0.0469</b><br/>- at <span style='color:#ff0000'>th<span style='color:#ffff00'>e <span style='color:#00ff00'><span style='color:#0080ff'>first</span></span></span></span> pan am bankruptcy hearing, <span style='color:#ff8000'>fo<span 
style='color:#ff00ff'>r <span style='color:#800080'>exa</span></span>m</span>ple, at least five airlines were represented.<br/>- <span style='color:#ff8000'>fo<span style='color:#ff00ff'>r <span style='color:#800080'>exa</span></span>m</span>ple, libya was <span style='color:#ff0000'>th<span style='color:#ffff00'>e <span style='color:#00ff00'><span style='color:#0080ff'>first</span></span></span></span> to break the $30-a-barrel level and others followed.<br/><b>Matched query 3 with document 2,243 with score 0.0469</b><br/>- mr. neigum, poker-faced<span style='color:#ff0000'> <span style='color:#ff8000'>d<span style='color:#ffff00'>uring t</span>he</span> </span>difficult task, manages a 46-second showing.<br/>- the june contract of the long gilt future on liffe traded between 108 21/32 and 107 5/8<span style='color:#ff0000'> <span style='color:#ff8000'>d<span style='color:#ffff00'>uring t</span>he</span> </span>day.<br/><b>Matched query 4 with document 9,748 with score 0.0417</b><br/>- this,<span style='color:#ff0000'> co<span style='color:#ff8000'>mb<span style='color:#ffff00'>ined </span>wit</span>h </span>the container division talks, suggests the group's bankers might be considering a<span style='color:#00ff00'>n o</span>rderly disposal of<span style='color:#0080ff'> al</span>l assets.<br/>- the near-extinctio<span style='color:#00ff00'>n o</span>f the buffalo herd,<span style='color:#ff0000'> co<span style='color:#ff8000'>mb<span style='color:#ffff00'>ined </span>wit</span>h </span>pressure from local missionaries,<span style='color:#0080ff'> al</span>l but wiped out the rituals that had united the omahas for centuries.<br/><b>Matched query 5 with document 8,400 with score 0.0625</b><br/>- she told the post in <span style='color:#ff0000'>an interv<span style='color:#ff8000'>i<span style='color:#0080ff'>ew </span>p</span>u</span>blished sunday that some of the money may have become \"mingled<span style='color:#00ff00'>\" i</span>nto improvements on her home 
that included a swimming pool, a $2,500 wide-screen televisio<span style='color:#ffff00'>n a</span>nd renovations to her basement.<br/>- eisner asked. \"i<span style='color:#ffff00'>n a</span> word, panic.<span style='color:#00ff00'>\" i</span>n <span style='color:#ff0000'>an interv<span style='color:#ff8000'>i<span style='color:#0080ff'>ew </span>p</span>u</span>blished in friday's contra costa times, eisner, 48, said he rarely rests easy.<br/><b>Matched query 6 with document 5,453 with score 0.0573</b><br/>- <span style='color:#ffff00'>accor</span>di<span style='color:#ff0000'><span style='color:#ff8000'>n<span style='color:#00ff00'>g to</span></span> a s</span>tud<span style='color:#0080ff'>y b</span>y the marshall institute, the average nasa employee's age in 1963 was 30; now most of its senior and middle-managers will be eligible to retire in five years.<br/>- the resolution was passed tuesda<span style='color:#0080ff'>y b</span>y the afl-cio executive council, <span style='color:#ffff00'>accor</span>di<span style='color:#ff0000'><span style='color:#ff8000'>n<span style='color:#00ff00'>g to</span></span> a s</span>tatement.<br/><b>Matched query 7 with document 8,218 with score 0.0990</b><br/>- preston tisch, 62, is <span style='color:#ffff00'><span style='color:#00ff00'>pre<span style='color:#ff0000'>side</span>nt </span></span>and co-<span style='color:#ff0000'><span style='color:#ff8000'>chi<span style='color:#0080ff'>ef <span style='color:#ff00ff'><span style='color:#800080'>e<span style='color:#ff8000'><span style='color:#ffff00'>xec</span></span></span>u</span></span>t</span>ive offic</span>er of loews corp. and is a former postmaster general.<br/>- hormel said charles b. 
olson, <span style='color:#ffff00'><span style='color:#00ff00'>pre<span style='color:#ff0000'>side</span>nt </span></span>of jennie-o, will also be named <span style='color:#ff0000'><span style='color:#ff8000'>chi<span style='color:#0080ff'>ef <span style='color:#ff00ff'><span style='color:#800080'>e<span style='color:#ff8000'><span style='color:#ffff00'>xec</span></span></span>u</span></span>t</span>ive offic</span>er of the new subsidiary.<br/><b>Matched query 8 with document 3,568 with score 0.0469</b><br/>- \"we're dealing with an <span style='color:#ff8000'>o<span style='color:#00ff00'>wner</span></span> who couldn't give<span style='color:#ff00ff'> a </span>rip. th<span style='color:#ff0000'>ey </span>cut off her mail a<span style='color:#800080'>nd </span>she got<span style='color:#ff00ff'> a </span>post office box.\" starting friday, a<span style='color:#0080ff'>n a</span>nimal-control officer is ac<span style='color:#ff0000'>c<span style='color:#ffff00'>ompa</span>n</span>ying finster on his route.<br/>- i<span style='color:#0080ff'>n a</span> letter to the <span style='color:#ff0000'>c<span style='color:#ffff00'>ompa</span>n</span>y's board of directors, united airline's pilots, flight attendants a<span style='color:#800080'>nd </span>machinists said th<span style='color:#ff0000'>ey </span>were prepared to negotiate as<span style='color:#ff00ff'> a </span>group for the acquisition of stock through one or more employee stock <span style='color:#ff8000'>o<span style='color:#00ff00'>wner</span></span>ship plans, or esops.<br/><b>Matched query 9 with document 8,323 with score 0.0469</b><br/>- asked if he might br<span style='color:#ff0000'><span style='color:#ff8000'>i<span style='color:#ffff00'><span style='color:#00ff00'><span style='color:#0080ff'>n<span style='color:#ff00ff'>g th</span></span>e w</span>o</span>r</span></span>ld leaders to texas, possibly to san antonio, the president remarked, \"tha<span style='color:#800080'>t's </span>a distinct 
possibility.<br/>- i<span style='color:#800080'>t's </span>sweep<span style='color:#ff0000'><span style='color:#ff8000'>i<span style='color:#ffff00'><span style='color:#00ff00'><span style='color:#0080ff'>n<span style='color:#ff00ff'>g th</span></span>e w</span>o</span>r</span></span>ld.</pre>"
            ],
            "text/plain": [
              "<IPython.core.display.HTML object>"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        }
      ],
      "source": [
        "from tqdm import tqdm\n",
        "from IPython.display import HTML, display\n",
        "\n",
        "QUERIES_TO_COMPARE = 10\n",
        "\n",
        "log_lines = []\n",
        "\n",
        "for i, query_hashes, query_counts, query_ngrams in tqdm(\n",
        "    zip(\n",
        "        range(QUERIES_TO_COMPARE),\n",
        "        fingerprint_hashes[:QUERIES_TO_COMPARE],\n",
        "        fingerprint_counts[:QUERIES_TO_COMPARE],\n",
        "        fingerprint_ngrams[:QUERIES_TO_COMPARE],\n",
        "    ),\n",
        "    desc=\"Searching\",\n",
        "    unit=\"doc\",\n",
        "    total=QUERIES_TO_COMPARE,\n",
        "):\n",
        "\n",
        "    # Compare with all other fingerprints\n",
        "    best_score, best_index = 0.0, -1\n",
        "    for j, dataset_hashes, dataset_counts, dataset_ngrams in zip(\n",
        "        range(len(fingerprint_hashes)),\n",
        "        fingerprint_hashes,\n",
        "        fingerprint_counts,\n",
        "        fingerprint_ngrams,\n",
        "    ):\n",
        "        if i == j:\n",
        "            continue\n",
        "\n",
        "        score = jaccard_similarity(query_hashes, dataset_hashes)\n",
        "        if score > best_score:\n",
        "            best_score = score\n",
        "            best_index = j\n",
        "\n",
        "    query = textual_lines[i]\n",
        "    doc = textual_lines[best_index]\n",
        "    colored_query, colored_doc = color_code_matches(\n",
        "        query_text=query,\n",
        "        document_text=doc,\n",
        "        query_hashes=query_hashes,\n",
        "        document_hashes=fingerprint_hashes[best_index],\n",
        "        query_ngrams=query_ngrams,\n",
        "        document_ngrams=fingerprint_ngrams[best_index],\n",
        "    )\n",
        "    log_lines.extend(\n",
        "        [\n",
        "            f\"<b>Matched query {i:,} with document {best_index:,} with score {best_score:.4f}</b>\",\n",
        "            f\"- {colored_query}\",\n",
        "            f\"- {colored_doc}\",\n",
        "        ]\n",
        "    )\n",
        "\n",
        "concatenated_log = \"<br/>\".join(log_lines)\n",
        "monospaced_log = HTML(f\"<pre style='font-family:monospace'>{concatenated_log}</pre>\")\n",
        "display(monospaced_log)"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 23,
      "metadata": {},
      "outputs": [
        {
          "name": "stderr",
          "output_type": "stream",
          "text": [
            "Searching: 100%|██████████| 10/10 [00:00<00:00, 17.60doc/s]\n"
          ]
        },
        {
          "data": {
            "text/html": [
              "<pre style='font-family:monospace'><b>Matched query 0 with document 3,668 with score 0.0305</b><br/>- a rebel statement sent to lisbon from jamba<span style='color:#ffff00'> sai</span>d 86 g<span style='color:#ff0000'><span style='color:#ff8000'>over</span>nme</span>nt soldiers and 13 guerrillas were <span style='color:#00ff00'>kil</span>led in the fighting that ended jan. 3. it<span style='color:#ffff00'> sai</span>d the rebel forces sill held mavinga.<br/>- the peruvian g<span style='color:#ff0000'><span style='color:#ff8000'>over</span>nme</span>nt has<span style='color:#ffff00'> sai</span>d that since the group turned from propaganda to violence in may 1980, it has <span style='color:#00ff00'>kil</span>led more than 15,000 people and caused more than $10 billion in damage.<br/><b>Matched query 1 with document 3,483 with score 0.0508</b><br/>- authoriti<span style='color:#ff0000'>e<span style='color:#ff8000'>s <span style='color:#ffff00'><span style='color:#0080ff'>las</span>t we</span>e</span></span>k issued a vacate order for a club in manhattan a<span style='color:#00ff00'>nd </span>closed another in the bronx.<br/>- pric<span style='color:#ff0000'>e<span style='color:#ff8000'>s <span style='color:#ffff00'><span style='color:#0080ff'>las</span>t we</span>e</span></span>k were off 41% from six years ago a<span style='color:#00ff00'>nd </span>11% from <span style='color:#0080ff'>las</span>t year's sale.<br/><b>Matched query 2 with document 2,193 with score 0.0525</b><br/>- at <span style='color:#ff0000'><span style='color:#ff8000'>the fir</span>st</span> pan am bankruptcy hearing, for example, at least five airlines were represented.<br/>- in <span style='color:#ff0000'><span style='color:#ff8000'>the fir</span>st</span> nine months, frozen food reduced long-term debt to $8.8 million from $19.2 million.<br/><b>Matched query 3 with document 1,436 with score 0.0459</b><br/>- mr. 
neigum, poker-faced <span style='color:#ff0000'><span style='color:#ff8000'>du<span style='color:#ffff00'>ri<span style='color:#00ff00'>n<span style='color:#0080ff'>g th</span></span></span></span>e</span> difficult task, manages a 46-second showing.<br/>- <span style='color:#ff0000'><span style='color:#ff8000'>du<span style='color:#ffff00'>ri<span style='color:#00ff00'>n<span style='color:#0080ff'>g th</span></span></span></span>e</span> last meeting, the sandinistas presented their most liberal proposal.<br/><b>Matched query 4 with document 7,018 with score 0.0369</b><br/>- this,<span style='color:#ff0000'> comb<span style='color:#ff8000'>ine<span style='color:#ffff00'>d wi</span>th</span> </span>the container division talks, suggests the group's bankers might be considering an orderly disposal of all assets.<br/>- that,<span style='color:#ff0000'> comb<span style='color:#ff8000'>ine<span style='color:#ffff00'>d wi</span>th</span> </span>lower prices, could get gnp back up to zero growth during the first quarter of 1991, he says.<br/><b>Matched query 5 with document 8,400 with score 0.0528</b><br/>- she told the post in <span style='color:#ff0000'>an interv<span style='color:#ff8000'>i<span style='color:#0080ff'>ew </span>p</span>u</span>blished sunday that some of the money may have become \"mingled<span style='color:#00ff00'>\" i</span>nto improvements on her home that included a swimming pool, a $2,500 wide-screen televisio<span style='color:#ffff00'>n a</span>nd renovations to her basement.<br/>- eisner asked. 
\"i<span style='color:#ffff00'>n a</span> word, panic.<span style='color:#00ff00'>\" i</span>n <span style='color:#ff0000'>an interv<span style='color:#ff8000'>i<span style='color:#0080ff'>ew </span>p</span>u</span>blished in friday's contra costa times, eisner, 48, said he rarely rests easy.<br/><b>Matched query 6 with document 3,224 with score 0.0498</b><br/>- accord<span style='color:#ff0000'>i<span style='color:#ff8000'>n<span style='color:#0080ff'>g to</span></span> </span>a study by the marshall institute, the average nasa employee's age in 1963 was 30; now most of its senior and middle-m<span style='color:#ffff00'>anage</span><span style='color:#00ff00'>rs wi</span>ll be eligible to retire in five years.<br/>- it is try<span style='color:#ff0000'>i<span style='color:#ff8000'>n<span style='color:#0080ff'>g to</span></span> </span>agree redundancy with 150 branch m<span style='color:#ffff00'>anage</span><span style='color:#00ff00'>rs wi</span>thin the next few months, and 120 computer and 80 clerical staff are also affected.<br/><b>Matched query 7 with document 4,990 with score 0.0619</b><br/>- preston tisch, 62, is<span style='color:#ff8000'> pre<span style='color:#0080ff'>side</span>nt</span> and co-<span style='color:#ff0000'>chief <span style='color:#00ff00'>e<span style='color:#ff00ff'><span style='color:#800080'>xec</span></span></span><span style='color:#ffff00'>utive off</span>ic</span>er of loews corp. and is a former postmaster general.<br/>- thomas m. 
egan,<span style='color:#ff8000'> pre<span style='color:#0080ff'>side</span>nt</span> and <span style='color:#ff0000'>chief <span style='color:#00ff00'>e<span style='color:#ff00ff'><span style='color:#800080'>xec</span></span></span><span style='color:#ffff00'>utive off</span>ic</span>er of stotler group, said the company had been in negotiations with the creditors in efforts to reach a solution.<br/><b>Matched query 8 with document 4,686 with score 0.0309</b><br/>- \"we're dealing with an owner wh<span style='color:#ffff00'><span style='color:#00ff00'>o c</span></span>ould<span style='color:#ff8000'>n't </span>giv<span style='color:#ff0000'>e<span style='color:#0080ff'> a </span></span>rip. they cut off her mail and she got<span style='color:#0080ff'> a </span>post office box.\" starting friday, an animal-control officer is accompanying finster on his route.<br/>- we do<span style='color:#ff8000'>n't </span>have t<span style='color:#ffff00'><span style='color:#00ff00'>o c</span></span>hang<span style='color:#ff0000'>e<span style='color:#0080ff'> a </span></span>single thing,\" bush told campaign staff workers<span style='color:#0080ff'> a </span>day after beating sen. bob dole of kansas, his chief rival for the gop presidential nomination, in new hampshire.<br/><b>Matched query 9 with document 1,831 with score 0.0288</b><br/>- asked if he might bri<span style='color:#ff8000'>n<span style='color:#ffff00'>g th</span></span>e world leaders to t<span style='color:#00ff00'>exa</span>s, possibly to san antonio, the <span style='color:#ff0000'>president </span>remarked, \"that's a distinct possibility.<br/>- the t<span style='color:#00ff00'>exa</span>ns' success in persuadi<span style='color:#ff8000'>n<span style='color:#ffff00'>g th</span></span>e <span style='color:#ff0000'>president </span>suggests that mexico may get better treatment in the bush administration than it has in the past.</pre>"
            ],
            "text/plain": [
              "<IPython.core.display.HTML object>"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        }
      ],
      "source": [
        "from tqdm import tqdm\n",
        "from IPython.display import HTML, display\n",
        "\n",
        "QUERIES_TO_COMPARE = 10\n",
        "\n",
        "log_lines = []\n",
        "\n",
        "for i, query_hashes, query_counts, query_ngrams in tqdm(\n",
        "    zip(\n",
        "        range(QUERIES_TO_COMPARE),\n",
        "        fingerprint_hashes[:QUERIES_TO_COMPARE],\n",
        "        fingerprint_counts[:QUERIES_TO_COMPARE],\n",
        "        fingerprint_ngrams[:QUERIES_TO_COMPARE],\n",
        "    ),\n",
        "    desc=\"Searching\",\n",
        "    unit=\"doc\",\n",
        "    total=QUERIES_TO_COMPARE,\n",
        "):\n",
        "\n",
        "    # Compare with all other fingerprints\n",
        "    best_score, best_index = 0.0, -1\n",
        "    for j, dataset_hashes, dataset_counts, dataset_ngrams in zip(\n",
        "        range(len(fingerprint_hashes)),\n",
        "        fingerprint_hashes,\n",
        "        fingerprint_counts,\n",
        "        fingerprint_ngrams,\n",
        "    ):\n",
        "        if i == j:\n",
        "            continue\n",
        "\n",
        "        score = weighted_jaccard_similarity(\n",
        "            (query_hashes, query_counts),\n",
        "            (dataset_hashes, dataset_counts),\n",
        "        )\n",
        "        if score > best_score:\n",
        "            best_score = score\n",
        "            best_index = j\n",
        "\n",
        "    query = textual_lines[i]\n",
        "    doc = textual_lines[best_index]\n",
        "    colored_query, colored_doc = color_code_matches(\n",
        "        query_text=query,\n",
        "        document_text=doc,\n",
        "        query_hashes=query_hashes,\n",
        "        document_hashes=fingerprint_hashes[best_index],\n",
        "        query_ngrams=query_ngrams,\n",
        "        document_ngrams=fingerprint_ngrams[best_index],\n",
        "    )\n",
        "    log_lines.extend(\n",
        "        [\n",
        "            f\"<b>Matched query {i:,} with document {best_index:,} with score {best_score:.4f}</b>\",\n",
        "            f\"- {colored_query}\",\n",
        "            f\"- {colored_doc}\",\n",
        "        ]\n",
        "    )\n",
        "\n",
        "concatenated_log = \"<br/>\".join(log_lines)\n",
        "monospaced_log = HTML(f\"<pre style='font-family:monospace'>{concatenated_log}</pre>\")\n",
        "display(monospaced_log)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## Min-Hash Fingerprinting DNA & Protein Sequences"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 24,
      "metadata": {},
      "outputs": [],
      "source": [
        "dna_dataset_path = dataset_directory / \"acgt_10k.txt\"\n",
        "dna_dataset = dna_dataset_path.read_text().strip()"
      ]
    }
  ],
  "metadata": {
    "kernelspec": {
      "display_name": "StringZilla",
      "language": "python",
      "name": "python3"
    },
    "language_info": {
      "codemirror_mode": {
        "name": "ipython",
        "version": 3
      },
      "file_extension": ".py",
      "mimetype": "text/x-python",
      "name": "python",
      "nbconvert_exporter": "python",
      "pygments_lexer": "ipython3",
      "version": "3.11.11"
    }
  },
  "nbformat": 4,
  "nbformat_minor": 2
}