{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "e3wdiK5fGUoZ"
   },
   "source": [
    "# 💡 What's new in txtai 9.0\n",
    "\n",
    "[txtai](https://github.com/neuml/txtai) is an all-in-one AI framework for semantic search, LLM orchestration and language model workflows.\n",
    "\n",
    "The 9.0 release adds first class support for sparse vector models (i.e. [SPLADE](https://en.wikipedia.org/wiki/Learned_sparse_retrieval)), late interaction models (i.e. [ColBERT](https://huggingface.co/colbert-ir/colbertv2.0)), fixed dimensional encoding (i.e. [MUVERA](https://arxiv.org/abs/2405.19504)) and reranking pipelines ✨ \n",
    "\n",
    "The embeddings framework was overhauled to seamlessly support both sparse and dense vector models. Previously, sparse vector support was limited to keyword/term indexes. Now learned sparse retrieval models such as SPLADE are supported. These models can help improve the accuracy of retrieval/search operations, which also improves RAG and Agents.\n",
    "\n",
    "Support for late interaction models, such as ColBERT, were also added to the embeddings framework. Unlike traditional vector models that pool outputs into single vector outputs, late interaction models produce multiple vectors. These models are paired with the MUVERA algorithm to transform multiple vectors into fixed dimensional single vectors for search.\n",
    "\n",
    "LLMs are quickly converging to produce similar outputs for similar inputs and becoming standard commodities. The retrieval or context layer makes or breaks projects. This is known as putting the R in RAG!\n",
    "\n",
    "**Standard upgrade disclaimer below**\n",
    "\n",
    "While everything is backwards compatible, it's prudent to backup production indexes before upgrading and test before deploying."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "p8BbfjrhH-V2"
   },
   "source": [
    "# Install dependencies\n",
    "\n",
    "Install `txtai` and all dependencies."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {
    "id": "-OXsTQgaGQPM"
   },
   "outputs": [],
   "source": [
    "%%capture\n",
    "!pip install git+https://github.com/neuml/txtai#egg=txtai[ann,vectors]"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "n4EXrtcYIIYE"
   },
   "source": [
    "# Sparse vector indexes\n",
    "\n",
    "The first major change added with this release is `learned sparse retrieval` (aka sparse vector indexes) models. This effort was multi-faceted in that it required both changes to how vectors were generated as well as how they are stored.\n",
    "\n",
    "`txtai` uses approximate nearest neighbor (ANN) search for it's vector search operations. The default library is [Faiss](https://github.com/facebookresearch/faiss). There is support for other libraries but in all cases the existing ANN backends only supported dense (i.e. NumPy) vectors.\n",
    "\n",
    "There aren't many options out there for sparse ANN search that supports `txtai` requirements, so IVFSparse was introduced. IVFSparse is an Inverted file (IVF) index with flat vector file storage and sparse array support. There is also support for storing sparse vectors in Postgres via [pgvector](https://github.com/pgvector/pgvector).\n",
    "\n",
    "Let's see it in action.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {
    "id": "hzPD8_cQJNtN"
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[{'id': '0',\n",
       "  'text': 'US tops 5 million confirmed virus cases',\n",
       "  'score': 0.019873601198196412},\n",
       " {'id': '1',\n",
       "  'text': \"Canada's last fully intact ice shelf has suddenly collapsed, forming a Manhattan-sized iceberg\",\n",
       "  'score': 0.018737798929214476}]"
      ]
     },
     "execution_count": 2,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from txtai import Embeddings\n",
    "\n",
    "# Works with a list, dataset or generator\n",
    "data = [\n",
    "  \"US tops 5 million confirmed virus cases\",\n",
    "  \"Canada's last fully intact ice shelf has suddenly collapsed, forming a Manhattan-sized iceberg\",\n",
    "  \"Beijing mobilises invasion craft along coast as Taiwan tensions escalate\",\n",
    "  \"The National Park Service warns against sacrificing slower friends in a bear attack\",\n",
    "  \"Maine man wins $1M from $25 lottery ticket\",\n",
    "  \"Make huge profits without work, earn up to $100,000 a day\"\n",
    "]\n",
    "\n",
    "# Create an embeddings\n",
    "embeddings = Embeddings(sparse=True, content=True)\n",
    "embeddings.index(data)\n",
    "embeddings.search(\"North America\", 10)"
   ]
  },
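  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Under the hood, sparse vector similarity reduces to a dot product over the terms that overlap between a query and a document. The following is a minimal sketch using SciPy sparse matrices with made-up weights — illustrative only, not the actual IVFSparse implementation."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Minimal sketch of sparse vector scoring - not the IVFSparse implementation\n",
    "from scipy.sparse import csr_matrix\n",
    "\n",
    "# 3 documents over a 6-term vocabulary (rows = documents, columns = terms)\n",
    "# Only nonzero (term, weight) pairs are stored\n",
    "docs = csr_matrix([\n",
    "    [0.0, 1.2, 0.0, 0.4, 0.0, 0.0],\n",
    "    [0.9, 0.0, 0.0, 0.0, 0.3, 0.0],\n",
    "    [0.0, 0.5, 0.7, 0.0, 0.0, 1.1]\n",
    "])\n",
    "\n",
    "# Query activates terms 1 and 5\n",
    "query = csr_matrix([[0.0, 1.0, 0.0, 0.0, 0.0, 0.5]])\n",
    "\n",
    "# Dot product scores - only overlapping nonzero terms contribute\n",
    "(docs @ query.T).toarray().ravel()"
   ]
  },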
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Late interaction models\n",
    "\n",
    "Late interaction models encode data into multi-vector outputs. In other words, multiple input tokens map to multiple output vectors. Then at search time, the maximum similarity algorithm is used to find the best matches between the corpus and a query. This algorithm has achieved excellent results on retrieval benchmarks such as [MTEB](https://github.com/embeddings-benchmark/mteb).\n",
    "\n",
    "The downside of this approach is that it produces multiple vectors as opposed a single vector for each input. For example, if a text element tokenizes to many input tokens, there will be many output vectors vs a single one as with standard pooled vector approaches.\n",
    "\n",
    "Starting with the 9.0 release, late interaction models are supported with embeddings instances. Late interaction vectors will be transformed into fixed dimensional vectors using the MUVERA algorithm. See below. "
   ]
  },
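  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To make the maximum similarity idea concrete, here is a short NumPy sketch with toy token embeddings — illustrative only, not the txtai implementation. Each query token takes the score of its best-matching document token and those scores are summed."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Sketch of MaxSim scoring for late interaction models - illustrative only\n",
    "import numpy as np\n",
    "\n",
    "def maxsim(query, document):\n",
    "    # query: (query tokens x dim), document: (document tokens x dim)\n",
    "    # Assumes rows are L2-normalized so dot products are cosine similarities\n",
    "    similarity = query @ document.T\n",
    "\n",
    "    # Best document token for each query token, summed over query tokens\n",
    "    return similarity.max(axis=1).sum()\n",
    "\n",
    "query = np.array([[1.0, 0.0], [0.0, 1.0]])\n",
    "document = np.array([[0.8, 0.6], [0.0, 1.0], [1.0, 0.0]])\n",
    "maxsim(query, document)"
   ]
  },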
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[{'id': '0',\n",
       "  'text': 'US tops 5 million confirmed virus cases',\n",
       "  'score': 0.04216160625219345},\n",
       " {'id': '1',\n",
       "  'text': \"Canada's last fully intact ice shelf has suddenly collapsed, forming a Manhattan-sized iceberg\",\n",
       "  'score': 0.029944246634840965},\n",
       " {'id': '3',\n",
       "  'text': 'The National Park Service warns against sacrificing slower friends in a bear attack',\n",
       "  'score': 0.015931561589241028}]"
      ]
     },
     "execution_count": 3,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from txtai import Embeddings\n",
    "\n",
    "# Works with a list, dataset or generator\n",
    "data = [\n",
    "  \"US tops 5 million confirmed virus cases\",\n",
    "  \"Canada's last fully intact ice shelf has suddenly collapsed, forming a Manhattan-sized iceberg\",\n",
    "  \"Beijing mobilises invasion craft along coast as Taiwan tensions escalate\",\n",
    "  \"The National Park Service warns against sacrificing slower friends in a bear attack\",\n",
    "  \"Maine man wins $1M from $25 lottery ticket\",\n",
    "  \"Make huge profits without work, earn up to $100,000 a day\"\n",
    "]\n",
    "\n",
    "# Create an embeddings\n",
    "embeddings = Embeddings(path=\"colbert-ir/colbertv2.0\", content=True)\n",
    "embeddings.index(data)\n",
    "embeddings.search(\"North America\", 10)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Reranking pipeline\n",
    "\n",
    "Another major new component in this release is the Reranker pipeline. This pipeline takes an embeddings instance, a similarity instance and uses the similarity instance to rerank outputs. This is a key component of the MUVERA paper - using the standard vector index to retrieve candidates then reranking the outputs using the late interaction model."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[{'id': '1',\n",
       "  'text': \"Canada's last fully intact ice shelf has suddenly collapsed, forming a Manhattan-sized iceberg\",\n",
       "  'score': 0.3324427008628845},\n",
       " {'id': '0',\n",
       "  'text': 'US tops 5 million confirmed virus cases',\n",
       "  'score': 0.24423550069332123},\n",
       " {'id': '3',\n",
       "  'text': 'The National Park Service warns against sacrificing slower friends in a bear attack',\n",
       "  'score': 0.16353240609169006}]"
      ]
     },
     "execution_count": 4,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from txtai import Embeddings\n",
    "from txtai.pipeline import Reranker, Similarity\n",
    "\n",
    "# Works with a list, dataset or generator\n",
    "data = [\n",
    "  \"US tops 5 million confirmed virus cases\",\n",
    "  \"Canada's last fully intact ice shelf has suddenly collapsed, forming a Manhattan-sized iceberg\",\n",
    "  \"Beijing mobilises invasion craft along coast as Taiwan tensions escalate\",\n",
    "  \"The National Park Service warns against sacrificing slower friends in a bear attack\",\n",
    "  \"Maine man wins $1M from $25 lottery ticket\",\n",
    "  \"Make huge profits without work, earn up to $100,000 a day\"\n",
    "]\n",
    "\n",
    "# Create an embeddings\n",
    "embeddings = Embeddings(path=\"colbert-ir/colbertv2.0\", content=True)\n",
    "embeddings.index(data)\n",
    "\n",
    "similarity = Similarity(path=\"colbert-ir/colbertv2.0\", lateencode=True)\n",
    "\n",
    "ranker = Reranker(embeddings, similarity)\n",
    "ranker(\"North America\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Notice that while the outputs are the same, the scoring and order is different.\n",
    "\n",
    "Let's try a more interesting example."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[{'id': 'ChatGPT',\n",
       "  'text': 'ChatGPT is a generative artificial intelligence chatbot developed by OpenAI and released on November 30, 2022. It uses large language models (LLMs) such as GPT-4o as well as other multimodal models to create human-like responses in text, speech, and images. It has access to features such as searching the web, using apps, and running programs. It is credited with accelerating the AI boom, an ongoing period of rapid investment in and public attention to the field of artificial intelligence (AI). Some observers have raised concern about the potential of ChatGPT and similar programs to displace human intelligence, enable plagiarism, or fuel misinformation.',\n",
       "  'score': 0.6639302968978882},\n",
       " {'id': 'ChatGPT Search',\n",
       "  'text': 'ChatGPT Search (originally SearchGPT) is a search engine developed by OpenAI. It combines traditional search engine features with generative pretrained transformers (GPT) to generate responses, including citations to external websites.',\n",
       "  'score': 0.6477508544921875},\n",
       " {'id': 'ChatGPT in education',\n",
       "  'text': 'The usage of ChatGPT in education has sparked considerable debate and exploration. ChatGPT is a chatbot based on large language models (LLMs) that was released by OpenAI in November 2022.',\n",
       "  'score': 0.5918337106704712}]"
      ]
     },
     "execution_count": 5,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from txtai import Embeddings\n",
    "from txtai.pipeline import Reranker, Similarity\n",
    "\n",
    "# Create an embeddings\n",
    "embeddings = Embeddings()\n",
    "embeddings.load(provider=\"huggingface-hub\", container=\"neuml/txtai-wikipedia\")\n",
    "\n",
    "similarity = Similarity(path=\"colbert-ir/colbertv2.0\", lateencode=True)\n",
    "\n",
    "ranker = Reranker(embeddings, similarity)\n",
    "ranker(\"Tell me about ChatGPT\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "tvnMO1Eai6Gy"
   },
   "source": [
    "# Wrapping up\n",
    "\n",
    "This notebook gave a quick overview of txtai 9.0. Updated documentation and more examples will be forthcoming. There is much to cover and much to build on!\n",
    "\n",
    "See the following links for more information.\n",
    "\n",
    "- [9.0 Release on GitHub](https://github.com/neuml/txtai/releases/tag/v9.0.0)\n",
    "- [Documentation site](https://neuml.github.io/txtai)"
   ]
  }
 ],
 "metadata": {
  "accelerator": "GPU",
  "colab": {
   "gpuType": "T4",
   "provenance": []
  },
  "kernelspec": {
   "display_name": "local",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.10.18"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 0
}
