{
 "cells": [
  {
   "attachments": {},
   "cell_type": "markdown",
   "id": "ef2ed242-3561-464c-8d1c-cc3862e23702",
   "metadata": {},
   "source": [
    "# Text Generation via Prompt Lookup Decoding using OpenVINO™\n",
    "\n",
    "As model sizes grow, Generative AI implementations require significant inference resources. This not only increases the cost per generation from a prompt, but also increases the power consumption used to serve such requests.\n",
    "\n",
     "Inference optimizations for text generation are essential for reducing costs and power consumption. Optimizing the inference process significantly reduces the time and energy required to generate text. This lowers hardware and software costs, cuts power consumption, and improves both the speed and, in some cases, the quality of text generation, leading to a better user experience and increased efficiency in text-generation tasks.\n",
    "\n",
     "[Prompt Lookup decoding](https://github.com/apoorvumang/prompt-lookup-decoding) is an [assisted-generation](https://huggingface.co/blog/assisted-generation#understanding-text-generation-latency) technique that speeds up token generation by replacing the draft model with simple string matching against the prompt to generate candidate token sequences. \n",
    "\n",
     "Prompt Lookup decoding works as follows. The input is defined as all tokens generated up to the current step (`input_ids`). The algorithm then tries to match the last few tokens against an earlier position in the prompt. If a match is found, it returns the next k tokens after the match as the `candidate input ids` or `candidate sequence`.\n",
    "\n",
    "![](https://blog.vllm.ai/assets/figures/spec-decode/figure3.png)\n",
    "\n",
     "This method is highly effective for input-grounded generation (summarization, document QA, multi-turn chat, code editing), where there is high n-gram overlap between the LLM input (prompt) and the LLM output. Overlapping content can include entity names, phrases, or code chunks that the LLM copies directly from the input while generating the output. Prompt lookup exploits this pattern to speed up autoregressive decoding in LLMs, resulting in significant speedups with no effect on output quality.\n",
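     "\n",
     "The candidate-matching step described above can be sketched in plain Python. This is a simplified illustration, not the OpenVINO GenAI implementation; the `find_candidate` name and its parameters are invented for this sketch:\n",
     "\n",
     "```python\n",
     "def find_candidate(input_ids, max_ngram_size=3, num_candidate_tokens=5):\n",
     "    \"\"\"Match the trailing n-gram of input_ids earlier in the sequence\n",
     "    and return the tokens that follow the match as a candidate continuation.\"\"\"\n",
     "    # Prefer longer n-grams first: they give more reliable matches.\n",
     "    for ngram_size in range(max_ngram_size, 0, -1):\n",
     "        ngram = input_ids[-ngram_size:]\n",
     "        # Search for the n-gram anywhere earlier in the sequence.\n",
     "        for start in range(len(input_ids) - ngram_size):\n",
     "            if input_ids[start : start + ngram_size] == ngram:\n",
     "                follow = start + ngram_size\n",
     "                candidate = input_ids[follow : follow + num_candidate_tokens]\n",
     "                if candidate:\n",
     "                    return candidate\n",
     "    return []  # no match found: fall back to regular autoregressive decoding\n",
     "```\n",
     "\n",
     "The returned candidate tokens are then verified in a single forward pass of the target model, so each accepted token costs a fraction of a pass instead of a full one.\n",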
    "\n",
    "In this tutorial we consider how to apply [Prompt Lookup decoding with OpenVINO GenAI](https://medium.com/openvino-toolkit/enhancing-llm-inference-with-prompt-lookup-decoding-and-openvino-genai-e15b69aeaeab).\n",
    "\n",
    "#### Table of contents:\n",
    "\n",
    "- [Prerequisites](#Prerequisites)\n",
    "- [Prepare models](#Prepare-models)\n",
    "    - [Select inference device](#Select-inference-device)\n",
    "- [Run target model without prompt lookup decoding](#Run-target-model-without-prompt-lookup-decoding)\n",
    "- [Run Prompt Lookup decoding pipeline](#Run-Prompt-Lookup-decoding-pipeline)\n",
    "- [Evaluate Prompt Lookup Decoding on multiple examples](#Evaluate-Prompt-Lookup-Decoding-on-multiple-examples)\n",
    "\n",
    "\n",
    "### Installation Instructions\n",
    "\n",
    "This is a self-contained example that relies solely on its own code.\n",
    "\n",
     "We recommend running the notebook in a virtual environment. You only need a Jupyter server to start.\n",
    "For details, please refer to [Installation Guide](https://github.com/openvinotoolkit/openvino_notebooks/blob/latest/README.md#-installation-guide).\n",
    "\n",
    "<img referrerpolicy=\"no-referrer-when-downgrade\" src=\"https://static.scarf.sh/a.png?x-pxid=5b5a4db0-7875-4bfb-bdbd-01698b5b1a77&file=notebooks/prompt-lookup-decoding/prompt-lookup-decoding.ipynb\" />"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "id": "08aa16b1-d2f6-4a3a-abfb-5ec278133c80",
   "metadata": {},
   "source": [
    "## Prerequisites\n",
    "[back to top ⬆️](#Table-of-contents:)\n",
    "\n",
    "\n",
     "First, we should install [OpenVINO GenAI](https://github.com/openvinotoolkit/openvino.genai) for running model inference.\n",
    "\n",
    "![](https://media.githubusercontent.com/media/openvinotoolkit/openvino.genai/refs/heads/master/src/docs/openvino_genai.svg)\n",
    "\n",
    "[OpenVINO™ GenAI](https://github.com/openvinotoolkit/openvino.genai) is a library of the most popular Generative AI model pipelines, optimized execution methods, and samples that run on top of highly performant [OpenVINO Runtime](https://github.com/openvinotoolkit/openvino).\n",
    "\n",
    "This library is friendly to PC and laptop execution, and optimized for resource consumption. It requires no external dependencies to run generative models as it already includes all the core functionality (e.g. tokenization via openvino-tokenizers).\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "dfd782ed",
   "metadata": {},
   "outputs": [],
   "source": [
    "%pip install --pre -U openvino-genai --extra-index-url https://storage.openvinotoolkit.org/simple/wheels/nightly huggingface_hub \"datasets<4.0.0\""
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "id": "54be999f",
   "metadata": {},
   "source": [
    "## Prepare models\n",
    "[back to top ⬆️](#Table-of-contents:)\n",
    "\n",
     "As an example, we will use the already converted [TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/OpenVINO/TinyLlama-1.1B-Chat-v1.0-int4-ov) model from the [OpenVINO collection](https://huggingface.co/collections/OpenVINO/llm-6687aaa2abca3bbcec71a9bd).\n",
    "\n",
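     "If you later want to run your own models, they first need to be exported to OpenVINO IR; a typical Optimum CLI invocation looks like this (the model id and output directory below are placeholders, and 4-bit weight compression is just one option):\n",
     "\n",
     "```bash\n",
     "# Export a Hugging Face model to OpenVINO IR with int4 weight compression\n",
     "optimum-cli export openvino --model TinyLlama/TinyLlama-1.1B-Chat-v1.0 --weight-format int4 tinyllama-1.1b-int4-ov\n",
     "```\n",
     "\n",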
     "If you want to run your own models, you should convert them using the [Hugging Face Optimum](https://huggingface.co/docs/optimum/intel/openvino/export) library, accelerated by OpenVINO integration. More details about model preparation can be found in the [OpenVINO LLM inference guide](https://docs.openvino.ai/2024/learn-openvino/llm_inference_guide/llm-inference-native-ov.html#convert-hugging-face-tokenizer-and-model-to-openvino-ir-format)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 18,
   "id": "74bb9f96",
   "metadata": {},
   "outputs": [],
   "source": [
    "from pathlib import Path\n",
    "import huggingface_hub as hf_hub\n",
    "\n",
    "model_id = \"OpenVINO/TinyLlama-1.1B-Chat-v1.0-int4-ov\"\n",
    "\n",
    "model_path = Path(model_id.split(\"/\")[-1])\n",
    "\n",
    "if not model_path.exists():\n",
    "    hf_hub.snapshot_download(model_id, local_dir=model_path)"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "id": "367f84f8-33e8-4ad6-bd40-e6fd41d2d703",
   "metadata": {},
   "source": [
    "### Select inference device\n",
    "[back to top ⬆️](#Table-of-contents:)\n",
    "\n",
    "\n",
     "Select the device from the dropdown list for running inference using OpenVINO.\n",
     "> **Note**: To achieve maximal performance, we recommend using a GPU as the target device if one is available."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 19,
   "id": "6ddd57de-9f41-403c-bccc-8d3118654a24",
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "import requests\n",
    "\n",
    "if not Path(\"notebook_utils.py\").exists():\n",
    "    r = requests.get(\n",
    "        url=\"https://raw.githubusercontent.com/openvinotoolkit/openvino_notebooks/latest/utils/notebook_utils.py\",\n",
    "    )\n",
    "    open(\"notebook_utils.py\", \"w\").write(r.text)\n",
    "\n",
    "from notebook_utils import device_widget\n",
    "\n",
    "device = device_widget(default=\"CPU\", exclude=[\"NPU\", \"AUTO\"])\n",
    "\n",
    "device\n",
    "\n",
    "# Read more about telemetry collection at https://github.com/openvinotoolkit/openvino_notebooks?tab=readme-ov-file#-telemetry\n",
    "from notebook_utils import collect_telemetry\n",
    "\n",
    "collect_telemetry(\"prompt-lookup-decoding.ipynb\")"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "id": "53666c13",
   "metadata": {},
   "source": [
    "## Run target model without prompt lookup decoding\n",
    "[back to top ⬆️](#Table-of-contents:)\n",
    "\n",
     "OpenVINO GenAI provides an easy-to-use API for running text generation. First, we will create the pipeline with `LLMPipeline`. `LLMPipeline` is the main object used for decoding. You can construct it directly from the folder with the converted model. It will automatically load the `main model`, `tokenizer`, `detokenizer` and default `generation configuration`. \n",
     "After that, we will configure the parameters for decoding. \n",
     "Then we simply run the `generate` method and get the output in text format. We do not need to encode the input prompt according to the model's expected template or write post-processing code for decoding logits; `LLMPipeline` handles all of this for us. \n",
     "\n",
     "To obtain intermediate generation results without waiting until generation is finished, we will write a streamer function."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 20,
   "id": "553148f5",
   "metadata": {
    "test_replace": {
     "config.max_new_tokens = 330": "config.max_new_tokens = 10"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "The `prime_fib` function in Python takes an integer `n` and returns the n-th Fibonacci number and also checks if it is prime.\n",
      "\n",
      "The function first checks if `n` is a prime number. If it is not, it returns `None`.\n",
      "\n",
      "If `n` is a prime number, the function first checks if `n` is a multiple of 2. If it is not, it returns `None`.\n",
      "\n",
      "If `n` is a multiple of 2, the function first checks if `n` is a multiple of 4. If it is not, it returns `None`.\n",
      "\n",
      "If `n` is a multiple of 4, the function first checks if `n` is a multiple of 8. If it is not, it returns `None`.\n",
      "\n",
      "If `n` is a multiple of 8, the function first checks if `n` is a multiple of 16. If it is not, it returns `None`.\n",
      "\n",
      "If `n` is a multiple of 16, the function first checks if `n` is a multiple of 32. If it is not, it returns `None`.\n",
      "\n",
      "If `n` is a multiple of 32, the function first checks if `n` is a multiple of 64. If it is not, it returns `None`.\n",
      "\n",
      "If `n` is a multiple of 64, the function first checks if `n` is a multiple of 128. If it is not, it returns `"
     ]
    }
   ],
   "source": [
    "import openvino_genai as ov_genai\n",
    "import time\n",
    "\n",
    "pipe = ov_genai.LLMPipeline(model_path, device.value)\n",
    "\n",
    "config = ov_genai.GenerationConfig()\n",
    "config.max_new_tokens = 330\n",
    "prompt = '''<s>\n",
    "def prime_fib(n: int):\n",
    "    \"\"\"\n",
    "    prime_fib returns n-th number that is a Fibonacci number and it's also prime.\n",
    "    >>> prime_fib(1)\n",
    "    2\n",
    "    >>> prime_fib(2)\n",
    "    3\n",
    "    >>> prime_fib(3)\n",
    "    5\n",
    "    >>> prime_fib(4)\n",
    "    13\n",
    "    >>> prime_fib(5)\n",
    "    89\n",
    "    \"\"\"'''\n",
    "\n",
    "\n",
    "def streamer(subword):\n",
    "    print(subword, end=\"\", flush=True)\n",
     "    # Return a flag indicating whether generation should be stopped.\n",
     "    # False means generation should continue.\n",
    "    return False\n",
    "\n",
    "\n",
    "start_time = time.perf_counter()\n",
    "pipe.generate(prompt, config, streamer=streamer)\n",
    "end_time = time.perf_counter()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 21,
   "id": "c40d9901-ceb2-4c4c-a686-303590292ab3",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Generation time: 6.56s\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "0"
      ]
     },
     "execution_count": 21,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "import gc\n",
    "\n",
    "print(f\"Generation time: {end_time - start_time:.2f}s\")\n",
    "del pipe\n",
    "gc.collect()"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "id": "27a01739-1363-42ef-927f-6a340bdbe7ba",
   "metadata": {},
   "source": [
    "## Run Prompt Lookup decoding pipeline\n",
    "[back to top ⬆️](#Table-of-contents:)\n",
    "\n",
     "To enable prompt lookup decoding in `LLMPipeline`, we should set the `prompt_lookup` parameter to `True`. Additionally, we can provide a `SchedulerConfig` for resource management. \n",
     "We also need to specify two parameters via the generation config: `num_assistant_tokens`, the candidate sequence length to return per iteration, and `max_ngram_size`, the maximum n-gram size to use when looking for matches in the prompt."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 22,
   "id": "9fde1b3c",
   "metadata": {
    "test_replace": {
     "config.max_new_tokens = 330": "config.max_new_tokens = 10"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "The `prime_fib` function in Python takes an integer `n` and returns the n-th Fibonacci number and also checks if it is prime.\n",
      "\n",
      "The function first checks if `n` is a prime number. If it is not, it returns `None`.\n",
      "\n",
      "If `n` is a prime number, the function first checks if `n` is divisible by any of the Fibonacci numbers in the range `[1, n-1]`. If any of these numbers is divisible by `n`, the function returns `None`.\n",
      "\n",
      "Otherwise, the function calculates the Fibonacci numbers `fib_1` and `fib_2` such that `fib_1 + fib_2 = n`.\n",
      "\n",
      "The function then checks if `fib_1` and `fib_2` are not equal to `n`. If they are, the function returns `None`.\n",
      "\n",
      "Finally, the function checks if `fib_1` and `fib_2` are not equal to `n-1`. If they are, the function returns `None`.\n",
      "\n",
      "Therefore, the `prime_fib` function returns the n-th Fibonacci number and also checks if it is prime."
     ]
    }
   ],
   "source": [
    "pipe = ov_genai.LLMPipeline(model_path, device.value, prompt_lookup=True)\n",
    "\n",
    "config = ov_genai.GenerationConfig()\n",
    "config.max_new_tokens = 330\n",
    "config.num_assistant_tokens = 5\n",
    "config.max_ngram_size = 3\n",
    "start_time = time.perf_counter()\n",
    "result = pipe.generate(prompt, config, streamer=streamer)\n",
    "end_time = time.perf_counter()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 23,
   "id": "d9739752-0bd8-4be7-a4cc-c076228bfc91",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Generation time: 4.03s\n"
     ]
    }
   ],
   "source": [
    "print(f\"Generation time: {end_time - start_time:.2f}s\")"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "id": "fd59ed90",
   "metadata": {},
   "source": [
    "## Evaluate Prompt Lookup Decoding on multiple examples\n",
    "[back to top ⬆️](#Table-of-contents:)\n",
    "\n",
    "Configure the data type and the number of examples to run:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 24,
   "id": "64a36dc1-958c-4f7e-baba-efa89a2d9a8f",
   "metadata": {
    "test_replace": {
     "num_samples_to_select = 50": "num_samples_to_select = 1"
    }
   },
   "outputs": [
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "d796e7cb2d1543b694f9b405068f63e5",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "Dropdown(description='Data type:', options=('Code', 'Text'), value='Code')"
      ]
     },
     "execution_count": 24,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "num_samples_to_select = 50\n",
    "\n",
    "import ipywidgets as widgets\n",
    "\n",
    "data_options = [\"Code\", \"Text\"]\n",
    "data_type = widgets.Dropdown(\n",
    "    options=data_options,\n",
    "    value=data_options[0],\n",
    "    description=\"Data type:\",\n",
    "    disabled=False,\n",
    ")\n",
    "data_type"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "id": "8f3486cd",
   "metadata": {},
   "source": [
    "Load the dataset and prepare the prompts:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 25,
   "id": "13f03634",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "loading dataset...\n",
      "Done\n"
     ]
    }
   ],
   "source": [
    "from datasets import load_dataset\n",
    "\n",
    "print(\"loading dataset...\")\n",
    "\n",
    "if data_type.value == \"Code\":\n",
    "    ds = load_dataset(\"openai_humaneval\", split=\"test\")\n",
    "    prompts = ds[\"prompt\"]\n",
    "    prompts = [\"<s>\" + prompts[i] for i in range(num_samples_to_select)]\n",
    "else:\n",
    "    ds = load_dataset(\"abisee/cnn_dailymail\", \"3.0.0\", split=\"test\")\n",
    "    prompts = ds[\"article\"]\n",
    "    prompts = [\n",
     "        \"<|user|> ###\\nArticle: \" + prompts[i] + \"\\n\\nSummarize the above article in 5 sentences.\\n<|end|><|assistant|>\" for i in range(num_samples_to_select)\n",
    "    ]\n",
    "print(\"Done\")"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "id": "be4e20d6",
   "metadata": {},
   "source": [
    "Run auto-regressive generation and get total runtime per example:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 26,
   "id": "1f4ea9e5",
   "metadata": {
    "test_replace": {
     "config.max_new_tokens = 330": "config.max_new_tokens = 10"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Running Auto-Regressive generation...\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "100%|██████████| 50/50 [04:01<00:00,  4.82s/it]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Done\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "25"
      ]
     },
     "execution_count": 26,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "import openvino_genai as ov_genai\n",
    "import time\n",
    "from tqdm import tqdm\n",
    "\n",
    "print(\"Running Auto-Regressive generation...\")\n",
    "pipe = ov_genai.LLMPipeline(model_path, device.value)\n",
    "\n",
    "config = ov_genai.GenerationConfig()\n",
    "config.max_new_tokens = 330\n",
    "\n",
    "times_auto_regressive = []\n",
    "for prompt in tqdm(prompts):\n",
    "    start_time = time.perf_counter()\n",
    "    result = pipe.generate(prompt, config)\n",
    "    end_time = time.perf_counter()\n",
    "    times_auto_regressive.append(end_time - start_time)\n",
    "print(\"Done\")\n",
    "\n",
    "import gc\n",
    "\n",
    "del pipe\n",
    "gc.collect()"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "id": "35dbba92",
   "metadata": {},
   "source": [
    "Now run generation with Prompt Lookup decoding:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 27,
   "id": "d73e9f37",
   "metadata": {
    "test_replace": {
     "config.max_new_tokens = 330": "config.max_new_tokens = 10"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Running Prompt Lookup Decoding generation...\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "100%|██████████| 50/50 [03:03<00:00,  3.68s/it]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Done\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "\n"
     ]
    }
   ],
   "source": [
    "pipe = ov_genai.LLMPipeline(model_path, device.value, prompt_lookup=True)\n",
    "\n",
    "config = ov_genai.GenerationConfig()\n",
    "config.max_new_tokens = 330\n",
    "config.num_assistant_tokens = 5\n",
    "config.max_ngram_size = 3\n",
    "\n",
    "\n",
    "times_prompt_lookup = []\n",
    "print(\"Running Prompt Lookup Decoding generation...\")\n",
    "for prompt in tqdm(prompts):\n",
    "    start_time = time.perf_counter()\n",
    "    result = pipe.generate(prompt, config)\n",
    "    end_time = time.perf_counter()\n",
    "    times_prompt_lookup.append((end_time - start_time))\n",
    "print(\"Done\")"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "id": "a0f4da9c",
   "metadata": {},
   "source": [
    "Now let's calculate the speedup:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 28,
   "id": "ad898772",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "average speedup: 1.37\n"
     ]
    }
   ],
   "source": [
    "avg_speedup = sum([x / y for x, y in zip(times_auto_regressive, times_prompt_lookup)]) / len(prompts)\n",
    "print(f\"average speedup: {avg_speedup:.2f}\")"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": ".venv",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.10.12"
  },
  "openvino_notebooks": {
   "imageUrl": "https://blog.vllm.ai/assets/figures/spec-decode/figure3.png",
   "tags": {
    "categories": [
     "API Overview"
    ],
    "libraries": [],
    "other": [
     "LLM"
    ],
    "tasks": [
     "Text Generation"
    ]
   }
  },
  "widgets": {
   "application/vnd.jupyter.widget-state+json": {
    "state": {
     "c09eb6c800744d31bd23e38d33a82b0a": {
      "model_module": "@jupyter-widgets/base",
      "model_module_version": "2.0.0",
      "model_name": "LayoutModel",
      "state": {}
     },
     "d4e65aeb9fd243c99022f6dede35f3c0": {
      "model_module": "@jupyter-widgets/controls",
      "model_module_version": "2.0.0",
      "model_name": "DescriptionStyleModel",
      "state": {
       "description_width": ""
      }
     },
     "e83ffbfc2136400194e2b1da63bccb26": {
      "model_module": "@jupyter-widgets/controls",
      "model_module_version": "2.0.0",
      "model_name": "DropdownModel",
      "state": {
       "_options_labels": [
        "CPU"
       ],
       "description": "Device:",
       "index": 0,
       "layout": "IPY_MODEL_c09eb6c800744d31bd23e38d33a82b0a",
       "style": "IPY_MODEL_d4e65aeb9fd243c99022f6dede35f3c0"
      }
     }
    },
    "version_major": 2,
    "version_minor": 0
   }
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
