{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Accelerate Qwen3-8B with Speculative Decoding and Efficient Draft Models\n",
    "\n",
    "[Qwen3-8B](https://huggingface.co/Qwen/Qwen3-8B) is part of the latest [Qwen family](https://qwenlm.github.io/blog/qwen3/), trained with explicit agentic capabilities. It supports tool invocation, multi-step reasoning, and long context, making it well-suited for agent workflows. Integrated with agentic frameworks such as Hugging Face SmolAgents and QwenAgent, it enables a wide range of agentic applications involving tool calling and reasoning\n",
    "\n",
    "In this notebook we will demonstrate how to speedup Qwen3-8B inference with speculative decoding using OpenVINO GenAI library."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Table of contents:\n",
    "\n",
    "- [Prerequisites](#Prerequisites)\n",
    "- [Generation with OpenVINO GenAI](#Generation-with-OpenVINO-GenAI)\n",
    "- [Accelerated Generation with Speculative Decoding](#Accelerated-Generation-with-Speculative-Decoding)\n",
    "- [Further Accelerate Qwen3-8B by Draft Layer Pruning ](#Further-Accelerate-Qwen3-8B-by-Draft-Layer-Pruning)\n",
    "- [Compute Average Speedup Gain of Speculative Decoding with Qwen3-8B model](#Compute-Average-Speedup-Gain-of-Speculative-Decoding-with-Qwen3-8B-model)\n",
    "\n"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Prerequisites\n",
    "\n",
    "> Note: we recommend running this notebook in a virtual environment. \n",
    "\n",
    "Install required dependencies"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "!pip install -q -r ./smolagents/requirements.txt"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Generation with OpenVINO GenAI\n",
    "We can simply run generation with the Qwen3-8B model using OpenVINO GenAI library.\n",
    "[OpenVINO™ GenAI](https://github.com/openvinotoolkit/openvino.genai) is a library of the most popular Generative AI model pipelines, optimized execution methods, and samples that run on top of highly performant OpenVINO Runtime.\n",
    "This library is friendly to PC and laptop execution, and optimized for resource consumption. It requires no external dependencies to run generative models as it already includes all the core functionality (e.g. tokenization via openvino-tokenizers)."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Prepare Model\n",
    "\n",
    "First we will download Qwen3-8B model converted to the OpenVINO™ IR (Intermediate Representation) format with weights compressed to INT4 by [NNCF](https://github.com/openvinotoolkit/nncf), available in the OpenVINO LLM collection [Qwen3-8b-int4-ov](https://huggingface.co/OpenVINO/Qwen3-8B-int4-ov ):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from pathlib import Path\n",
    "import huggingface_hub as hf_hub\n",
    "\n",
    "model_id = \"OpenVINO/Qwen3-8B-int4-ov\"\n",
    "model_path = Path(model_id.split(\"/\")[-1])\n",
    "\n",
    "if not model_path.exists():\n",
    "    hf_hub.snapshot_download(model_id, local_dir=model_path)"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Instantiate a pipeline with OpenVINO Generate API\n",
    "\n",
    "We will use [OpenVINO Generate API](https://github.com/openvinotoolkit/openvino.genai/blob/master/src/README.md) to create pipelines to run an inference with OpenVINO Runtime. \n",
    "\n",
    "Firstly we need to create a pipeline with `LLMPipeline`. `LLMPipeline` is the main object used for text generation using LLM in OpenVINO GenAI API. You can construct it straight away from the folder with the downloaded model. We will provide directory with model and device for `LLMPipeline`. Additionally we provide `SchedulerConfig` for resource management.  Then we can run `generate` method and get the output in text format."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import openvino_genai as ov_genai\n",
    "import time\n",
    "\n",
    "\n",
    "def streamer(subword):\n",
    "    print(subword, end=\"\", flush=True)\n",
    "    return False\n",
    "\n",
    "\n",
    "# select device for inference\n",
    "device = \"GPU\"\n",
    "\n",
    "# define scheduler\n",
    "scheduler_config = ov_genai.SchedulerConfig()\n",
    "scheduler_config.num_kv_blocks = 200\n",
    "scheduler_config.dynamic_split_fuse = False\n",
    "scheduler_config.max_num_batched_tokens = 8192\n",
    "scheduler_config.enable_prefix_caching = False\n",
    "scheduler_config.use_cache_eviction = False\n",
    "\n",
    "# create a pipeline for generation\n",
    "pipe = ov_genai.LLMPipeline(model_path, device, scheduler_config=scheduler_config)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "\n",
    "After instantiating the pipeline we are ready to generate with the model.\n",
    "\n",
    "We can configure parameters for decoding. We can create the default config with `ov_genai.GenerationConfig()`, setup parameters, and apply the updated version with `set_generation_config(config)` or put config directly to `generate()`. Since our prompt is already formatted in the Qwen3 chat-template format, we set the generation-config 'apply_chat_template' parameter to 'False'.\n",
    "To get a more accurate measurement of the generation time, we add a warmup generation step before the actual generation to let the model allocate memory and compile any kernels it needs to reach its full potential."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import time\n",
    "\n",
    "generation_config = ov_genai.GenerationConfig()\n",
    "generation_config.apply_chat_template = False\n",
    "\n",
    "input_prompt = \"\"\"<|im_start|>user\n",
    "In one sentence, explain what blockchain is.\n",
    "<|im_end|>\n",
    "<|im_start|>assistant\n",
    "\"\"\"\n",
    "\n",
    "# We will first do a short warmup to the model so the time measurement will not include the warmup overhead.\n",
    "generation_config.max_new_tokens = 100\n",
    "pipe.generate(input_prompt, generation_config)\n",
    "\n",
    "# Now we can measure the time and see the result\n",
    "generation_config.max_new_tokens = 2048\n",
    "\n",
    "start = time.perf_counter()\n",
    "result = pipe.generate([input_prompt], generation_config, streamer)\n",
    "# we don't include TTFT in speedup measurement\n",
    "ar_gen_time = time.perf_counter() - start - (result.perf_metrics.get_ttft().mean / 1000)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "print(f\"Generation took {ar_gen_time:.3f} seconds\")\n",
    "\n",
    "import gc\n",
    "\n",
    "# del pipe\n",
    "# gc.collect()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Accelerated Generation with Speculative Decoding\n",
    "Speculative decoding is a lossless decoding paradigm introduced in a recent [ICML paper](https://arxiv.org/abs/2211.17192) for accelerating auto-regressive generation with LLMs.\n",
    "The method aims to mitigate the inherent latency bottleneck caused by the sequential nature of auto-regressive generation.\n",
    "Speculative decoding employs a draft language model to generate a block of \\(\\gamma\\) candidate tokens.\n",
    "The LLM, referred to as the target model, then processes these candidate tokens in parallel.\n",
    "The algorithm examines each token's probability distribution, calculated by both the target and draft models, to determine whether the token should be accepted or rejected.\n",
    "\n",
    "In this section we will demonstrate how to accelerate the generation of Qwen3-8B using speculative-decoding, with the open source Qwen3-0.6B model as the draft model.\n",
    "\n",
    "The [Qwen3‑0.6B](https://huggingface.co/Qwen/Qwen3-0.6B) model is a compact yet capable language model in Alibaba’s Qwen3 series, featuring just 0.6 billion parameters (~0.44B non-embedding), 28 layers, and a 32K token context window, making it ideal for edge or low-resource deployments. Despite its small size, it's built atop the same hybrid reasoning architecture as its larger siblings, supporting both thinking mode (for logical reasoning, math, and code) and non-thinking mode (for fast, conversational responses) within a unified framework.\n",
    "\n",
    "We will first download the draft-model and then we will use it to initialize a speculative-decoding generation pipeline:"
   ]
  },
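  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To build intuition, the acceptance rule from the paper can be sketched in a few lines of Python (a minimal illustration only; OpenVINO GenAI implements this internally). A drafted token with target probability `p` and draft probability `q` is accepted with probability `min(1, p / q)`; on rejection, a replacement token is sampled from the normalized residual distribution `max(0, p - q)`, which is what keeps the scheme lossless:\n",
    "\n",
    "```python\n",
    "import random\n",
    "\n",
    "def accept_draft_token(p_target: float, p_draft: float) -> bool:\n",
    "    # Accept the candidate with probability min(1, p_target / p_draft);\n",
    "    # accepted tokens are distributed exactly as the target model's samples.\n",
    "    return random.random() < min(1.0, p_target / p_draft)\n",
    "```"
   ]
  },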
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from pathlib import Path\n",
    "import huggingface_hub as hf_hub\n",
    "\n",
    "draft_model_id = \"OpenVINO/Qwen3-0.6B-int8-ov\"\n",
    "draft_model_path = Path(draft_model_id.split(\"/\")[-1])\n",
    "\n",
    "if not draft_model_path.exists():\n",
    "    hf_hub.snapshot_download(draft_model_id, local_dir=draft_model_path)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Define schedulers for main/draft models\n",
    "\n",
    "draft_device = \"GPU\"\n",
    "scheduler_config = ov_genai.SchedulerConfig()\n",
    "scheduler_config.num_kv_blocks = 200\n",
    "scheduler_config.dynamic_split_fuse = False\n",
    "scheduler_config.max_num_batched_tokens = 8192\n",
    "scheduler_config.enable_prefix_caching = False\n",
    "scheduler_config.use_cache_eviction = False\n",
    "\n",
    "draft_scheduler_config = ov_genai.SchedulerConfig()\n",
    "draft_scheduler_config.num_kv_blocks = 200\n",
    "draft_scheduler_config.dynamic_split_fuse = False\n",
    "draft_scheduler_config.max_num_batched_tokens = 8192\n",
    "\n",
    "draft_model = ov_genai.draft_model(draft_model_path, draft_device, scheduler_config=draft_scheduler_config)\n",
    "\n",
    "# create a pipeline with a draft model for generation with speculative decoding\n",
    "pipe = ov_genai.LLMPipeline(model_path, device, draft_model=draft_model, scheduler_config=scheduler_config)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now we are ready to generate with our speculative decoding pipeline. We will run a small warmup step before measuring the actual generation time"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Define in the generation config the numbers of candidates generated by draft_model per iteration\n",
    "generation_config.num_assistant_tokens = 3\n",
    "generation_config.apply_chat_template = False\n",
    "\n",
    "# Again we will do a short warmup before measuring time for the model\n",
    "generation_config.max_new_tokens = 100\n",
    "pipe.generate(input_prompt, generation_config)\n",
    "\n",
    "# Now we can measure the time and see the result\n",
    "generation_config.max_new_tokens = 2048\n",
    "\n",
    "start = time.perf_counter()\n",
    "result = pipe.generate([input_prompt], generation_config, streamer)\n",
    "# we don't include TTFT in speedup measurement\n",
    "sd_gen_time = time.perf_counter() - start - (result.perf_metrics.get_ttft().mean / 1000)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "print(f\"Generation took {sd_gen_time:.3f} seconds\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Let's calculate the speedup achieved when accelerating using Qwen3-0.6B as a draft for the specific example we used:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "print(f\"End to end speedup with speculative decoding is {ar_gen_time / sd_gen_time:.2f}x\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Further Accelerate Qwen3-8B by Draft Layer Pruning \n",
    "\n",
    "Layer pruning is a model compression technique for large language models (LLMs) that reduces inference cost by removing entire transformer layers from the network. We leveraged this technique to generate a smaller yet qualitative draft-model - [pruned-qwen3-draft](https://huggingface.co/OpenVINO/Qwen3-pruned-6L-from-0.6B-int8-ov), eventually further accelerating the speculative-decoding generation of Qwen3-8B model.\n",
    "\n",
    "We will download the enhanced draft-model and repeat the measurement in previous section to demonstrate the speedup improvement:\n"
   ]
  },
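  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a rough sketch of the idea (assuming the Hugging Face `transformers` API; the published pruned draft was produced with its own recipe, not by this snippet alone), dropping decoder layers from a Qwen3 checkpoint could look like:\n",
    "\n",
    "```python\n",
    "import torch.nn as nn\n",
    "from transformers import AutoModelForCausalLM\n",
    "\n",
    "model = AutoModelForCausalLM.from_pretrained(\"Qwen/Qwen3-0.6B\")\n",
    "# Keep a subset of the 28 decoder layers, e.g. 6 of them\n",
    "keep = [0, 5, 10, 15, 20, 27]\n",
    "model.model.layers = nn.ModuleList(model.model.layers[i] for i in keep)\n",
    "model.config.num_hidden_layers = len(keep)\n",
    "```\n",
    "\n",
    "Pruned models are typically fine-tuned afterwards to recover draft quality."
   ]
  },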
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from pathlib import Path\n",
    "import huggingface_hub as hf_hub\n",
    "\n",
    "draft_model_id = \"OpenVINO/Qwen3-pruned-6L-from-0.6B-int8-ov\"\n",
    "draft_model_path = Path(draft_model_id.split(\"/\")[-1])\n",
    "\n",
    "if not draft_model_path.exists():\n",
    "    hf_hub.snapshot_download(draft_model_id, local_dir=draft_model_path)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Define schedulers for main/draft models\n",
    "\n",
    "draft_device = \"GPU\"\n",
    "scheduler_config = ov_genai.SchedulerConfig()\n",
    "scheduler_config.num_kv_blocks = 200\n",
    "scheduler_config.dynamic_split_fuse = False\n",
    "scheduler_config.max_num_batched_tokens = 8192\n",
    "scheduler_config.enable_prefix_caching = False\n",
    "scheduler_config.use_cache_eviction = False\n",
    "\n",
    "draft_scheduler_config = ov_genai.SchedulerConfig()\n",
    "draft_scheduler_config.num_kv_blocks = 200\n",
    "draft_scheduler_config.dynamic_split_fuse = False\n",
    "draft_scheduler_config.max_num_batched_tokens = 8192\n",
    "\n",
    "draft_model = ov_genai.draft_model(draft_model_path, draft_device, scheduler_config=draft_scheduler_config)\n",
    "\n",
    "# create a pipeline with a draft model for generation with speculative decoding\n",
    "pipe = ov_genai.LLMPipeline(model_path, device, draft_model=draft_model, scheduler_config=scheduler_config)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# We need to define in the generation config how many tokens the draft should predict in each cycle\n",
    "generation_config.num_assistant_tokens = 3\n",
    "generation_config.apply_chat_template = False\n",
    "\n",
    "# Again we will do a short warmup before measuring time for the model\n",
    "generation_config.max_new_tokens = 100\n",
    "pipe.generate(input_prompt, generation_config)\n",
    "\n",
    "# Now we can measure the time and see the result\n",
    "generation_config.max_new_tokens = 2048\n",
    "\n",
    "start = time.perf_counter()\n",
    "result = pipe.generate([input_prompt], generation_config, streamer)\n",
    "accelerated_sd_gen_time = time.perf_counter() - start - (result.perf_metrics.get_ttft().mean / 1000)  # we don't include TTFT in speedup measurement"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "print(f\"Generation took {accelerated_sd_gen_time:.3f} seconds\")\n",
    "print(f\"End to end speedup with speculative decoding is {ar_gen_time / accelerated_sd_gen_time:.2f}x\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Compute Average Speedup Gain of Speculative Decoding with Qwen3-8B model\n",
    "\n",
    "In this section we measure the average speedup gain of speculative-decoding with Qwen3-8B model over multiple examples. \n",
    "We use a small mix of summarization, reasoning, math and classification examples for the speedup evaluation. The examples are taken from the [CNN/DailyMail](https://huggingface.co/datasets/abisee/cnn_dailymail) and the [MT-Bench](https://huggingface.co/datasets/philschmid/mt-bench) datasets.  \n",
    "We run the model on these examples twice: first without speculative-decoding, then with speculative-decoding. We compute the total time ratio of the two methods- which is the speedup, and finally we compute the average speedup over all tested examples."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 1. Run target model without speculative decoding\n",
    "We will first run generation without speculative-decoding, but this time we will run it over multiple examples. The examples are provided in a json file."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import openvino_genai as ov_genai\n",
    "import sys\n",
    "import time\n",
    "from tqdm import tqdm\n",
    "\n",
    "print(f\"Loading model from {model_path}\")\n",
    "\n",
    "# Define scheduler\n",
    "scheduler_config = ov_genai.SchedulerConfig()\n",
    "scheduler_config.num_kv_blocks = 200\n",
    "scheduler_config.dynamic_split_fuse = False\n",
    "scheduler_config.max_num_batched_tokens = 8192\n",
    "scheduler_config.enable_prefix_caching = False\n",
    "scheduler_config.use_cache_eviction = False\n",
    "\n",
    "pipe = ov_genai.LLMPipeline(model_path, device, scheduler_config=scheduler_config)\n",
    "\n",
    "generation_config = ov_genai.GenerationConfig()\n",
    "generation_config.apply_chat_template = False\n",
    "\n",
    "print(\"Loading prompts...\")\n",
    "import json\n",
    "\n",
    "f = open(\"prompts.json\")\n",
    "prompts = json.load(f)\n",
    "\n",
    "# We will first do a short warmup to the model so the time measurement will not include the warmup overhead.\n",
    "generation_config.max_new_tokens = 100\n",
    "pipe.generate(\"This is a warmup prompt\", generation_config)\n",
    "\n",
    "# finished warmup step, let's run our examples\n",
    "generation_config.max_new_tokens = 2048\n",
    "times_auto_regressive = []\n",
    "print(\"Running Auto-Regressive generation...\")\n",
    "for prompt in tqdm(prompts):\n",
    "    start_time = time.perf_counter()\n",
    "    result = pipe.generate(prompt, generation_config)\n",
    "    end_time = time.perf_counter()\n",
    "    times_auto_regressive.append(end_time - start_time)\n",
    "print(\"Done\")\n",
    "\n",
    "import gc\n",
    "\n",
    "del pipe\n",
    "gc.collect()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 2. Run target model with speculative decoding\n",
    "Now we will run generation with speculative-decoding over the same examples. \n",
    "\n",
    "In the following dropdown list you can choose which draft to use for the speculative-decoding pipeline:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import ipywidgets as widgets\n",
    "\n",
    "draft_model_name = widgets.Dropdown(\n",
    "    options=[\"Qwen3-0.6B-int8-ov\", \"Qwen3-0.6B-pruned-int8-ov\"],\n",
    "    value=\"Qwen3-0.6B-int8-ov\",  # default value\n",
    "    description=\"Select Draft Model:\",\n",
    ")\n",
    "\n",
    "draft_model_name"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import openvino_genai as ov_genai\n",
    "import time\n",
    "from tqdm import tqdm\n",
    "\n",
    "draft_model_path = draft_model_name.value\n",
    "\n",
    "print(\"Loading prompts...\")\n",
    "import json\n",
    "\n",
    "f = open(\"prompts.json\")\n",
    "prompts = json.load(f)\n",
    "\n",
    "# Define scheduler\n",
    "scheduler_config = ov_genai.SchedulerConfig()\n",
    "scheduler_config.num_kv_blocks = 200\n",
    "scheduler_config.dynamic_split_fuse = False\n",
    "scheduler_config.max_num_batched_tokens = 8192\n",
    "scheduler_config.enable_prefix_caching = False\n",
    "scheduler_config.use_cache_eviction = False\n",
    "# Define scheduler for the draft\n",
    "\n",
    "draft_scheduler_config = ov_genai.SchedulerConfig()\n",
    "draft_scheduler_config.num_kv_blocks = 200\n",
    "draft_scheduler_config.dynamic_split_fuse = False\n",
    "draft_scheduler_config.max_num_batched_tokens = 8192\n",
    "\n",
    "draft_model = ov_genai.draft_model(draft_model_path, device, scheduler_config=draft_scheduler_config)\n",
    "\n",
    "pipe = ov_genai.LLMPipeline(model_path, device, draft_model=draft_model, scheduler_config=scheduler_config)\n",
    "\n",
    "generation_config = ov_genai.GenerationConfig()\n",
    "generation_config.num_assistant_tokens = 3\n",
    "generation_config.apply_chat_template = False\n",
    "\n",
    "# Again, We will first do a short warmup\n",
    "generation_config.max_new_tokens = 100\n",
    "pipe.generate(\"This is a warmup prompt\", generation_config)\n",
    "\n",
    "# finished warmup step, let's run our examples\n",
    "generation_config.max_new_tokens = 2048\n",
    "times_speculative_decoding = []\n",
    "\n",
    "print(\"Running Speculative Decoding generation...\")\n",
    "for prompt in tqdm(prompts):\n",
    "    start_time = time.perf_counter()\n",
    "    result = pipe.generate(prompt, generation_config)\n",
    "    end_time = time.perf_counter()\n",
    "    times_speculative_decoding.append(end_time - start_time)\n",
    "print(\"Done\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 3. Calculate average speedup\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "avg_speedup = sum([x / y for x, y in zip(times_auto_regressive, times_speculative_decoding)]) / len(prompts)\n",
    "print(f\"average speedup: {avg_speedup:.2f}\")"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.12.11"
  },
  "openvino_notebooks": {
   "imageUrl": "https://user-images.githubusercontent.com/29454499/255799218-611e7189-8979-4ef5-8a80-5a75e0136b50.png",
   "tags": {
    "categories": [
     "Model Demos",
     "AI Trends"
    ],
    "libraries": [],
    "other": [
     "LLM"
    ],
    "tasks": [
     "Text Generation",
     "Conversational"
    ]
   }
  },
  "widgets": {
   "application/vnd.jupyter.widget-state+json": {
    "state": {},
    "version_major": 2,
    "version_minor": 0
   }
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
