{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "c2da5bbc",
   "metadata": {},
   "source": [
    "# Qwen3 Agent with OpenVINO GenAI & Smolagents\n",
    "\n",
    "## Accelerated AI Agent Deployment with Speculative Decoding\n",
    "\n",
    "This notebook demonstrates how to implement an intelligent agent using **Qwen3-8B** with **HuggingFace's smolagents** framework and **Intel's OpenVINO GenAI** library. We'll showcase how speculative decoding can significantly accelerate agent inference, achieving up to **1.6x faster performance** compared to standard auto-regressive generation on Intel AI PCs.\n",
    "\n",
    "### What We'll Build\n",
    "\n",
    "- **Qwen3 Agent**: A conversational AI agent powered by the Qwen3-8B language model\n",
    "- **Tool Integration**: Using smolagents framework for seamless tool calling capabilities\n",
    "- **Performance Optimization**: Leveraging OpenVINO GenAI's speculative decoding for faster inference\n",
    "- **Interactive Demo**: A Gradio-based web interface for real-time agent interaction\n",
    "\n",
    "### Key Technologies\n",
    "\n",
    "- **🤖 Qwen3-8B**: Advanced language model from Alibaba with strong reasoning capabilities\n",
    "- **🔧 HuggingFace smolagents**: Lightweight framework for building AI agents with tool-calling abilities\n",
    "- **⚡ Intel OpenVINO GenAI**: High-performance inference library with speculative decoding optimization\n",
    "- **🖥️ Gradio**: User-friendly web interface for interactive demonstrations\n",
     "- **💻 Intel AI PC**: Optimized deployment on Intel AI-accelerated hardware\n",
    "\n",
    "### Performance Benefits\n",
    "\n",
    "By utilizing OpenVINO GenAI's speculative decoding, we achieve:\n",
    "- **1.6x faster inference** compared to standard auto-regressive generation\n",
    "- **Reduced latency** for real-time agent interactions\n",
    "- **Optimized resource utilization** on Intel AI PC hardware\n",
    "\n",
    "Let's get started by setting up our environment and implementing the agent!"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "489d790a",
   "metadata": {},
   "source": [
    "## 🚀 Environment Setup\n",
    "\n",
    "Before we begin, let's set up a clean Python environment and install the required dependencies.\n",
    "\n",
    "### Step 1: Create a New Python Environment\n",
    "\n",
    "We recommend creating a new virtual environment to avoid conflicts with existing packages:\n",
    "\n",
    "```bash\n",
    "# Create a new conda environment (recommended)\n",
    "conda create -n qwen3-agent python=3.11 -y\n",
    "conda activate qwen3-agent\n",
    "\n",
    "# OR create a virtual environment with venv\n",
    "python -m venv qwen3-agent\n",
    "# On Windows:\n",
    "qwen3-agent\\Scripts\\activate\n",
     "# On Linux/macOS:\n",
    "source qwen3-agent/bin/activate\n",
    "```\n",
    "\n",
    "### Step 2: Install Dependencies\n",
    "\n",
     "Install the required packages from the requirements file:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "d0288e6c",
   "metadata": {},
   "outputs": [],
   "source": [
    "!pip install -q -r requirements.txt"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "fe9bf65a",
   "metadata": {},
   "source": [
    "Once you have your environment set up and dependencies installed, you're ready to proceed with the implementation!"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "58a88df3",
   "metadata": {},
   "source": [
    "## 📥 Download the Qwen3-8B Model\n",
    "\n",
    "Next, we need to download the pre-optimized Qwen3-8B model in OpenVINO format. This model has been quantized to INT4 for optimal performance on Intel hardware.\n",
    "\n",
    "The model will be downloaded from HuggingFace Hub and stored locally for use with OpenVINO GenAI."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "d0a4d3a7",
   "metadata": {},
   "outputs": [],
   "source": [
    "import huggingface_hub as hf_hub\n",
    "\n",
    "model_id = \"OpenVINO/Qwen3-8B-int4-ov\"\n",
    "model_path = \"./qwen3-8b-int4-ov\"\n",
    "\n",
    "hf_hub.snapshot_download(model_id, local_dir=model_path)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d813108f",
   "metadata": {},
   "source": [
    "## 🚀 Download the Draft Model for Speculative Decoding\n",
    "\n",
    "For speculative decoding acceleration, we also need to download a smaller draft model (Qwen3-0.6B). This smaller model generates initial predictions that are then verified by the main model, significantly speeding up inference.\n",
    "\n",
    "The draft model works in tandem with the main model to achieve up to 1.6x faster generation speeds."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "fefe6910",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Download the draft model for speculative decoding\n",
    "draft_model_id = \"OpenVINO/Qwen3-pruned-6L-from-0.6B-int8-ov\"\n",
    "draft_model_path = \"./qwen3-pruned-6l-from-0.6b-int8-ov\"\n",
    "\n",
    "hf_hub.snapshot_download(draft_model_id, local_dir=draft_model_path)"
   ]
  },
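  {
   "cell_type": "markdown",
   "id": "7c3f9a1b",
   "metadata": {},
   "source": [
    "To build intuition for how the draft model speeds things up, here is a toy, illustrative sketch of a greedy speculative decoding loop (not the actual OpenVINO GenAI implementation): the draft model cheaply proposes `k` tokens, the target model verifies them left to right, and only where the two models disagree does generation fall back to one token at a time. The `target_next` and `draft_next` callables below are stand-ins for real models.\n",
    "\n",
    "```python\n",
    "def speculative_decode(target_next, draft_next, prompt, k=4, max_new=8):\n",
    "    \"\"\"Toy greedy speculative decoding over integer token ids.\"\"\"\n",
    "    tokens = list(prompt)\n",
    "    while len(tokens) - len(prompt) < max_new:\n",
    "        # 1. Draft model proposes k tokens auto-regressively (cheap)\n",
    "        proposal = list(tokens)\n",
    "        for _ in range(k):\n",
    "            proposal.append(draft_next(proposal))\n",
    "        # 2. Target model verifies the proposals left to right\n",
    "        accepted = 0\n",
    "        for i in range(len(tokens), len(proposal)):\n",
    "            if target_next(proposal[:i]) == proposal[i]:\n",
    "                accepted += 1\n",
    "            else:\n",
    "                break\n",
    "        tokens = proposal[:len(tokens) + accepted]\n",
    "        # 3. The target emits one token itself (the correction on a mismatch)\n",
    "        tokens.append(target_next(tokens))\n",
    "    return tokens[len(prompt):][:max_new]\n",
    "\n",
    "# Toy \"models\" that both count upward, so every draft proposal is accepted\n",
    "target = lambda seq: (seq[-1] + 1) % 10\n",
    "draft = lambda seq: (seq[-1] + 1) % 10\n",
    "speculative_decode(target, draft, [0], k=4, max_new=8)  # → [1, 2, 3, 4, 5, 6, 7, 8]\n",
    "```\n",
    "\n",
    "When the draft agrees with the target most of the time (as a distilled 0.6B model does for Qwen3-8B), each verification pass accepts several tokens at once, which is where the speed-up comes from."
   ]
  },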
  {
   "cell_type": "markdown",
   "id": "4b146d48",
   "metadata": {},
   "source": [
    "## 🖥️ Start the OpenVINO GenAI Server\n",
    "\n",
    "Now we'll start our OpenVINO GenAI server with speculative decoding enabled. \n",
    "\n",
    "The server will expose an OpenAI-compatible API endpoint that we can use with smolagents."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "8ae674a7",
   "metadata": {},
   "outputs": [],
   "source": [
    "import subprocess\n",
    "import time\n",
    "import requests\n",
    "\n",
    "\n",
     "def test_server(host, port, max_retries=10, wait_time=5):\n",
     "    \"\"\"Poll the server's /v1/models endpoint until it responds.\"\"\"\n",
     "    for _ in range(max_retries):\n",
     "        time.sleep(wait_time)\n",
     "        try:\n",
     "            requests.get(f\"http://{host}:{port}/v1/models\", timeout=5)\n",
     "            return True\n",
     "        except requests.RequestException:\n",
     "            continue\n",
     "    return False\n",
    "\n",
    "\n",
    "host = \"localhost\"\n",
    "port = 8000\n",
    "\n",
    "# Check that we don't have a server already running\n",
    "if test_server(host, port, max_retries=1, wait_time=0):\n",
    "    print(\"Server is already running\")\n",
    "else:\n",
    "    # Start the server with speculative decoding\n",
    "    server_process = subprocess.Popen(\n",
    "        [\n",
    "            \"python\",\n",
    "            \"server.py\",\n",
    "            \"--model_path\",\n",
    "            model_path,\n",
    "            \"--draft_path\",\n",
    "            draft_model_path,\n",
    "            \"--host\",\n",
    "            host,\n",
    "            \"--port\",\n",
    "            str(port),\n",
    "        ]\n",
    "    )\n",
    "\n",
    "    # Check that the server is running\n",
    "    if test_server(host, port):\n",
    "        print(f\"Server started with PID: {server_process.pid}\")\n",
    "    else:\n",
    "        # Making sure process is terminated\n",
    "        server_process.terminate()\n",
    "        server_process.wait()\n",
     "        print(\"Server failed to start\")"
   ]
  },
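  {
   "cell_type": "markdown",
   "id": "b4e7d2a9",
   "metadata": {},
   "source": [
    "Before wiring up the agent, you can sanity-check the server's OpenAI-compatible endpoint directly. A minimal sketch, assuming the server implements the standard `/v1/chat/completions` route and returns the usual OpenAI response schema (`build_chat_payload` and `chat_once` are illustrative helpers, not part of any library):\n",
    "\n",
    "```python\n",
    "import requests\n",
    "\n",
    "def build_chat_payload(prompt, model=\"Qwen3-8B\", max_tokens=32):\n",
    "    # Standard OpenAI chat-completions request body\n",
    "    return {\n",
    "        \"model\": model,\n",
    "        \"messages\": [{\"role\": \"user\", \"content\": prompt}],\n",
    "        \"max_tokens\": max_tokens,\n",
    "    }\n",
    "\n",
    "def chat_once(prompt, host=\"localhost\", port=8000):\n",
    "    resp = requests.post(\n",
    "        f\"http://{host}:{port}/v1/chat/completions\",\n",
    "        json=build_chat_payload(prompt),\n",
    "        timeout=120,\n",
    "    )\n",
    "    resp.raise_for_status()\n",
    "    return resp.json()[\"choices\"][0][\"message\"][\"content\"]\n",
    "\n",
    "# chat_once(\"Say hello in one word.\")  # requires the server started above\n",
    "```"
   ]
  },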
  {
   "cell_type": "markdown",
   "id": "17aa432c",
   "metadata": {},
   "source": [
    "## 🤖 Initialize the Smolagents Demo\n",
    "\n",
    "Now that our server is running, we can create a Qwen3 agent using smolagents. The agent will communicate with our OpenVINO GenAI server through OpenAI-compatible API calls.\n",
    "\n",
    "We'll set up:\n",
    "1. A model wrapper that sends requests to our local server\n",
    "2. A tool-calling agent with access to basic tools from smolagents"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "3757a1b5",
   "metadata": {},
   "outputs": [],
   "source": [
    "from smolagents import OpenAIServerModel, ToolCallingAgent\n",
    "\n",
    "# Configuration\n",
    "model_id = \"Qwen3-8B\"\n",
    "host = \"localhost\"\n",
    "port = 8000\n",
     "enable_thinking = False  # Set to True to enable Qwen3's thinking mode"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "5816c972",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Configure thinking and generation parameters according to Qwen3 recommendations\n",
    "extra_body = {\"chat_template_kwargs\": {\"enable_thinking\": enable_thinking}}\n",
    "\n",
     "# Qwen3's recommended sampling parameters differ between thinking and non-thinking mode\n",
    "if enable_thinking:\n",
    "    generation_params = {\n",
    "        \"temperature\": 0.6,\n",
    "        \"top_p\": 0.95,\n",
    "    }\n",
    "else:\n",
    "    generation_params = {\n",
    "        \"temperature\": 0.7,\n",
    "        \"top_p\": 0.8,\n",
    "    }\n",
    "\n",
    "# Initialize the model wrapper\n",
    "api_base = f\"http://{host}:{port}/v1\"\n",
    "model = OpenAIServerModel(model_id, api_base=api_base, api_key=\"None\", max_tokens=4096, extra_body=extra_body, **generation_params)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "a738dbf7",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Initialize the tool-calling agent\n",
    "agent = ToolCallingAgent(\n",
    "    tools=[],  # Start with no custom tools\n",
    "    model=model,\n",
    "    add_base_tools=True,  # Include basic smolagents toolbox\n",
    "    stream_outputs=True,  # Enable streaming for real-time responses\n",
    "    planning_interval=None,\n",
    ")\n",
    "\n",
     "# Add pptx to the python interpreter's authorized imports\n",
    "agent.tools[\"python_interpreter\"].authorized_imports.append(\"pptx.*\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "99ed29ae",
   "metadata": {},
   "source": [
    "## 🎯 Test the Agent\n",
    "\n",
     "Let's test our Qwen3 agent by asking it about OpenVINO GenAI. The agent will reason over the request and use its available tools (plus Qwen3's thinking mode, if enabled) to provide a comprehensive response."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "3a188167",
   "metadata": {},
   "outputs": [],
   "source": [
    "agent.run(\"What is OpenVINO GenAI? What are the latest features?\", reset=True)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "1cb33d8e",
   "metadata": {},
   "source": [
    "## 🚀 Launch Interactive Gradio Demo\n",
    "\n",
    "Now let's create an interactive web interface using Gradio. This will provide a user-friendly chat interface where you can interact with the Qwen3 agent in real-time, showcasing the accelerated performance from OpenVINO GenAI's speculative decoding."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "1e2352a7",
   "metadata": {},
   "outputs": [],
   "source": [
    "from smolagents import GradioUI\n",
    "\n",
    "# Create and launch the Gradio interface\n",
    "demo = GradioUI(agent)\n",
    "demo.launch(share=False)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f6cceaf1",
   "metadata": {},
   "source": [
    "## 🎉 Summary\n",
    "\n",
    "Congratulations! You've successfully implemented and deployed a high-performance Qwen3 agent with accelerated inference. Here's what we accomplished:\n",
    "\n",
    "### What We Built:\n",
    "- **OpenVINO GenAI Server**: Deployed Qwen3-8B with speculative decoding using a 0.6B draft model\n",
    "- **Smolagents Integration**: Created a tool-calling agent with access to base tools\n",
    "- **Interactive Interface**: Launched a Gradio web UI for real-time interaction\n",
    "\n",
    "### Key Performance Benefits:\n",
    "- **1.6x faster inference** compared to standard auto-regressive generation\n",
    "- **Optimized for Intel hardware** using OpenVINO's acceleration\n",
     "- **Real-time tool calling** with optional Qwen3 thinking mode\n",
    "\n",
    "### Technical Stack:\n",
    "- **Intel OpenVINO GenAI** for optimized inference\n",
    "- **HuggingFace Smolagents** for agent framework\n",
    "- **Qwen3 models** (8B target + 0.6B draft) for speculative decoding\n",
    "- **Gradio** for interactive web interface\n",
    "\n",
    "### Expanding Agent Capabilities:\n",
    "With smolagents, you can easily extend your agent by:\n",
    "- **Custom Tools**: Add domain-specific tools for specialized tasks\n",
    "- **MCP Integration**: Connect to Model Context Protocol servers for external capabilities\n",
    "- **Sub-Agents**: Create hierarchical agent systems with specialized sub-agents\n",
    "\n",
    "You now have a working demonstration of accelerated AI agent inference on Intel AI PCs with tool-calling capabilities!"
   ]
  }
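,
  {
   "cell_type": "markdown",
   "id": "e9c1f5d7",
   "metadata": {},
   "source": [
    "As a starting point for the \"Custom Tools\" extension mentioned above, here is a minimal sketch of a custom smolagents tool built with the `@tool` decorator (the tool itself is illustrative; smolagents uses the type hints and the `Args:` docstring section to describe the tool to the model):\n",
    "\n",
    "```python\n",
    "from smolagents import tool\n",
    "\n",
    "@tool\n",
    "def count_words(text: str) -> int:\n",
    "    \"\"\"Counts the number of words in a piece of text.\n",
    "\n",
    "    Args:\n",
    "        text: The text whose words should be counted.\n",
    "    \"\"\"\n",
    "    return len(text.split())\n",
    "\n",
    "# Pass custom tools to the agent at construction time:\n",
    "# agent = ToolCallingAgent(tools=[count_words], model=model, add_base_tools=True)\n",
    "```"
   ]
  }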
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.11.13"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
