{
 "cells": [
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Text Generation with LoRA via OpenVINO GenAI\n",
    "\n",
    "LoRA, or [Low-Rank Adaptation](https://arxiv.org/abs/2106.09685), is a popular and lightweight training technique used for fine-tuning Large Language and Stable Diffusion Models without needing full model training. Full fine-tuning of larger models (consisting of billions of parameters) is inherently expensive and time-consuming. LoRA works by adding a smaller number of new weights to the model for training, rather than retraining the entire parameter space of the model. This makes training with LoRA much faster, memory-efficient, and produces smaller model weights (a few hundred MBs), which are easier to store and share.\n",
    "\n",
    "At its core, LoRA leverages the concept of low-rank matrix factorization. Instead of updating all the parameters in a neural network, LoRA decomposes the parameter space into two low-rank matrices. This decomposition allows the model to capture essential information with fewer parameters, significantly reducing the amount of data and computation required for fine-tuning. This vastly reduces the storage requirement for large language models adapted to specific tasks and enables efficient task-switching during deployment all without introducing inference latency. \n",
    "\n",
    "![](https://github.com/user-attachments/assets/bf823c71-13b4-402c-a7b4-d6fc30a60d88)\n",
    "\n",
    "Some more advantages of using LoRA:\n",
    "\n",
    "* LoRA makes fine-tuning more efficient by drastically reducing the number of trainable parameters.\n",
    "* The original pre-trained weights are kept frozen, which means you can have multiple lightweight and portable LoRA models for various downstream tasks built on top of them.\n",
    "* LoRA is orthogonal to many other parameter-efficient methods and can be combined with many of them.\n",
    "* Performance of models fine-tuned using LoRA is comparable to the performance of fully fine-tuned models.\n",
    "* LoRA does not add any inference latency because adapter weights can be merged with the base model.\n",
    "\n",
    "More details about LoRA can be found in HuggingFace [conceptual guide](https://huggingface.co/docs/peft/conceptual_guides/lora) and [blog post](https://huggingface.co/blog/peft).\n",
    "  \n",
    "In this tutorial we explore possibilities to use LoRA with OpenVINO Generative API.\n",
    "\n",
    "#### Table of contents:\n",
    "\n",
    "- [Prerequisites](#Prerequisites)\n",
    "- [Prepare models](#Prepare-models)\n",
    "- [Select inference device](#Select-inference-device)\n",
    "- [Create pipeline and generate results via OpenVINO GenAI without LoRA](#Create-pipeline-and-generate-results-via-OpenVINO-GenAI-without-LoRA)\n",
    "- [Create pipeline and generate results via OpenVINO GenAI with LoRA](#Create-pipeline-and-generate-results-via-OpenVINO-GenAI-with-LoRA)\n",
    "    - [Load adapter](#Load-adapter)\n",
    "    - [Initialize pipeline with adapters and run inference](#Initialize-pipeline-with-adapters-and-run-inference)\n",
    "    - [Get information about adapters](#Get-information-about-adapters)\n",
    "    - [Disable adapters](#Disable-adapters)\n",
    "    - [Remove adapter](#Remove-adapter)\n",
    "    - [Selection specific adapter during generation](#Selection-specific-adapter-during-generation)\n",
    "    - [Use several adapters](#Use-several-adapters)\n",
    "\n",
    "\n",
    "### Installation Instructions\n",
    "\n",
    "This is a self-contained example that relies solely on its own code.\n",
    "\n",
    "We recommend  running the notebook in a virtual environment. You only need a Jupyter server to start.\n",
    "For details, please refer to [Installation Guide](https://github.com/openvinotoolkit/openvino_notebooks/blob/latest/README.md#-installation-guide).\n",
    "\n",
    "<img referrerpolicy=\"no-referrer-when-downgrade\" src=\"https://static.scarf.sh/a.png?x-pxid=5b5a4db0-7875-4bfb-bdbd-01698b5b1a77&file=notebooks/llm-lora/llm-lora.ipynb\" />\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Prerequisites\n",
    "[back to top ⬆️](#Table-of-contents:)\n",
    "\n",
    "\n",
    "First, we should install the [OpenVINO GenAI](https://github.com/openvinotoolkit/openvino.genai) for running model inference.\n",
    "\n",
    "![](https://media.githubusercontent.com/media/openvinotoolkit/openvino.genai/refs/heads/master/src/docs/openvino_genai.svg)\n",
    "\n",
    "[OpenVINO™ GenAI](https://github.com/openvinotoolkit/openvino.genai) is a library of the most popular Generative AI model pipelines, optimized execution methods, and samples that run on top of highly performant [OpenVINO Runtime](https://github.com/openvinotoolkit/openvino).\n",
    "\n",
    "This library is friendly to PC and laptop execution, and optimized for resource consumption. It requires no external dependencies to run generative models as it already includes all the core functionality (e.g. tokenization via openvino-tokenizers)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import platform\n",
    "\n",
    "%pip install -q --extra-index-url https://download.pytorch.org/whl/cpu \"torch==2.8\" \"torchvision==0.23.0\" \"transformers==4.53.3\" accelerate pillow \"peft>=0.15.0\"\n",
    "%pip install -q \"git+https://github.com/huggingface/optimum-intel.git\"\n",
    "%pip install -q -U \"openvino>=2024.5.0\" \"openvino-tokenizers>=2024.5.0\" \"openvino-genai>=2024.5.0\"\n",
    "\n",
    "if platform.system() == \"Darwin\":\n",
    "    %pip install -q \"numpy<2.0.0\""
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "metadata": {},
   "outputs": [],
   "source": [
    "import requests\n",
    "from pathlib import Path\n",
    "\n",
    "notebook_utils_path = Path(\"notebook_utils.py\")\n",
    "\n",
    "if not notebook_utils_path.exists():\n",
    "    r = requests.get(\n",
    "        url=\"https://raw.githubusercontent.com/openvinotoolkit/openvino_notebooks/latest/utils/notebook_utils.py\",\n",
    "    )\n",
    "    notebook_utils_path.open(\"w\").write(r.text)\n",
    "\n",
    "# Read more about telemetry collection at https://github.com/openvinotoolkit/openvino_notebooks?tab=readme-ov-file#-telemetry\n",
    "from notebook_utils import collect_telemetry\n",
    "\n",
    "collect_telemetry(\"llm-lora.ipynb\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Prepare models\n",
    "[back to top ⬆️](#Table-of-contents:)\n",
    "\n",
    "As example, we will use already converted LLMs from [OpenVINO collection](https://huggingface.co/collections/OpenVINO/llm-6687aaa2abca3bbcec71a9bd). As example we will use [TinyLlama-1.1B-Chat-v1.0-int8-ov](https://huggingface.co/OpenVINO/TinyLlama-1.1B-Chat-v1.0-int8-ov).\n",
    "\n",
    "In case, if you want run own models, you should convert them using [Hugging Face Optimum](https://huggingface.co/docs/optimum/intel/openvino/export) library accelerated by OpenVINO integration. More details about model preparation can be found in [OpenVINO LLM inference guide](https://docs.openvino.ai/2024/learn-openvino/llm_inference_guide/llm-inference-native-ov.html#convert-hugging-face-tokenizer-and-model-to-openvino-ir-format)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "metadata": {},
   "outputs": [],
   "source": [
    "from pathlib import Path\n",
    "import huggingface_hub as hf_hub\n",
    "\n",
    "model_id = \"OpenVINO/TinyLlama-1.1B-Chat-v1.0-int8-ov\"\n",
    "\n",
    "model_path = Path(model_id.split(\"/\")[-1])\n",
    "\n",
    "if not model_path.exists():\n",
    "    hf_hub.snapshot_download(model_id, local_dir=model_path)"
   ]
  },
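  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "If you convert your own model instead of downloading a pre-converted one, the `optimum-cli` export can look like the following sketch (the model id and output folder below are placeholders; replace them with your own):\n",
    "\n",
    "```bash\n",
    "# Export a Hugging Face model to OpenVINO IR with int8 weight compression\n",
    "optimum-cli export openvino --model TinyLlama/TinyLlama-1.1B-Chat-v1.0 --weight-format int8 TinyLlama-1.1B-Chat-v1.0-int8-ov\n",
    "```\n"
   ]
  },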
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Select inference device\n",
    "[back to top ⬆️](#Table-of-contents:)\n",
    "\n",
    "\n",
    "Select the device from dropdown list for running inference using OpenVINO.\n",
 **Note**: For achieving">
    "> **Note**: For maximal performance, we recommend using a GPU as the target device if one is available."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "a1c4493ce2c94ca7b6f05b350b54c961",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "Dropdown(description='Device:', options=('CPU',), value='CPU')"
      ]
     },
     "execution_count": 17,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from notebook_utils import device_widget\n",
    "\n",
    "device = device_widget(default=\"CPU\", exclude=[\"NPU\", \"AUTO\"])\n",
    "\n",
    "device"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Create pipeline and generate results via OpenVINO GenAI without LoRA\n",
    "[back to top ⬆️](#Table-of-contents:)\n",
    "\n",
    "OpenVINO GenAI provides easy-to-use API for running text generation. Firstly we will create pipeline with `LLMPipeline`. `LLMPipeline` is the main object used for decoding. You can construct it straight away from the folder with the converted model. It will automatically load the `main model`, `tokenizer`, `detokenizer` and default `generation configuration`. \n",
    "After that we will configure parameters for decoding. \n",
    "Then we just run `generate` method and get the output in text format. We do not need to encode input prompt according to model expected template or write post-processing code for logits decoder, it will be done easily with LLMPipeline. "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 18,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Sure, here's a sky blue color:\n",
      "\n",
      "Sky blue is a shade of blue that is typically associated with the sky or the ocean. It is a deep, rich blue color that is often used in fashion, interior design, and photography. Sky blue is a cool color, meaning it is cooler than other warm colors like red, orange, and yellow. It is also a classic color that is often associated with elegance, sophistication, and trust\n"
     ]
    }
   ],
   "source": [
    "import openvino_genai as ov_genai\n",
    "\n",
    "pipe = ov_genai.LLMPipeline(model_path, device.value)\n",
    "\n",
    "print(pipe.generate(\"Give me a sky blue color.\", max_new_tokens=100))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Create pipeline and generate results via OpenVINO GenAI with LoRA\n",
    "[back to top ⬆️](#Table-of-contents:)\n",
    "\n",
    "You can add one or multiple adapters into config and also specify alpha blending coefficients for their addition. OpenVINO GenAI supports LoRA adapters saved in Safetensors format. You can use one of publicly available pretrained adapters from [HuggingFace Hub](https://huggingface.co/models) or train your own.\n",
    "> **Important Note**: Before loading pretrained adapters, please make sure that they are compatible with your base model architecture.\n",
    "\n",
    "Generally, process of adapters configuration consists of 3 steps:\n",
    "1. Load adapters, initialize pipeline with adapters and configure it. Use `openvino_genai.Adapter` to load LoRA. Use `openvino_genai.AdapterConfig` to initialize pipeline with adapters, add and remove adapters or change their weight coefficient for blending into pipeline.\n",
    "2. Register adapters in pipeline constructor. These adapters will influence next generation. But you can also update `adapter_config`, remove some adapters or load a new one and pass it to `generate()` via adapters parameter.\n",
    "3. Choose which adapter (or a combination of adapters) to apply in each `generate` call. It is not obligated to use all of provided in constructor adapters simultaneously, you can select one or combination of several among them for each generation cycle.\n",
    "\n",
    "Adapter could be loaded in next mode:\n",
    "* MODE_AUTO - Automatically selected\n",
    "* MODE_DYNAMIC - A, B, alpha are fully variable\n",
    "* MODE_STATIC_RANK - A and B have static shape, alpha is variable\n",
    "* MODE_STATIC - A, B and alpha are constants\n",
    "* MODE_FUSE - A, B and alpha are constants, fused to main matrix W\n",
    "\n",
    "We will use default MODE_AUTO.\n",
    "\n",
    "Loaded adapters could be added to `adapter_config` via `add(adapter, [alpha])` method or remove via `remove(adapter)`. You can change alpha by `set_alpha(aplha)`.\n",
    "\n",
    "For more information, please, see LoRA adapters [user guide](https://github.com/openvinotoolkit/openvino.genai/blob/master/site/docs/guides/lora-adapters.mdx)."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Load adapter\n",
    "[back to top ⬆️](#Table-of-contents:)\n",
    "\n",
    "\n",
    "Let's try [Javascript/tinyllama-colorist-lora](Javascript/tinyllama-colorist-lora), which was trained on color dataset to fine-tune TinyLLama to be a colorist expert."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 19,
   "metadata": {},
   "outputs": [],
   "source": [
    "from huggingface_hub import hf_hub_download\n",
    "\n",
    "lora_dir = Path(\"lora\")\n",
    "\n",
    "colorista_lora_id = \"Javascript/tinyllama-colorist-lora\"\n",
    "colorita_lora_path = lora_dir / \"tinyllama-colorist-lora\"\n",
    "\n",
    "if not colorita_lora_path.exists():\n",
    "    hf_hub_download(repo_id=colorista_lora_id, filename=\"adapter_model.safetensors\", local_dir=colorita_lora_path)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Initialize pipeline with adapters and run inference\n",
    "[back to top ⬆️](#Table-of-contents:)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 20,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Sure, here's a sky blue color:\n",
      "\n",
      "- RGB: 145, 206, 230\n",
      "- Hex: #c3e6ff\n",
      "- HSL: 120°, 100%, 50%\n",
      "- HSV: 120°, 100%, 50%\n",
      "- CMYK: 0%, 0%, 100%, \n"
     ]
    }
   ],
   "source": [
    "adapter_config = ov_genai.AdapterConfig()\n",
    "\n",
    "colorist_adapter = ov_genai.Adapter(colorita_lora_path / \"adapter_model.safetensors\")\n",
    "adapter_config.add(colorist_adapter, alpha=0.5)\n",
    "\n",
    "pipe_with_adapters = ov_genai.LLMPipeline(model_path, device.value, adapters=adapter_config)\n",
    "print(pipe_with_adapters.generate(\"Give me a sky blue color.\", max_new_tokens=100))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Get information about adapters\n",
    "[back to top ⬆️](#Table-of-contents:)\n",
    "\n",
    "You can get all loaded adapters via `get_adapters()`. To find out what alpha value is used for a particular adapter, it could be used `get_alpha(adapter)`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 21,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Loaded adapters numers:  1\n",
      "Alpha for colorist adapter:  0.5\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "[<openvino_genai.py_openvino_genai.Adapter at 0x7ff6581ebab0>]"
      ]
     },
     "execution_count": 21,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "print(\"Loaded adapters numers: \", len(adapter_config.get_adapters()))\n",
    "print(\"Alpha for colorist adapter: \", adapter_config.get_alpha(colorist_adapter))\n",
    "\n",
    "adapter_config.get_adapters()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Disable adapters\n",
    "[back to top ⬆️](#Table-of-contents:)\n",
    "\n",
    "You can disable adapters providing empty `AdapterConfig` into generate"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 22,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Sure, here's a sky blue color:\n",
      "\n",
      "Sky blue is a shade of blue that is typically associated with the sky or the ocean. It is a deep, rich blue color that is often used in fashion, interior design, and photography. Sky blue is a cool color, which means it is less warm and more cool than warm colors like red or orange. It is often used in the summer months, when the sky is often blue and the weather is warm.\n"
     ]
    }
   ],
   "source": [
    "print(pipe_with_adapters.generate(\"Give me a sky blue color.\", max_new_tokens=100, adapters=ov_genai.AdapterConfig()))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Remove adapter\n",
    "[back to top ⬆️](#Table-of-contents:)\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 23,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Loaded adapters:  0\n"
     ]
    }
   ],
   "source": [
    "adapter_config.remove(colorist_adapter)\n",
    "print(\"Loaded adapters: \", len(adapter_config.get_adapters()))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Let's try more adapters, which fine-tune TinyLLama: [snshrivas10/sft-tiny-chatbot](https://huggingface.co/snshrivas10/sft-tiny-chatbot) and [emilykang/medprob-anatomy_lora](https://huggingface.co/emilykang/medprob-anatomy_lora)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 25,
   "metadata": {},
   "outputs": [],
   "source": [
    "chatbot_lora_id = \"snshrivas10/sft-tiny-chatbot\"\n",
    "chatbot_lora_path = lora_dir / \"sft-tiny-chatbot\"\n",
    "\n",
    "if not chatbot_lora_path.exists():\n",
    "    hf_hub_download(repo_id=chatbot_lora_id, filename=\"adapter_model.safetensors\", local_dir=chatbot_lora_path)\n",
    "\n",
    "med_lora_id = \"therealcyberlord/TinyLlama-1.1B-Medical\"\n",
    "med_lora_path = lora_dir / \"TinyLlama-1.1B-Medical\"\n",
    "\n",
    "if not med_lora_path.exists():\n",
    "    hf_hub_download(repo_id=med_lora_id, filename=\"adapter_model.safetensors\", local_dir=med_lora_path)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 26,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Loaded adapters:  1\n",
      "Alpha for chatbot adapter:  1.0\n"
     ]
    }
   ],
   "source": [
    "chatbot_adapter = ov_genai.Adapter(chatbot_lora_path / \"adapter_model.safetensors\")\n",
    "adapter_config.add(chatbot_adapter)\n",
    "\n",
    "print(\"Loaded adapters: \", len(adapter_config.get_adapters()))\n",
    "print(\"Alpha for chatbot adapter: \", adapter_config.get_alpha(chatbot_adapter))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Selection specific adapter during generation\n",
    "[back to top ⬆️](#Table-of-contents:)\n",
    "\n",
    "Providing adapters argument with `openvino_genai.AdapterConfig` into `generate` allow to select one or several from them."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 27,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "A sky blue color is a shade of blue that is a deep, rich blue with a hint of green. It is a beautiful and calming color that is often associated with nature and peacefulness.\n"
     ]
    }
   ],
   "source": [
    "print(pipe_with_adapters.generate(\"Give me a sky blue color.\", max_new_tokens=100, adapters=adapter_config))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Use several adapters\n",
    "[back to top ⬆️](#Table-of-contents:)\n",
    "\n",
    "\n",
    "Let's add one more adapter to `adapter_config` and put it into `generate()`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 28,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Loaded adapters:  2\n",
      "Alpha for medprob-anatomy_lora adapter:  1.0\n"
     ]
    }
   ],
   "source": [
    "med_adapter = ov_genai.Adapter(med_lora_path / \"adapter_model.safetensors\")\n",
    "adapter_config.add(med_adapter)\n",
    "\n",
    "print(\"Loaded adapters: \", len(adapter_config.get_adapters()))\n",
    "print(\"Alpha for medprob-anatomy_lora adapter: \", adapter_config.get_alpha(med_adapter))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 29,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "The frontal lobe of the brain is a part of the brain that is responsible for decision-making, attention, and impulse control. It is located on the front of the brain and is divided into two halves: the prefrontal cortex and the anterior cingulate cortex. The prefrontal cortex is responsible for higher-level cognitive functions such as planning, reasoning, and decision-making, while the anterior cingulate cortex is responsible for regulating\n"
     ]
    }
   ],
   "source": [
    "print(pipe_with_adapters.generate(\"What is the structure of the frontal lobe of the brain?\", max_new_tokens=100, adapters=adapter_config))"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": ".venv",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.10.12"
  },
  "openvino_notebooks": {
   "imageUrl": "",
   "tags": {
    "categories": [
     "API Overview",
     "First Steps"
    ],
    "libraries": [],
    "other": [
     "LLM"
    ],
    "tasks": [
     "Text Generation"
    ]
   }
  },
  "widgets": {
   "application/vnd.jupyter.widget-state+json": {
    "state": {},
    "version_major": 2,
    "version_minor": 0
   }
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
