{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "208af817",
   "metadata": {},
   "source": [
    "# Create a RAG system on AI PC\n",
    "\n",
    "**Retrieval-augmented generation (RAG)** is a technique for augmenting LLM knowledge with additional, often private or real-time, data. LLMs can reason about wide-ranging topics, but their knowledge is limited to the public data available up to the point in time they were trained on. If you want to build AI applications that can reason about private data or data introduced after a model’s cutoff date, you need to augment the model’s knowledge with the specific information it needs. The process of retrieving the appropriate information and inserting it into the model prompt is known as retrieval-augmented generation (RAG).\n",
    "\n",
    "## Step 1: Launch the Lemonade Server\n",
    "\n",
    "### Lemonade Server Installer\n",
    "\n",
    "The Lemonade Server is available as a standalone tool with a one-click Windows installer `.exe`. Check out the [server spec](https://github.com/onnx/turnkeyml/blob/main/docs/lemonade/server_spec.md) to learn more about the functionality.\n",
    "\n",
    "### GUI Installation\n",
    "\n",
    "> *Note:* you may need to give your browser or OS permission to download or install the .exe.\n",
    "\n",
    "1. Navigate to the [latest release](https://github.com/onnx/turnkeyml/releases/latest).\n",
    "2. Scroll to the bottom and click `Lemonade_Server_Installer.exe` to download.\n",
    "3. Double-click the `Lemonade_Server_Installer.exe` and follow the instructions.\n",
    "\n",
    "### Launch the Lemonade Server\n",
    "\n",
    "Now that the server is installed, you can double-click the desktop shortcut to start the server process.\n",
    "\n",
    "From there, you can connect it to applications that are compatible with the OpenAI completions API.\n",
    "\n",
    "![lemonade.png](./lemonade.png)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "b12ba34c",
   "metadata": {},
   "source": [
    "## Step 2: Create Environment and Install Required Packages"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "23a36720",
   "metadata": {
    "vscode": {
     "languageId": "shellscript"
    }
   },
   "outputs": [],
   "source": [
    "cd {git_repo}/llm/rag\n",
    "conda activate llm-lemonade\n",
    "pip install -r requirements.txt"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "acd72a4f",
   "metadata": {},
   "source": [
    "## Step 3: Run the Demo"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "eaec31c7",
   "metadata": {
    "vscode": {
     "languageId": "shellscript"
    }
   },
   "outputs": [],
   "source": [
    "streamlit run streamlit_rag_lemonade.py"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "ccedc38b",
   "metadata": {},
   "source": [
    "## Framework Analysis\n",
    "\n",
    "With the model in place, we can set up the chatbot interface using Streamlit.\n",
    "\n",
    "![rag.png](./rag_framework.png)\n",
    "\n",
    "A typical RAG application has two main components:\n",
    "\n",
    "- **Indexing**: a pipeline for ingesting data from a source and indexing it. This usually happens offline.\n",
    "\n",
    "- **Retrieval and generation**: the actual RAG chain, which takes the user query at run time, retrieves the relevant data from the index, and then passes it to the model.\n",
    "\n",
    "The most common full sequence from raw data to answer looks like:\n",
    "\n",
    "**Indexing**\n",
    "\n",
    "1. `Load`: First we need to load our data. We’ll use DocumentLoaders for this.\n",
    "\n",
    "2. `Split`: Text splitters break large Documents into smaller chunks. This is useful both for indexing data and for passing it to a model, since large chunks are harder to search over and won’t fit in a model’s finite context window.\n",
    "\n",
    "3. `Store`: We need somewhere to store and index our splits so that they can later be searched over. This is often done using a VectorStore and an Embeddings model.\n",
    "\n",
    "**Retrieval and generation**\n",
    "\n",
    "1. `Retrieve`: Given a user input, relevant splits are retrieved from storage using a Retriever.\n",
    "\n",
    "2. `Generate`: An LLM produces an answer using a prompt that includes the question and the retrieved data.\n"
   ]
  },
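  {
   "cell_type": "markdown",
   "id": "3f7a1c2e",
   "metadata": {},
   "source": [
    "The indexing and retrieval steps above can be sketched with a toy, standard-library-only example. A bag-of-words count vector and cosine similarity stand in for the real embeddings model and VectorStore used by the demo, so this is only an illustration of the pipeline, not the actual implementation:\n",
    "\n",
    "```python\n",
    "import math\n",
    "from collections import Counter\n",
    "\n",
    "def split(text, chunk_size=8):\n",
    "    # Split: break a large document into smaller chunks\n",
    "    words = text.split()\n",
    "    return [\" \".join(words[i:i + chunk_size]) for i in range(0, len(words), chunk_size)]\n",
    "\n",
    "def embed(chunk):\n",
    "    # Store: a bag-of-words count vector stands in for a learned embedding\n",
    "    return Counter(chunk.lower().split())\n",
    "\n",
    "def cosine(a, b):\n",
    "    dot = sum(a[t] * b[t] for t in a)\n",
    "    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))\n",
    "    return dot / norm if norm else 0.0\n",
    "\n",
    "def retrieve(query, index, k=1):\n",
    "    # Retrieve: rank stored chunks by similarity to the user query\n",
    "    q = embed(query)\n",
    "    return sorted(index, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]\n",
    "\n",
    "doc = \"Lemonade Server exposes an OpenAI compatible API. RAG retrieves relevant chunks and inserts them into the prompt.\"\n",
    "index = split(doc)  # Load + Split + Store\n",
    "print(retrieve(\"What API does Lemonade Server expose?\", index)[0])\n",
    "```\n",
    "\n",
    "The top-ranked chunk is then inserted into the model prompt in the generation step."
   ]
  },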
  {
   "cell_type": "markdown",
   "id": "eda6ea41",
   "metadata": {},
   "source": [
    "## Essential Lines to Invoke the Model from Lemonade Server\n",
    "\n",
    "The main function loads the document, initializes the embeddings, creates the vector database, and invokes the model from the Lemonade Server."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "9e9c15b3",
   "metadata": {},
   "outputs": [],
   "source": [
    "st.header(\"LLM RAG with Lemonade\")\n",
    "\n",
    "LEMONADE_BASE_URL = \"http://localhost:8000/api/v0\"\n",
    "VECTOR_DB_DIR = \"vector_dbs\"\n",
    "\n",
    "# Load available models from Lemonade\n",
    "available_models = [\n",
    "    \"Llama-3.2-1B-Instruct-Hybrid\",\n",
    "    \"Llama-3.2-3B-Instruct-Hybrid\",\n",
    "    \"Qwen-1.5-7B-chat-Hybrid\",\n",
    "    \"DeepSeek-R1-Distill-Qwen-7B-Hybrid\",\n",
    "    \"DeepSeek-R1-Distill-Llama-8B-Hybrid\",\n",
    "]\n",
    "\n",
    "LEMONADE_MODEL_ID = st.selectbox(\"Select a model:\", available_models, index=0)\n",
    "\n",
    "# Main function: load the document, initialize the embeddings, create the vector database, and invoke the model\n",
    "def getfinalresponse(document_url, embedding_type, chat_model):\n",
    "    # Use the arguments as passed in; do not overwrite them with globals\n",
    "    embedding_fn = initialize_embedding_fn(embedding_type)\n",
    "    vector_store = get_or_create_embeddings(document_url, embedding_fn)\n",
    "\n",
    "    chat_model_instance = ChatOpenAI(\n",
    "        base_url=LEMONADE_BASE_URL,\n",
    "        model_name=chat_model,\n",
    "        temperature=0.7,\n",
    "        openai_api_key=\"none\"\n",
    "    )\n",
    "    return handle_user_interaction(vector_store, chat_model_instance)"
   ]
  }
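,
  {
   "cell_type": "markdown",
   "id": "7d2b9a10",
   "metadata": {},
   "source": [
    "Under the hood, `ChatOpenAI` sends a standard chat-completions request to the Lemonade Server's OpenAI-compatible endpoint. The sketch below shows the shape of that request, with retrieved chunks stuffed into the user message; the chunk text and question are placeholders, and the real prompt is assembled by `handle_user_interaction`:\n",
    "\n",
    "```python\n",
    "import json\n",
    "\n",
    "retrieved_chunks = [\"Lemonade Server exposes an OpenAI compatible API.\"]  # from the vector store\n",
    "question = \"What API does Lemonade Server expose?\"\n",
    "\n",
    "payload = {\n",
    "    \"model\": \"Llama-3.2-1B-Instruct-Hybrid\",\n",
    "    \"messages\": [\n",
    "        {\"role\": \"system\", \"content\": \"Answer using only the provided context.\"},\n",
    "        {\"role\": \"user\", \"content\": \"Context:\\n\" + \"\\n\".join(retrieved_chunks) + \"\\n\\nQuestion: \" + question},\n",
    "    ],\n",
    "    \"temperature\": 0.7,\n",
    "}\n",
    "\n",
    "# POST this body to http://localhost:8000/api/v0/chat/completions while the\n",
    "# server is running; the response follows the OpenAI chat-completions schema.\n",
    "print(json.dumps(payload, indent=2))\n",
    "```"
   ]
  }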
 ],
 "metadata": {
  "language_info": {
   "name": "python"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
