{
 "nbformat": 4,
 "nbformat_minor": 0,
 "metadata": {
  "colab": {
   "provenance": []
  },
  "kernelspec": {
   "name": "python3",
   "display_name": "Python 3"
  },
  "language_info": {
   "name": "python"
  }
 },
 "cells": [
  {
   "cell_type": "markdown",
   "source": [
    "# RAG with HoneyHive Tracing"
   ],
   "metadata": {
    "id": "OSLubIkqF23O"
   }
  },
  {
   "cell_type": "markdown",
   "source": [
    "**HoneyHive is an AI monitoring and evaluation platform** that helps developers and businesses build, track, and improve reliable AI applications. It offers tools for performance monitoring, dataset management, debugging, and collaboration, ensuring AI systems run smoothly and scale effectively in production."
   ],
   "metadata": {
    "id": "SVWriapEIZ79"
   }
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "pg4lPOhYgu6v"
   },
   "outputs": [],
   "source": [
    "!pip install lancedb honeyhive sentence-transformers openai pandas"
   ]
  },
  {
   "cell_type": "markdown",
   "source": [
    "## Step 1: Initialize Clients and Setup\n",
    "First, set up the necessary clients and configuration for HoneyHive, OpenAI, and LanceDB:"
   ],
   "metadata": {
    "id": "qCkvUfB4CZEy"
   }
  },
  {
   "cell_type": "code",
   "source": [
    "import os\n",
    "import sys\n",
    "import logging\n",
    "import pandas as pd\n",
    "import lancedb\n",
    "from lancedb.pydantic import LanceModel, Vector\n",
    "from lancedb.embeddings import get_registry\n",
    "import openai\n",
    "from honeyhive import HoneyHiveTracer, trace\n",
    "from typing import List, Dict, Any\n",
    "\n",
    "# Configure logging\n",
    "logging.basicConfig(\n",
    "    level=logging.INFO,\n",
    "    format=\"%(asctime)s - %(name)s - %(levelname)s - %(message)s\",\n",
    "    handlers=[\n",
    "        logging.FileHandler(\"rag_pipeline.log\"),\n",
    "        logging.StreamHandler(sys.stdout),\n",
    "    ],\n",
    ")\n",
    "logger = logging.getLogger(\"lancedb_rag\")\n",
    "\n",
    "# Initialize HoneyHive tracer\n",
    "HONEYHIVE_API_KEY = os.environ.get(\"HONEYHIVE_API_KEY\", \"your honeyhive api key\")\n",
    "HONEYHIVE_PROJECT = os.environ.get(\"HONEYHIVE_PROJECT\", \"your honeyhive project name\")\n",
    "\n",
    "HoneyHiveTracer.init(\n",
    "    api_key=HONEYHIVE_API_KEY,\n",
    "    project=HONEYHIVE_PROJECT,\n",
    "    source=\"dev\",\n",
    "    session_name=\"lancedb_rag_session\",\n",
    ")\n",
    "\n",
    "# Set OpenAI API key\n",
    "OPENAI_API_KEY = os.environ.get(\"OPENAI_API_KEY\", \"your openai api key\")\n",
    "openai.api_key = OPENAI_API_KEY"
   ],
   "metadata": {
    "id": "e84SnBXG_0gv"
   },
   "execution_count": null,
   "outputs": []
  },
  {
   "cell_type": "markdown",
   "source": [
    "## Step 2: Define Document Class\n",
    "Create a simple document class to hold text chunks:"
   ],
   "metadata": {
    "id": "Aokipi7QCh7d"
   }
  },
  {
   "cell_type": "code",
   "source": [
    "class Document:\n",
    "    \"\"\"Simple document class to hold text chunks.\"\"\"\n",
    "\n",
    "    def __init__(self, text: str, metadata: Dict[str, Any] = None):\n",
    "        self.text = text\n",
    "        self.metadata = metadata or {}"
   ],
   "metadata": {
    "id": "_LLO5eEEBrjR"
   },
   "execution_count": null,
   "outputs": []
  },
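  {
   "cell_type": "markdown",
   "source": [
    "As a quick illustration (not part of the pipeline itself), a `Document` simply pairs its text with an optional metadata dict:"
   ],
   "metadata": {}
  },
  {
   "cell_type": "code",
   "source": [
    "# Illustrative only: construct a Document and inspect its fields\n",
    "sample = Document(\"LanceDB is an embedded vector database.\", metadata={\"source\": \"inline\"})\n",
    "print(sample.text)\n",
    "print(sample.metadata)  # defaults to {} when no metadata is passed"
   ],
   "metadata": {},
   "execution_count": null,
   "outputs": []
  },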
  {
   "cell_type": "markdown",
   "source": [
    "## Step 3: Load and Process Documents with Tracing\n",
    "Create functions to load and chunk documents with HoneyHive tracing:"
   ],
   "metadata": {
    "id": "IyG9qUYtDIa_"
   }
  },
  {
   "cell_type": "code",
   "source": [
    "@trace\n",
    "def load_documents(file_path: str) -> List[Document]:\n",
    "    \"\"\"\n",
    "    Load documents from a text file.\n",
    "    Each line is treated as a separate document.\n",
    "    \"\"\"\n",
    "    logger.info(f\"Loading documents from {file_path}\")\n",
    "    documents = []\n",
    "\n",
    "    try:\n",
    "        with open(file_path, \"r\") as f:\n",
    "            lines = f.readlines()\n",
    "\n",
    "        for i, line in enumerate(lines):\n",
    "            if line.strip():  # Skip empty lines\n",
    "                doc = Document(\n",
    "                    text=line.strip(), metadata={\"source\": file_path, \"line_number\": i}\n",
    "                )\n",
    "                documents.append(doc)\n",
    "\n",
    "        logger.info(f\"Loaded {len(documents)} documents\")\n",
    "        return documents\n",
    "    except Exception as e:\n",
    "        logger.error(f\"Error loading documents: {e}\")\n",
    "        raise\n",
    "\n",
    "\n",
    "@trace\n",
    "def chunk_documents(documents: List[Document], chunk_size: int = 1000) -> List[str]:\n",
    "    \"\"\"\n",
    "    Split documents into smaller chunks.\n",
    "    \"\"\"\n",
    "    logger.info(f\"Chunking {len(documents)} documents with chunk size {chunk_size}\")\n",
    "    chunks = []\n",
    "\n",
    "    for doc in documents:\n",
    "        text = doc.text\n",
    "        # Simple chunking by character count\n",
    "        if len(text) <= chunk_size:\n",
    "            chunks.append(text)\n",
    "        else:\n",
    "            # Split into chunks of approximately chunk_size characters\n",
    "            for i in range(0, len(text), chunk_size):\n",
    "                chunk = text[i : i + chunk_size]\n",
    "                chunks.append(chunk)\n",
    "\n",
    "    logger.info(f\"Created {len(chunks)} chunks\")\n",
    "    return chunks"
   ],
   "metadata": {
    "id": "fAwdSj6hBu3R"
   },
   "execution_count": null,
   "outputs": []
  },
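  {
   "cell_type": "markdown",
   "source": [
    "A quick sanity check (illustrative, using a synthetic document) shows how the character-based chunking behaves:"
   ],
   "metadata": {}
  },
  {
   "cell_type": "code",
   "source": [
    "# Illustrative sanity check: a synthetic 2,500-character document\n",
    "demo_chunks = chunk_documents([Document(\"x\" * 2500)], chunk_size=1000)\n",
    "print(len(demo_chunks))               # 3 chunks\n",
    "print([len(c) for c in demo_chunks])  # [1000, 1000, 500]"
   ],
   "metadata": {},
   "execution_count": null,
   "outputs": []
  },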
  {
   "cell_type": "markdown",
   "source": [
    "## Step 4: Create LanceDB Table with Tracing\n",
    "Set up a LanceDB table with embeddings:"
   ],
   "metadata": {
    "id": "0DoXL6HUDQHp"
   }
  },
  {
   "cell_type": "code",
   "source": [
    "@trace\n",
    "def create_lancedb_table(chunks: List[str], table_name: str = \"docs\"):\n",
    "    \"\"\"\n",
    "    Create a LanceDB table with embeddings.\n",
    "    \"\"\"\n",
    "    logger.info(f\"Creating LanceDB table '{table_name}' with {len(chunks)} chunks\")\n",
    "\n",
    "    # Connect to LanceDB\n",
    "    db = lancedb.connect(\"/tmp/lancedb\")\n",
    "\n",
    "    # Get embedding model\n",
    "    model = (\n",
    "        get_registry()\n",
    "        .get(\"sentence-transformers\")\n",
    "        .create(name=\"BAAI/bge-small-en-v1.5\", device=\"cpu\")\n",
    "    )\n",
    "\n",
    "    # Define schema\n",
    "    class Docs(LanceModel):\n",
    "        text: str = model.SourceField()\n",
    "        vector: Vector(model.ndims()) = model.VectorField()\n",
    "\n",
    "    # Create table\n",
    "    df = pd.DataFrame({\"text\": chunks})\n",
    "\n",
    "    # Check if table exists and drop if it does\n",
    "    if table_name in db.table_names():\n",
    "        db.drop_table(table_name)\n",
    "\n",
    "    # Create new table\n",
    "    table = db.create_table(table_name, schema=Docs)\n",
    "\n",
    "    # Add data\n",
    "    table.add(data=df)\n",
    "\n",
    "    logger.info(f\"Created table '{table_name}' with {len(chunks)} rows\")\n",
    "    return table"
   ],
   "metadata": {
    "id": "rzK_KUIWByJQ"
   },
   "execution_count": null,
   "outputs": []
  },
  {
   "cell_type": "markdown",
   "source": [
    "This function creates a LanceDB table and adds document chunks with embeddings. The `@trace` decorator logs the embedding model used and the table-creation process.\n",
    "\n",
    "## Step 5: Retrieve Documents with Tracing\n",
    "Create a function to retrieve relevant documents from LanceDB:"
   ],
   "metadata": {
    "id": "3BlQojPHDW01"
   }
  },
  {
   "cell_type": "code",
   "source": [
    "@trace\n",
    "def retrieve_documents(query: str, table_name: str = \"docs\", limit: int = 3):\n",
    "    \"\"\"\n",
    "    Retrieve relevant documents from LanceDB.\n",
    "    \"\"\"\n",
    "    logger.info(f\"Retrieving documents for query: '{query}'\")\n",
    "\n",
    "    # Connect to LanceDB\n",
    "    db = lancedb.connect(\"/tmp/lancedb\")\n",
    "\n",
    "    # Get table\n",
    "    table = db.open_table(table_name)\n",
    "\n",
    "    # Search\n",
    "    results = table.search(query).limit(limit).to_list()\n",
    "\n",
    "    logger.info(f\"Retrieved {len(results)} documents\")\n",
    "    return results"
   ],
   "metadata": {
    "id": "fHEM4WwQB2Vl"
   },
   "execution_count": null,
   "outputs": []
  },
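  {
   "cell_type": "markdown",
   "source": [
    "Each result from `to_list()` is a plain dict. As an illustration (assuming the `docs` table from Step 4 has already been created), you can inspect the fields, which typically include the stored `text` and a `_distance` score:"
   ],
   "metadata": {}
  },
  {
   "cell_type": "code",
   "source": [
    "# Illustrative: inspect the shape of the search results\n",
    "# (assumes create_lancedb_table() has been run for the 'docs' table)\n",
    "hits = retrieve_documents(\"What did the author work on?\", limit=2)\n",
    "for hit in hits:\n",
    "    print(hit.get(\"_distance\"), hit[\"text\"][:80])"
   ],
   "metadata": {},
   "execution_count": null,
   "outputs": []
  },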
  {
   "cell_type": "markdown",
   "source": [
    "The `@trace` decorator logs the retrieval process, including the query and the number of results returned.\n",
    "\n",
    "## Step 6: Generate Response with Tracing\n",
    "Create a function to generate a response using OpenAI with tracing:"
   ],
   "metadata": {
    "id": "t6wlOwaMDdWu"
   }
  },
  {
   "cell_type": "code",
   "source": [
    "@trace\n",
    "def generate_answer(query: str, context: List[Dict[str, Any]]):\n",
    "    \"\"\"\n",
    "    Generate an answer using OpenAI's API.\n",
    "    \"\"\"\n",
    "    logger.info(f\"Generating answer for query: '{query}'\")\n",
    "\n",
    "    # Extract text from context\n",
    "    context_text = \"\\n\\n\".join([item[\"text\"] for item in context])\n",
    "\n",
    "    # Create prompt\n",
    "    prompt = f\"\"\"\n",
    "    Answer the following question based on the provided context:\n",
    "\n",
    "    Context:\n",
    "    {context_text}\n",
    "\n",
    "    Question: {query}\n",
    "\n",
    "    Answer:\n",
    "    \"\"\"\n",
    "\n",
    "    # Call OpenAI API\n",
    "    response = openai.chat.completions.create(\n",
    "        model=\"gpt-3.5-turbo\",\n",
    "        messages=[\n",
    "            {\n",
    "                \"role\": \"system\",\n",
    "                \"content\": \"You are a helpful assistant that answers questions based on the provided context.\",\n",
    "            },\n",
    "            {\"role\": \"user\", \"content\": prompt},\n",
    "        ],\n",
    "        temperature=0.3,\n",
    "        max_tokens=500,\n",
    "    )\n",
    "\n",
    "    answer = response.choices[0].message.content\n",
    "    logger.info(f\"Generated answer: '{answer[:100]}...'\")\n",
    "\n",
    "    return answer"
   ],
   "metadata": {
    "id": "aH9kvI0dB5Pd"
   },
   "execution_count": null,
   "outputs": []
  },
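  {
   "cell_type": "markdown",
   "source": [
    "To see the prompt format without spending API calls, this sketch rebuilds it from a toy context (purely illustrative; no OpenAI request is made):"
   ],
   "metadata": {}
  },
  {
   "cell_type": "code",
   "source": [
    "# Illustrative: preview the prompt format with a toy context (no API call)\n",
    "toy_context = [{\"text\": \"The author studied painting in Florence.\"}]\n",
    "toy_query = \"Where did the author study painting?\"\n",
    "context_text = \"\\n\\n\".join(item[\"text\"] for item in toy_context)\n",
    "toy_prompt = f\"Context:\\n{context_text}\\n\\nQuestion: {toy_query}\\n\\nAnswer:\"\n",
    "print(toy_prompt)"
   ],
   "metadata": {},
   "execution_count": null,
   "outputs": []
  },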
  {
   "cell_type": "markdown",
   "source": [
    "## Step 7: Complete RAG Pipeline with Tracing\n",
    "Create a function that combines all the previous steps into a complete RAG pipeline:"
   ],
   "metadata": {
    "id": "UI_F6RdHDla7"
   }
  },
  {
   "cell_type": "code",
   "source": [
    "@trace\n",
    "def rag_pipeline(query: str, data_path: str):\n",
    "    \"\"\"\n",
    "    End-to-end RAG pipeline.\n",
    "    \"\"\"\n",
    "    logger.info(f\"Starting RAG pipeline for query: '{query}'\")\n",
    "\n",
    "    # 1. Load documents\n",
    "    documents = load_documents(data_path)\n",
    "\n",
    "    # 2. Chunk documents\n",
    "    chunks = chunk_documents(documents)\n",
    "\n",
    "    # 3. Create vector store\n",
    "    table = create_lancedb_table(chunks)\n",
    "\n",
    "    # 4. Retrieve relevant documents\n",
    "    results = retrieve_documents(query)\n",
    "\n",
    "    # 5. Generate answer\n",
    "    answer = generate_answer(query, results)\n",
    "\n",
    "    logger.info(\"RAG pipeline completed successfully\")\n",
    "    return answer"
   ],
   "metadata": {
    "id": "iUL2A0cjB5-j"
   },
   "execution_count": null,
   "outputs": []
  },
  {
   "cell_type": "markdown",
   "source": [
    "The `@trace` decorator logs the entire RAG pipeline, creating a parent span that contains the child spans from the individual functions.\n",
    "\n",
    "## Step 8: Run the Example\n",
    "Finally, create a main function to run the example:\n"
   ],
   "metadata": {
    "id": "l74mI02nDuKs"
   }
  },
  {
   "cell_type": "markdown",
   "source": [
    "The sample dataset is the LlamaIndex `PaulGrahamEssayDataset`, based on an essay by Paul Graham."
   ],
   "metadata": {
    "id": "dWTEvhEjLVrY"
   }
  },
  {
   "cell_type": "code",
   "source": [
    "!pip install llama-index-cli -q\n",
    "!llamaindex-cli download-llamadataset PaulGrahamEssayDataset --download-dir ./data"
   ],
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "QVkbZDKSHZcP",
    "outputId": "13a8e3ec-7693-4906-d50e-38c76e50843c"
   },
   "execution_count": 12,
   "outputs": [
    {
     "output_type": "stream",
     "name": "stdout",
     "text": [
      "100% 1/1 [00:00<00:00,  2.33it/s]\n",
      "Successfully downloaded PaulGrahamEssayDataset to ./data\n"
     ]
    }
   ]
  },
  {
   "cell_type": "code",
   "source": [
    "def main():\n",
    "    \"\"\"\n",
    "    Main function to demonstrate the RAG pipeline.\n",
    "    \"\"\"\n",
    "\n",
    "    # Sample data\n",
    "    data_path = \"/content/data/source_files/source.txt\"\n",
    "\n",
    "    # Sample query\n",
    "    query = \"How did the author's views on artificial intelligence evolve over time, and what were the key moments that led to their disillusionment with early AI approaches?\"\n",
    "\n",
    "    # Run RAG pipeline\n",
    "    answer = rag_pipeline(query, data_path)\n",
    "\n",
    "    print(\"\\n=== Final Answer ===\")\n",
    "    print(answer)\n",
    "\n",
    "    # End HoneyHive tracing session\n",
    "    HoneyHiveTracer.init(\n",
    "        api_key=HONEYHIVE_API_KEY,\n",
    "        project=HONEYHIVE_PROJECT,\n",
    "        source=\"dev\",\n",
    "        session_name=\"new_session\",  # This ends the previous session and starts a new one\n",
    "    )\n",
    "\n",
    "\n",
    "if __name__ == \"__main__\":\n",
    "    main()"
   ],
   "metadata": {
    "id": "QxkWDniLB94h"
   },
   "execution_count": null,
   "outputs": []
  }
 ]
}