{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "c919f307-07b1-41bd-bc5d-51edd8677983",
   "metadata": {},
   "source": [
    "# Building Data Ingestion from Scratch\n",
    "\n",
    "In this tutorial, we show you how to build a data ingestion pipeline into a vector database.\n",
    "\n",
    "We use Pinecone as the vector database.\n",
    "\n",
    "We will show how to do the following:\n",
    "1. How to load in documents.\n",
    "2. How to use a text splitter to split documents.\n",
    "3. How to **manually** construct nodes from each text chunk.\n",
    "4. [Optional] Add metadata to each Node.\n",
    "5. How to generate embeddings for each text chunk.\n",
    "6. How to insert into a vector database."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "bcb486eb-c0b8-40e2-9038-da97aef63139",
   "metadata": {},
   "source": [
    "## Setup\n",
    "\n",
    "We build an empty Pinecone Index, and define the necessary LlamaIndex wrappers/abstractions so that we can start loading data into Pinecone. "
   ]
  },
  {
   "cell_type": "markdown",
   "id": "22fb9e0a-566b-4f34-b9cf-72193cb51adb",
   "metadata": {},
   "source": [
    "#### Build Pinecone Index"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "id": "cc739b4d-491f-406d-a0e6-f6b1e8c126dc",
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "/Users/jerryliu/Programming/gpt_index/.venv/lib/python3.10/site-packages/pinecone/index.py:4: TqdmExperimentalWarning: Using `tqdm.autonotebook.tqdm` in notebook mode. Use `tqdm.tqdm` instead to force console mode (e.g. in jupyter console)\n",
      "  from tqdm.autonotebook import tqdm\n"
     ]
    }
   ],
   "source": [
    "import pinecone\n",
    "import os\n",
    "\n",
    "api_key = os.environ[\"PINECONE_API_KEY\"]\n",
    "pinecone.init(api_key=api_key, environment=\"us-west1-gcp\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "id": "20ba2f76-29d8-4dc5-b25c-64dcfe9e8d23",
   "metadata": {},
   "outputs": [],
   "source": [
    "# dimensions are for text-embedding-ada-002\n",
    "pinecone.create_index(\"quickstart\", dimension=1536, metric=\"euclidean\", pod_type=\"p1\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "id": "45f9a999-dac2-4bc8-8133-ccc851b76a6d",
   "metadata": {},
   "outputs": [],
   "source": [
    "pinecone_index = pinecone.Index(\"quickstart\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "id": "3216f9e2-946d-4b43-8b8c-acf6788633a5",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "{}"
      ]
     },
     "execution_count": 8,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# [Optional] drop contents in index\n",
    "pinecone_index.delete(deleteAll=True)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "89246384-983c-4e2c-ac05-ffdc1d54a594",
   "metadata": {},
   "source": [
    "#### Create PineconeVectorStore\n",
    "\n",
    "Simple wrapper abstraction to use in LlamaIndex. Wrap in StorageContext so we can easily load in Nodes. "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "id": "775aabb2-3dd2-44b1-b6b9-2f7326409e10",
   "metadata": {},
   "outputs": [],
   "source": [
    "from llama_index.vector_stores import PineconeVectorStore"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "id": "15f0aa46-9f5b-42c1-9374-db94781363f0",
   "metadata": {},
   "outputs": [],
   "source": [
    "vector_store = PineconeVectorStore(pinecone_index=pinecone_index)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "be9437a0-3d52-4586-8217-43944971a2cc",
   "metadata": {},
   "source": [
    "## Build an Ingestion Pipeline from Scratch\n",
    "\n",
    "We show how to build an ingestion pipeline as mentioned in the introduction.\n",
    "\n",
    "Note that steps (2) and (3) can be handled via our `NodeParser` abstractions, which handle splitting and node creation.\n",
    "\n",
    "For the purposes of this tutorial, we show you how to create these objects manually."
   ]
  },
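  {
   "cell_type": "markdown",
   "id": "3f1c2d4e-5a6b-4c7d-8e9f-0a1b2c3d4e5f",
   "metadata": {},
   "source": [
    "For reference, steps (2) and (3) can be sketched in a few lines with the `NodeParser` abstraction. This is a sketch only (assuming the `SimpleNodeParser.from_defaults` constructor in this version of LlamaIndex), and we don't run it here, since the point of this tutorial is to perform each step manually:\n",
    "\n",
    "```python\n",
    "from llama_index.node_parser import SimpleNodeParser\n",
    "\n",
    "# splits documents into chunks and constructs TextNodes (with metadata) in one step\n",
    "node_parser = SimpleNodeParser.from_defaults(chunk_size=1024)\n",
    "nodes = node_parser.get_nodes_from_documents(documents)\n",
    "```"
   ]
  },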
  {
   "cell_type": "markdown",
   "id": "6d1c9630-2a6b-4656-b272-de1b869c8977",
   "metadata": {},
   "source": [
    "### 1. Load Data"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "48739cfc-c05a-420a-8c78-280892f8d7a0",
   "metadata": {},
   "outputs": [],
   "source": [
    "!mkdir data\n",
    "!wget --user-agent \"Mozilla\" \"https://arxiv.org/pdf/2307.09288.pdf\" -O \"data/llama2.pdf\""
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "id": "079666c5-0685-413d-a765-17f71ae89506",
   "metadata": {},
   "outputs": [],
   "source": [
    "from pathlib import Path\n",
    "from llama_hub.file.pymu_pdf.base import PyMuPDFReader"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "id": "4eee7692-2188-4552-9f2e-cb90ac6b7678",
   "metadata": {},
   "outputs": [],
   "source": [
    "loader = PyMuPDFReader()\n",
    "documents = loader.load(file_path=\"./data/llama2.pdf\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "74c573db-1863-45c3-9049-8bad535e6e35",
   "metadata": {},
   "source": [
    "### 2. Use a Text Splitter to Split Documents\n",
    "\n",
    "Here we import our `SentenceSplitter` to split document texts into smaller chunks, while preserving paragraphs/sentences as much as possible."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "id": "9e175007-84d5-406e-bf5f-6ecacfbfd152",
   "metadata": {},
   "outputs": [],
   "source": [
    "from llama_index.text_splitter import SentenceSplitter"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "id": "0dbccb26-ea2a-48c9-adb4-1ebe88adaa1c",
   "metadata": {},
   "outputs": [],
   "source": [
    "text_splitter = SentenceSplitter(\n",
    "    chunk_size=1024,\n",
    "    # separator=\" \",\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "id": "7a9bed96-adfa-40c9-92bd-9dba68d58730",
   "metadata": {},
   "outputs": [],
   "source": [
    "text_chunks = []\n",
    "# maintain relationship with source doc index, to help inject doc metadata in (3)\n",
    "doc_idxs = []\n",
    "for doc_idx, doc in enumerate(documents):\n",
    "    cur_text_chunks = text_splitter.split_text(doc.text)\n",
    "    text_chunks.extend(cur_text_chunks)\n",
    "    doc_idxs.extend([doc_idx] * len(cur_text_chunks))"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "354157d6-b436-4f0a-bf6e-f0a197e54c60",
   "metadata": {},
   "source": [
    "### 3. Manually Construct Nodes from Text Chunks\n",
    "\n",
    "We convert each chunk into a `TextNode` object, a low-level data abstraction in LlamaIndex that stores content but also allows defining metadata + relationships with other Nodes.\n",
    "\n",
    "We inject metadata from the document into each node.\n",
    "\n",
    "This essentially replicates logic in our `SimpleNodeParser`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 19,
   "id": "33b93044-3eb4-4c77-bc40-be53dffd3749",
   "metadata": {},
   "outputs": [],
   "source": [
    "from llama_index.schema import TextNode"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 27,
   "id": "adbfcb3f-5554-4594-ae80-7236e28485aa",
   "metadata": {},
   "outputs": [],
   "source": [
    "nodes = []\n",
    "for idx, text_chunk in enumerate(text_chunks):\n",
    "    node = TextNode(\n",
    "        text=text_chunk,\n",
    "    )\n",
    "    src_doc = documents[doc_idxs[idx]]\n",
    "    node.metadata = src_doc.metadata\n",
    "    nodes.append(node)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 28,
   "id": "257bb2e3-608a-4542-ba29-f29b59771a3f",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "total_pages: 77\n",
      "file_path: ./data/llama2.pdf\n",
      "source: 1\n",
      "\n",
      "Llama 2: Open Foundation and Fine-Tuned Chat Models\n",
      "Hugo Touvron∗\n",
      "Louis Martin†\n",
      "Kevin Stone†\n",
      "Peter Albert Amjad Almahairi Yasmine Babaei Nikolay Bashlykov Soumya Batra\n",
      "Prajjwal Bhargava Shruti Bhosale Dan Bikel Lukas Blecher Cristian Canton Ferrer Moya Chen\n",
      "Guillem Cucurull David Esiobu Jude Fernandes Jeremy Fu Wenyin Fu Brian Fuller\n",
      "Cynthia Gao Vedanuj Goswami Naman Goyal Anthony Hartshorn Saghar Hosseini Rui Hou\n",
      "Hakan Inan Marcin Kardas Viktor Kerkez Madian Khabsa Isabel Kloumann Artem Korenev\n",
      "Punit Singh Koura Marie-Anne Lachaux Thibaut Lavril Jenya Lee Diana Liskovich\n",
      "Yinghai Lu Yuning Mao Xavier Martinet Todor Mihaylov Pushkar Mishra\n",
      "Igor Molybog Yixin Nie Andrew Poulton Jeremy Reizenstein Rashi Rungta Kalyan Saladi\n",
      "Alan Schelten Ruan Silva Eric Michael Smith Ranjan Subramanian Xiaoqing Ellen Tan Binh Tang\n",
      "Ross Taylor Adina Williams Jian Xiang Kuan Puxin Xu Zheng Yan Iliyan Zarov Yuchen Zhang\n",
      "Angela Fan Melanie Kambadur Sharan Narang Aurelien Rodriguez Robert Stojnic\n",
      "Sergey Edunov\n",
      "Thomas Scialom∗\n",
      "GenAI, Meta\n",
      "Abstract\n",
      "In this work, we develop and release Llama 2, a collection of pretrained and fine-tuned\n",
      "large language models (LLMs) ranging in scale from 7 billion to 70 billion parameters.\n",
      "Our fine-tuned LLMs, called Llama 2-Chat, are optimized for dialogue use cases. Our\n",
      "models outperform open-source chat models on most benchmarks we tested, and based on\n",
      "our human evaluations for helpfulness and safety, may be a suitable substitute for closed-\n",
      "source models. We provide a detailed description of our approach to fine-tuning and safety\n",
      "improvements of Llama 2-Chat in order to enable the community to build on our work and\n",
      "contribute to the responsible development of LLMs.\n",
      "∗Equal contribution, corresponding authors: {tscialom, htouvron}@meta.com\n",
      "†Second author\n",
      "Contributions for all the authors can be found in Section A.1.\n",
      "arXiv:2307.09288v2  [cs.CL]  19 Jul 2023\n"
     ]
    }
   ],
   "source": [
    "# print a sample node\n",
    "print(nodes[0].get_content(metadata_mode=\"all\"))"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "fac468f5-870c-4486-b576-2aa6d9eaf322",
   "metadata": {},
   "source": [
    "### [Optional] 4. Extract Metadata from each Node\n",
    "\n",
    "We extract metadata from each Node using our Metadata extractors.\n",
    "\n",
    "This will add more metadata to each Node."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 29,
   "id": "9c369188-9cc9-4550-924e-b29d212ad057",
   "metadata": {},
   "outputs": [],
   "source": [
    "from llama_index.node_parser.extractors import (\n",
    "    MetadataExtractor,\n",
    "    QuestionsAnsweredExtractor,\n",
    "    TitleExtractor,\n",
    ")\n",
    "from llama_index.llms import OpenAI\n",
    "\n",
    "llm = OpenAI(model=\"gpt-3.5-turbo\")\n",
    "\n",
    "metadata_extractor = MetadataExtractor(\n",
    "    extractors=[\n",
    "        TitleExtractor(nodes=5, llm=llm),\n",
    "        QuestionsAnsweredExtractor(questions=3, llm=llm),\n",
    "    ],\n",
    "    in_place=False,\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f5501ffc-9bbb-4b48-9181-4e4e371e8f41",
   "metadata": {},
   "outputs": [],
   "source": [
    "nodes = metadata_extractor.process_nodes(nodes)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "9d52522c-ffee-49d1-8651-658d70248053",
   "metadata": {},
   "source": [
    "### 5. Generate Embeddings for each Node\n",
    "\n",
    "Generate document embeddings for each Node using our OpenAI embedding model (`text-embedding-ada-002`).\n",
    "\n",
    "Store these on the `embedding` property on each Node."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 30,
   "id": "6e071e36-e609-4a0c-a478-e8cfbe751cff",
   "metadata": {},
   "outputs": [],
   "source": [
    "from llama_index.embeddings import OpenAIEmbedding\n",
    "\n",
    "embed_model = OpenAIEmbedding()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 33,
   "id": "2047ca46-729f-4c5a-a8d7-3bc860604333",
   "metadata": {},
   "outputs": [],
   "source": [
    "for node in nodes:\n",
    "    node_embedding = embed_model.get_text_embedding(\n",
    "        node.get_content(metadata_mode=\"all\")\n",
    "    )\n",
    "    node.embedding = node_embedding"
   ]
  },
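  {
   "cell_type": "markdown",
   "id": "7b8c9d0e-1f2a-4b3c-8d4e-5f6a7b8c9d0e",
   "metadata": {},
   "source": [
    "The loop above issues one embedding request per chunk. As a sketch, the embedding model may also expose a batched helper (`get_text_embedding_batch` — an assumption about this version of LlamaIndex) that cuts down on round trips:\n",
    "\n",
    "```python\n",
    "texts = [node.get_content(metadata_mode=\"all\") for node in nodes]\n",
    "embeddings = embed_model.get_text_embedding_batch(texts)\n",
    "for node, embedding in zip(nodes, embeddings):\n",
    "    node.embedding = embedding\n",
    "```"
   ]
  },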
  {
   "cell_type": "markdown",
   "id": "11f78014-dcca-43f5-95cb-9cfb5140b30e",
   "metadata": {},
   "source": [
    "### 6. Load Nodes into a Vector Store\n",
    "\n",
    "We now insert these nodes into our `PineconeVectorStore`.\n",
    "\n",
    "**NOTE**: We skip the VectorStoreIndex abstraction, which is a higher-level abstraction that handles ingestion as well. We use `VectorStoreIndex` in the next section to fast-trak retrieval/querying."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "fe34fbe4-3396-402e-8599-4b42013c3016",
   "metadata": {},
   "outputs": [],
   "source": [
    "vector_store.add(nodes)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "74e7f8bd-d92a-40ad-8d9b-a18c04ddfca9",
   "metadata": {},
   "source": [
    "## Retrieve and Query from the Vector Store\n",
    "\n",
    "Now that our ingestion is complete, we can retrieve/query this vector store.\n",
    "\n",
    "**NOTE**: We can use our high-level `VectorStoreIndex` abstraction here. See the next section to see how to define retrieval at a lower-level!"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 38,
   "id": "be6a4fe1-2665-43e6-a872-8e631e31b0fd",
   "metadata": {},
   "outputs": [],
   "source": [
    "from llama_index import VectorStoreIndex\n",
    "from llama_index.storage import StorageContext"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 39,
   "id": "0e9d2495-d4f7-469a-9cea-a5cfc401c085",
   "metadata": {},
   "outputs": [],
   "source": [
    "index = VectorStoreIndex.from_vector_store(vector_store)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 40,
   "id": "5c89e1c1-8ed1-45f5-b2a4-7c3382195693",
   "metadata": {},
   "outputs": [],
   "source": [
    "query_engine = index.as_query_engine()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 41,
   "id": "cd6e0ddb-97c9-4f42-8843-a36a29ba3f17",
   "metadata": {},
   "outputs": [],
   "source": [
    "query_str = \"Can you tell me about the key concepts for safety finetuning\""
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 42,
   "id": "950cae37-7bad-44a3-be51-4154a8630818",
   "metadata": {},
   "outputs": [],
   "source": [
    "response = query_engine.query(query_str)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 43,
   "id": "c0b309bb-ca5a-4b15-948c-687038361c91",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "The key concepts for safety fine-tuning include supervised safety fine-tuning, safety RLHF (Reinforcement Learning from Human Feedback), and safety context distillation. Supervised safety fine-tuning involves gathering adversarial prompts and safe demonstrations to align the model with safety guidelines. Safety RLHF integrates safety into the RLHF pipeline by training a safety-specific reward model and gathering challenging adversarial prompts for fine-tuning. Safety context distillation refines the RLHF pipeline by generating safer model responses with a safety preprompt and fine-tuning the model on these responses without the preprompt. These concepts aim to mitigate safety risks and improve the model's alignment with safety guidelines.\n"
     ]
    }
   ],
   "source": [
    "print(str(response))"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "llama_index_v2",
   "language": "python",
   "name": "llama_index_v2"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.10.10"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
