{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "27a81f4e-1d04-49df-9368-490c0ad8cc8b",
   "metadata": {},
   "source": [
    "# 吴恩达AI教程-Building Agentic RAG with LlamaIndex\n",
    "\n",
    "吴恩达《使用LlamaIndex构建主动式RAG|Building Agentic RAG with LlamaIndex》\n",
    "\n",
    "https://www.bilibili.com/video/BV18m421u7zD?p=1"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "719ced9f-cc30-4696-9dfa-5c9a4fbdc1f7",
   "metadata": {},
   "source": [
    "## Tutorial-1： simple router"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "8344db91-188d-490c-9e1b-9c7b038e6473",
   "metadata": {},
   "source": [
    "## 设置LLM本地环境\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 61,
   "id": "a322440e-427c-4d5c-aee9-189e88163d5c",
   "metadata": {},
   "outputs": [],
   "source": [
    "import nest_asyncio\n",
    "nest_asyncio.apply()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 62,
   "id": "3e056987-9e20-46f1-a1d2-486ab7f447fd",
   "metadata": {},
   "outputs": [],
   "source": [
    "from llama_index.core import SimpleDirectoryReader\n",
    "\n",
    "documents = SimpleDirectoryReader(input_files=[\"../datasets/LlamaParse-PDF-kg.pdf\"]).load_data()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 63,
   "id": "d52e2807-5241-4ef4-89de-b57796493db1",
   "metadata": {},
   "outputs": [],
   "source": [
    "from llama_index.core.node_parser import SentenceSplitter\n",
    "\n",
    "splitter = SentenceSplitter(chunk_size=1024)\n",
    "nodes = splitter.get_nodes_from_documents(documents)"
   ]
  },
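  {
   "cell_type": "markdown",
   "id": "a1f3c9d2-0001-4a7e-9b21-6f0e5d3c8b11",
   "metadata": {},
   "source": [
    "A quick sanity check on the chunking (not part of the course; it just inspects the result of the splitter above). Note that `chunk_size` is measured in tokens, so character counts per chunk can exceed 1024:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "a1f3c9d2-0002-4a7e-9b21-6f0e5d3c8b12",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Sanity check: how many chunks were produced, and how large they are.\n",
    "# chunk_size=1024 counts tokens, so character lengths can be larger.\n",
    "print(len(nodes), \"chunks;\", max(len(n.text) for n in nodes), \"characters in the longest chunk\")"
   ]
  },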
  {
   "cell_type": "markdown",
   "id": "68dee395-e6ff-4575-9977-574aafbde684",
   "metadata": {},
   "source": [
    "## llamaIndex的大模型设置"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 64,
   "id": "48c10e35-6514-4129-a807-fd3dab030670",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "TextNode(id_='210805e9-cbd6-4d56-9119-a986661be1d3', embedding=None, metadata={'page_label': '1', 'file_name': 'LlamaParse-PDF-kg.pdf', 'file_path': '../datasets/LlamaParse-PDF-kg.pdf', 'file_type': 'application/pdf', 'file_size': 174069, 'creation_date': '2024-05-23', 'last_modified_date': '2024-05-23'}, excluded_embed_metadata_keys=['file_name', 'file_type', 'file_size', 'creation_date', 'last_modified_date', 'last_accessed_date'], excluded_llm_metadata_keys=['file_name', 'file_type', 'file_size', 'creation_date', 'last_modified_date', 'last_accessed_date'], relationships={<NodeRelationship.SOURCE: '1'>: RelatedNodeInfo(node_id='b5cc734b-09bb-4d87-af12-c0f601cc93b0', node_type=<ObjectType.DOCUMENT: '4'>, metadata={'page_label': '1', 'file_name': 'LlamaParse-PDF-kg.pdf', 'file_path': '../datasets/LlamaParse-PDF-kg.pdf', 'file_type': 'application/pdf', 'file_size': 174069, 'creation_date': '2024-05-23', 'last_modified_date': '2024-05-23'}, hash='0430602e795f5f98712bd3ea589b30d1dfecb63046ddbff3531985d0a66e0529')}, text='此 Python 笔记本提供了有关利用  LlamaParse 从 PDF 文档中提取信息并随后\\n将提取的内容存储到  Neo4j 图数据库中的综合指南。本教程在设计时考虑到了\\n实用性，适合对文档处理、信息提取和图形数据库技术感兴趣的开发人员、数\\n据科学家和技术爱好者。  \\n \\n该笔记本电脑的主要特点：  \\n1. 设置环境 ：逐步说明如何设置  Python 环境，包括安装必要的库和\\n工具，例如  LlamaParse 和 Neo4j 数据库驱动程序。  \\n2. PDF 文档处理 ：演示如何使用  LlamaParse 读取  PDF 文档，提取\\n相关信息（例如文本、表格和图像），并将这些信息转换为适合数据库\\n插入的结构化格式。  \\n3. 文档图模型 ：设计有效图模型的指南，该模型表示从  PDF 文档中\\n提取的关系和实体，确保查询和分析的最佳结构。  \\n4. 在 Neo4j  中存储提取的数据 ：详细的代码示例展示了如何从  \\nPython 连接到  Neo4j 数据库，根据提取的数据创建节点和关系，以及\\n执行  Cypher 查询来填充数据库。  \\n5. 生成和存储文本嵌入 ：使用过去创建的程序通过  OpenAI API 调用\\n生成文本嵌入，并将嵌入存储为  Neo4j 中的向量。  \\n6. 查询和分析数据 ：用于检索和分析存储数据的  Cypher 查询示例，\\n说明  Neo4j 如何发现隐藏在  PDF 内容中的见解和关系。  \\n7. 
结论：有关处理  PDF、设计图形模式和优化  Neo4j 查询的最佳实\\n践的提示，以及针对在此过程中遇到的潜在问题的常见故障排除建议。  \\n请注意，对于此示例，版本是必需的 llama_index >=0.10.4 。如果 pip install --\\nupgrade <package_name> 不起作用，您可以 pip uninstall <package_name> 再次使\\n用并安装所需的软件包。', start_char_idx=0, end_char_idx=837, text_template='{metadata_str}\\n\\n{content}', metadata_template='{key}: {value}', metadata_seperator='\\n')"
      ]
     },
     "execution_count": 64,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "nodes[0]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 65,
   "id": "004d30db-19fd-4af3-8c8a-1adcbcfbcfc5",
   "metadata": {},
   "outputs": [],
   "source": [
    "# from llama_index.core import Settings\n",
    "\n",
    "# Settings.embed_model = \"local:BAAI/bge-small-en-v1.5\"   # 本地模型需要加\"local:\"开头\n",
    "# Settings.chunk_size = 1024  # chunk_size大小\n",
    "\n",
    "# Settings.llm = llm\n",
    "\n",
    "from llama_index.embeddings.ollama import OllamaEmbedding\n",
    "from llama_index.llms.ollama import Ollama\n",
    "\n",
    "from llama_index.core import  VectorStoreIndex\n",
    "from llama_index.core import Settings\n",
    "\n",
    "EMBEDDING_MODEL  = \"mixedbread-ai/mxbai-embed-large-v1\"\n",
    "GENERATION_MODEL = \"mistral-7b-ins\"\n",
    "\n",
    "# llm = MistralAI(model=GENERATION_MODEL)\n",
    "\n",
    "llm = Ollama(model=GENERATION_MODEL)\n",
    "\n",
    "ollama_embedding = OllamaEmbedding(\n",
    "    model_name= GENERATION_MODEL,\n",
    "    base_url=\"http://localhost:11434\",\n",
    "    ollama_additional_kwargs={\"mirostat\": 0},\n",
    ")\n",
    "\n",
    "\n",
    "Settings.llm = llm\n",
    "Settings.embed_model = ollama_embedding\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "2fc72dd6-895f-4fa4-b4f4-9a2d2ab89afe",
   "metadata": {},
   "source": [
    "## Router"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 66,
   "id": "75871f53-a711-4dc0-a40e-98ebab7ae85c",
   "metadata": {},
   "outputs": [],
   "source": [
    "from llama_index.core import SummaryIndex\n",
    "\n",
    "summary_index = SummaryIndex(nodes)\n",
    "vector_index = VectorStoreIndex(nodes)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 67,
   "id": "567c2247-3f26-4aaa-8e4e-619f3db99b77",
   "metadata": {},
   "outputs": [],
   "source": [
    "summary_query_engine = summary_index.as_query_engine(\n",
    "    response_mode = \"tree_summarize\",\n",
    "    use_async=True,\n",
    ")\n",
    "\n",
    "vector_query_engine = vector_index.as_query_engine()\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 68,
   "id": "e654ec84-a74b-4acd-ae83-8dc60fa4aa02",
   "metadata": {},
   "outputs": [],
   "source": [
    "from llama_index.core.tools import QueryEngineTool\n",
    "\n",
    "\n",
    "summary_tool = QueryEngineTool.from_defaults(\n",
    "    query_engine=summary_query_engine,\n",
    "    description=(\n",
    "        \"装配工序有用的总结问题\"\n",
    "    ),\n",
    ")\n",
    "\n",
    "vector_tool = QueryEngineTool.from_defaults(\n",
    "    query_engine=vector_query_engine,\n",
    "    description=(\n",
    "        \"Useful for retrieving specific context from the 装配工序卡.\"\n",
    "    ),\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "4fff0606-8137-4635-be21-a1afd932a70f",
   "metadata": {},
   "source": [
    "## Selectors\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 69,
   "id": "d035a22c-2232-4f36-9193-a390980a9d34",
   "metadata": {},
   "outputs": [],
   "source": [
    "from llama_index.core.query_engine.router_query_engine import RouterQueryEngine\n",
    "from llama_index.core.selectors import LLMMultiSelector\n",
    "\n",
    "query_engine = RouterQueryEngine(\n",
    "    selector=LLMMultiSelector.from_defaults(),\n",
    "    query_engine_tools=[\n",
    "        summary_tool,\n",
    "        vector_tool,\n",
    "    ],\n",
    "    verbose=True\n",
    ")\n",
    "\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 71,
   "id": "b040156c-6c2e-4fcb-98b8-70ff74379258",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\u001b[1;3;38;5;200mSelecting query engine 0: The first summary mentions 'PDF文档处理' in the context of assembly processes, suggesting that it may discuss various methods or techniques for handling PDF documents within the context of these processes..\n",
      "\u001b[0m在该Python笔记本中，PDF文档处理方面展示了如何使用LlamaParse读取PDF文档，提取相关信息（例如文本、表格和图像），并将这些信息转换为适合数据库插入的结构化格式。\n"
     ]
    }
   ],
   "source": [
    "response = query_engine.query(\"PDF文档处理有哪些？\")\n",
    "print(str(response))"
   ]
  },
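  {
   "cell_type": "markdown",
   "id": "b7e2d4f1-0001-4c8a-8d33-2a9c1e6f7a21",
   "metadata": {},
   "source": [
    "The router's decision can be inspected after a query. A minimal sketch, assuming the cells above have run; `selector_result` is the metadata key `RouterQueryEngine` typically uses, but it may differ across llama-index versions:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b7e2d4f1-0002-4c8a-8d33-2a9c1e6f7a22",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Inspect which engine(s) the router selected and which sources it used.\n",
    "# NOTE: 'selector_result' is an assumption about RouterQueryEngine's metadata\n",
    "# layout and may vary between llama-index versions.\n",
    "print(response.metadata.get(\"selector_result\"))\n",
    "print(len(response.source_nodes), \"source nodes used\")"
   ]
  },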
  {
   "cell_type": "markdown",
   "id": "8bf90cc4-5389-4209-8022-5138c14e7f41",
   "metadata": {},
   "source": [
    "## 官网封装了一个接口\n",
    "\n",
    "```\n",
    "from utils import get_router_query_engine\n",
    "\n",
    "query_engine = get_router_query_engine(\"***.pdf\")\n",
    "response = query_engine.query(\"差速器总成装配第三工位工序有哪些？\")\n",
    "print(str(response))\n",
    "\n",
    "```"
   ]
  },
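  {
   "cell_type": "markdown",
   "id": "c9d5e8a3-0001-4b6f-9e44-3b8d2f7c1a31",
   "metadata": {},
   "source": [
    "`get_router_query_engine` is a course-provided helper whose source is not shown here. Below is a minimal sketch of what it plausibly does, assembled only from the steps in this notebook (load, split, index, wrap tools, route); the actual `utils.py` may differ:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "c9d5e8a3-0002-4b6f-9e44-3b8d2f7c1a32",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Hypothetical re-implementation of get_router_query_engine, built from the\n",
    "# cells above. The official utils.py may differ in details.\n",
    "from llama_index.core import SimpleDirectoryReader, SummaryIndex, VectorStoreIndex\n",
    "from llama_index.core.node_parser import SentenceSplitter\n",
    "from llama_index.core.query_engine.router_query_engine import RouterQueryEngine\n",
    "from llama_index.core.selectors import LLMSingleSelector\n",
    "from llama_index.core.tools import QueryEngineTool\n",
    "\n",
    "\n",
    "def get_router_query_engine(file_path: str) -> RouterQueryEngine:\n",
    "    # Load and chunk the document, as in the cells above.\n",
    "    documents = SimpleDirectoryReader(input_files=[file_path]).load_data()\n",
    "    nodes = SentenceSplitter(chunk_size=1024).get_nodes_from_documents(documents)\n",
    "\n",
    "    # Build one index per access pattern: full-document summaries vs. top-k retrieval.\n",
    "    summary_index = SummaryIndex(nodes)\n",
    "    vector_index = VectorStoreIndex(nodes)\n",
    "\n",
    "    summary_tool = QueryEngineTool.from_defaults(\n",
    "        query_engine=summary_index.as_query_engine(\n",
    "            response_mode=\"tree_summarize\", use_async=True\n",
    "        ),\n",
    "        description=\"Useful for summarization questions about the document.\",\n",
    "    )\n",
    "    vector_tool = QueryEngineTool.from_defaults(\n",
    "        query_engine=vector_index.as_query_engine(),\n",
    "        description=\"Useful for retrieving specific context from the document.\",\n",
    "    )\n",
    "\n",
    "    # Route each query to the better-suited tool.\n",
    "    return RouterQueryEngine(\n",
    "        selector=LLMSingleSelector.from_defaults(),\n",
    "        query_engine_tools=[summary_tool, vector_tool],\n",
    "        verbose=True,\n",
    "    )"
   ]
  },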
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "335017ea-ecf6-4587-8c25-eaabfd19d9c5",
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.10.13"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
