{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Hands-On RAG Project (Building an Enterprise Private Knowledge Base with LlamaIndex)\n",
    "\n",
    "Learning objectives and key topics for this lesson:\n",
    "\n",
    "1. Setting up the project's Python environment with Conda\n",
    "2. The SentenceTransformer model in depth\n",
    "3. Hands-on text vectorization with embeddings\n",
    "4. Hands-on with the InternLM2.5-1.8B/Qwen2.5-0.5B models\n",
    "5. Question-answering tests against the knowledge base and evaluation of the results\n",
    "6. Hands-on: building a knowledge base with LlamaIndex\n",
    "7. Hands-on: building a web app with Streamlit\n",
    "\n",
    "RAG pipeline diagram:\n",
    "![RAG](./assets/rag.png)\n",
    "\n",
    "\n",
    "Indexing pipeline diagrams:\n",
    "![RAG](./assets/20250322-122247.png)\n",
    "\n",
    "![RAG](./assets/20250322-122646.png)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "vscode": {
     "languageId": "bat"
    }
   },
   "source": [
    "## 1. Setting Up the Environment with Conda\n",
    "\n",
    "```bash\n",
    "## Create the virtual environment\n",
    "conda create -n llamaindex-rag python=3.10\n",
    "\n",
    "## Activate the virtual environment\n",
    "conda activate llamaindex-rag\n",
    "\n",
    "## Point pip at a China-based mirror\n",
    "pip config set global.index-url https://pypi.tuna.tsinghua.edu.cn/simple\n",
    "python -m pip install --upgrade pip\n",
    "\n",
    "## Install PyTorch; pick the command matching your CUDA version at https://pytorch.org/\n",
    "pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu124\n",
    "\n",
    "## Install the project dependencies\n",
    "pip install -r requirements.txt\n",
    "\n",
    "## Regenerate the dependency file from the current environment\n",
    "pip freeze > requirements.txt\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 2. Downloading the SentenceTransformer Model\n",
    "Before running RAG we need an embedding model to vectorize the text. Here we pick a Sentence Transformer model; run the code below to download it locally.\n",
    "\n",
    "- Embedding models: models that turn text into vectors\n",
    "\n",
    "**The embedding model directly affects recall**\n",
    "\n",
    "Currently the most accurate cross-lingual choice (e.g. when the corpus mixes Chinese and English):\n",
    "- Model:\n",
    "  - text-embedding-ada-002 (from OpenAI)\n",
    "    Drawback: you must buy an API key and pay per token, which gets expensive\n",
    "  \n",
    "To cut costs, we usually fine-tune an open-source model for our own use instead"
   ]
  },
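  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Recall hinges on how embedding vectors are compared, and the standard measure is cosine similarity. The sketch below illustrates it with made-up 3-dimensional vectors (a real sentence-transformer like the one downloaded below produces much higher-dimensional ones):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "def cosine_similarity(a, b):\n",
    "    # Cosine of the angle between two vectors; 1.0 means identical direction\n",
    "    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)\n",
    "    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))\n",
    "\n",
    "# Toy \"embeddings\" (illustration only)\n",
    "query_vec = [0.9, 0.1, 0.0]\n",
    "doc_vecs = {\n",
    "    \"doc_about_rag\":  [0.8, 0.2, 0.1],\n",
    "    \"doc_about_cats\": [0.0, 0.1, 0.9],\n",
    "}\n",
    "scores = {name: cosine_similarity(query_vec, v) for name, v in doc_vecs.items()}\n",
    "best = max(scores, key=scores.get)  # -> \"doc_about_rag\"\n",
    "```\n",
    "\n",
    "At query time, the question is embedded with the same model and the stored chunks are ranked by exactly this score."
   ]
  },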
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Downloading Model from https://www.modelscope.cn to directory: D:/AIModels/embedding_models\\Ceceliachenen\\paraphrase-multilingual-MiniLM-L12-v2\n"
     ]
    }
   ],
   "source": [
    "# Download the model\n",
    "from modelscope import snapshot_download\n",
    "# model_id: the model's ID on ModelScope\n",
    "# cache_dir: local directory to cache the model into\n",
    "# revision: optionally pin a specific model version\n",
    "model_dir = snapshot_download(\n",
    "    model_id=\"Ceceliachenen/paraphrase-multilingual-MiniLM-L12-v2\",\n",
    "    cache_dir='D:/AIModels/embedding_models'\n",
    "    )"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 3. Downloading the Text Generation Model\n",
    "- Text generation models\n",
    "    - Qwen2.5\n",
    "        - In practice the 7B variant is the usual choice\n",
    "        - Without quantization it needs roughly 24 GB of VRAM\n",
    "    - InternLM (书生·浦语, from Shanghai AI Laboratory)\n",
    "    "
   ]
  },
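  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The 24 GB figure can be sanity-checked with quick arithmetic: in fp16/bf16 each parameter takes 2 bytes, so the weights of a 7B model alone occupy about 14 GB, and the KV cache plus activations push the total toward 24 GB. A small sketch (the function name is just for illustration):\n",
    "\n",
    "```python\n",
    "def weight_memory_gb(n_params_billion, bytes_per_param=2):\n",
    "    # fp16/bf16 uses 2 bytes per parameter; 4-bit quantization ~0.5 bytes\n",
    "    return n_params_billion * 1e9 * bytes_per_param / 1e9\n",
    "\n",
    "fp16_gb = weight_memory_gb(7)        # 14.0 GB of weights alone\n",
    "int4_gb = weight_memory_gb(7, 0.5)   # 3.5 GB after 4-bit quantization\n",
    "```\n",
    "\n",
    "This is also why a 0.5B model is used for the demos here: it fits comfortably on a modest GPU."
   ]
  },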
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Downloading Model from https://www.modelscope.cn to directory: D:/AIModels/text_generation\\Qwen\\Qwen2.5-0.5B-Instruct\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "2025-03-22 22:18:10,299 - modelscope - INFO - Creating symbolic link [D:/AIModels/text_generation\\Qwen\\Qwen2.5-0.5B-Instruct].\n",
      "2025-03-22 22:18:10,300 - modelscope - WARNING - Failed to create symbolic link D:/AIModels/text_generation\\Qwen\\Qwen2.5-0.5B-Instruct for D:\\Qwen\\Qwen2___5-0___5B-Instruct.\n"
     ]
    }
   ],
   "source": [
    "# Download the model\n",
    "from modelscope import snapshot_download\n",
    "# model_id: the model's ID on ModelScope\n",
    "# cache_dir: local directory to cache the model into\n",
    "# revision: optionally pin a specific model version\n",
    "model_dir = snapshot_download(\n",
    "    model_id=\"Qwen/Qwen2.5-0.5B-Instruct\",\n",
    "    cache_dir='D:/AIModels/text_generation'\n",
    "    )"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 4. Testing Local Inference with the Model\n",
    "Run the code below to ask a test question. As the output shows, the model has no knowledge of XTuner, and its reply is rambling."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Sliding Window Attention is enabled but not implemented for `sdpa`; unexpected results may be encountered.\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "assistant: XTune 是一款开源的、基于Python的工具，主要用于分析和优化视频文件。XTune 可以帮助用户发现并修复视频中的错误或问题，从而提高视频的质量和播放速度。\n",
      "\n",
      "XTune 通过分析视频的编码器配置、音频处理、帧率等信息来识别出可能存在的问题，并提供相应的解决方案。它支持多种编码格式（如 H.264、VP8、HEVC 等），并且可以与多种视频编辑软件进行集成。\n",
      "\n",
      "使用 XTune 的方法是：首先下载 XTune 开源版本；然后安装所需的 Python 库；接着运行 XTune 工具；最后根据提示操作，完成对视频的分析和优化工作。\n",
      "\n",
      "XTune 不仅能帮助用户改善视频质量，还能提升用户体验，使得视频在播放时更加流畅和自然。\n"
     ]
    }
   ],
   "source": [
    "from llama_index.llms.huggingface import HuggingFaceLLM \n",
    "from llama_index.core.llms import ChatMessage\n",
    "# Load the local LLM through HuggingFace\n",
    "llm = HuggingFaceLLM(\n",
    "    # Full local path to the downloaded model\n",
    "    model_name=r\"D:/AIModels/text_generation/Qwen/Qwen2___5-0___5B-Instruct\",\n",
    "    tokenizer_name=r\"D:/AIModels/text_generation/Qwen/Qwen2___5-0___5B-Instruct\",\n",
    "    model_kwargs={\"trust_remote_code\":True},\n",
    "    tokenizer_kwargs={\"trust_remote_code\":True}\n",
    ")\n",
    "rsp = llm.chat(messages=[ChatMessage(content=\"xtune是什么?\")])\n",
    "print(rsp)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 5. Building the Knowledge Base\n",
    "Create a ./data folder to hold the knowledge base (corpus).\n",
    "\n",
    "Run the code below; this time the model answers the question correctly and can point to the source of the answer."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "data目录存在: True\n",
      "找到的.md文件: ['./data\\\\XTuner.md']\n",
      "xtuner 是一个用于高效地Fine-tuning大型语言模型的工具库。它支持多种方法，如深度学习算法（如QLoRA）、神经网络架构等。XTuner提供了多个开箱即用的配置文件，并且可以集成DeepSpeed来优化训练过程，因此XTuner能够提供灵活性和性能优势。此外，XTuner还支持对Hugging Face Adapter进行合并以提高模型兼容性。总之，XTuner是一款强大的工具，可以帮助开发人员更高效地进行大规模语言模型的训练和优化。\n"
     ]
    }
   ],
   "source": [
    "from llama_index.core import VectorStoreIndex, SimpleDirectoryReader, Settings\n",
    "from llama_index.embeddings.huggingface import HuggingFaceEmbedding\n",
    "from llama_index.llms.huggingface import HuggingFaceLLM \n",
    "\n",
    "# Initialize a HuggingFaceEmbedding object that converts text into vector representations,\n",
    "# pointing it at the local path of a pretrained sentence-transformer model\n",
    "embed_model = HuggingFaceEmbedding(\n",
    "    model_name=r\"D:/AIModels/embedding_models/Ceceliachenen/paraphrase-multilingual-MiniLM-L12-v2\"\n",
    "    )\n",
    "\n",
    "# Assign the embedding model to the global Settings.embed_model\n",
    "# so that subsequent index construction uses it.\n",
    "Settings.embed_model = embed_model\n",
    "\n",
    "# Load the local LLM through HuggingFace\n",
    "llm = HuggingFaceLLM(\n",
    "    # Full local path to the downloaded model\n",
    "    model_name=r\"D:/AIModels/text_generation/Qwen/Qwen2___5-0___5B-Instruct\",\n",
    "    tokenizer_name=r\"D:/AIModels/text_generation/Qwen/Qwen2___5-0___5B-Instruct\",\n",
    "    model_kwargs={\"trust_remote_code\":True},\n",
    "    tokenizer_kwargs={\"trust_remote_code\":True}\n",
    ")\n",
    "\n",
    "# Set the global llm so that index queries use this model\n",
    "Settings.llm = llm\n",
    "\n",
    "# Building the RAG pipeline\n",
    "# Read all documents from the directory into memory; required_exts restricts loading to the listed extensions\n",
    "documents = SimpleDirectoryReader(\"./data\", required_exts=[\".md\"]).load_data()\n",
    "\n",
    "# Create a VectorStoreIndex from the loaded documents.\n",
    "# The index converts the documents into vectors and stores them for fast retrieval;\n",
    "# by default everything is kept in memory\n",
    "index = VectorStoreIndex.from_documents(documents)\n",
    "\n",
    "# Create a query engine that accepts a query and returns a response grounded in the relevant documents\n",
    "query_engine = index.as_query_engine()\n",
    "response=query_engine.query(\"xtuner是什么?\")\n",
    "\n",
    "print(response)"
   ]
  },
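  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Under the hood, the query engine's retriever embeds the question with the same embedding model, scores it against every stored chunk vector, and hands the top-k chunks to the LLM as context. A self-contained sketch of that scoring step with made-up 2-dimensional vectors (no LlamaIndex required):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "def top_k(query_vec, chunk_vecs, k=2):\n",
    "    # Rank every stored chunk by cosine similarity to the query,\n",
    "    # returning the indices of the k best matches\n",
    "    q = np.asarray(query_vec, dtype=float)\n",
    "    q = q / np.linalg.norm(q)\n",
    "    scored = []\n",
    "    for i, v in enumerate(chunk_vecs):\n",
    "        v = np.asarray(v, dtype=float)\n",
    "        scored.append((float(np.dot(q, v / np.linalg.norm(v))), i))\n",
    "    scored.sort(reverse=True)\n",
    "    return [i for _, i in scored[:k]]\n",
    "\n",
    "# Toy store of 4 chunk \"embeddings\"\n",
    "chunks = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]]\n",
    "hits = top_k([1.0, 0.05], chunks, k=2)  # -> [0, 1]\n",
    "```\n",
    "\n",
    "The retrieved chunks are then stuffed into the prompt as context, which is how the model can cite the source of its answer."
   ]
  },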
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "> Things to think about:\n",
    "> - Persisting the vectors\n",
    "> - Retrieving from a project database that already holds the data\n",
    "\n",
    "Taking Qdrant as an example:\n",
    "- Vector persistence: store the vector data in Qdrant so it can be retrieved and analyzed later.\n",
    "![](./assets/20250323-015709.png)\n",
    "\n",
    "**1. Offline processing: document loading, splitting, vectorization, and storage**\n",
    "\n",
    "```python\n",
    "from llama_index.core import SimpleDirectoryReader, StorageContext, VectorStoreIndex\n",
    "from llama_index.vector_stores.qdrant import QdrantVectorStore\n",
    "from qdrant_client import QdrantClient\n",
    "\n",
    "def ingest_to_db():\n",
    "    # Crawl; RAG_DATA_DIR is assumed to be defined elsewhere in the project\n",
    "    documents = SimpleDirectoryReader(input_dir=RAG_DATA_DIR, recursive=True, required_exts=[\".md\"]).load_data()\n",
    "\n",
    "    # This is from qdrant, not llama-index\n",
    "    db_client = QdrantClient(host=\"localhost\", port=6333)\n",
    "\n",
    "    # Pass the DB client to the vector store\n",
    "    vector_store = QdrantVectorStore(\n",
    "        collection_name=\"rag_1\",\n",
    "        client=db_client\n",
    "        )\n",
    "    storage_context = StorageContext.from_defaults(vector_store=vector_store)\n",
    "\n",
    "    # The effect: feed the data into the DB\n",
    "    VectorStoreIndex.from_documents(\n",
    "        documents,\n",
    "        storage_context=storage_context\n",
    "        )\n",
    "```\n",
    "\n",
    "**2. Online processing: retrieval and generation**\n",
    "\n",
    "```python\n",
    "from llama_index.core import VectorStoreIndex\n",
    "from llama_index.vector_stores.qdrant import QdrantVectorStore\n",
    "from qdrant_client import QdrantClient\n",
    "\n",
    "# Declare the Qdrant vector store, assuming the data is already ingested\n",
    "db_client = QdrantClient(host=\"localhost\", port=6333)\n",
    "vector_store = QdrantVectorStore(\n",
    "    collection_name=\"rag_1\",\n",
    "    client=db_client\n",
    "    )\n",
    "# The index itself does NOT hold any nodes (see below)\n",
    "index = VectorStoreIndex.from_vector_store(vector_store)\n",
    "query_engine = index.as_query_engine()\n",
    "\n",
    "# During inference, the major operations of the query_engine\n",
    "# happen in its retriever, where the vector store's `query()` is called\n",
    "response = query_engine.query(\"Xtuner是什么?\")\n",
    "```\n"
   ]
  },
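  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The \"splitting\" step in the ingestion code deserves a closer look: documents are cut into fixed-size chunks with overlap, so facts near a chunk boundary still land fully inside at least one chunk. LlamaIndex's default node parser works at the sentence/token level; the sketch below shows the same idea at the character level, with made-up sizes:\n",
    "\n",
    "```python\n",
    "def chunk_text(text, chunk_size=200, overlap=50):\n",
    "    # Slide a fixed-size window over the text; consecutive windows\n",
    "    # share `overlap` characters so boundary content is kept intact\n",
    "    if overlap >= chunk_size:\n",
    "        raise ValueError(\"overlap must be smaller than chunk_size\")\n",
    "    step = chunk_size - overlap\n",
    "    chunks = []\n",
    "    for start in range(0, len(text), step):\n",
    "        chunks.append(text[start:start + chunk_size])\n",
    "        if start + chunk_size >= len(text):\n",
    "            break\n",
    "    return chunks\n",
    "\n",
    "pieces = chunk_text(\"x\" * 500, chunk_size=200, overlap=50)\n",
    "# windows start at 0, 150, 300 -> 3 chunks of 200 characters each\n",
    "```\n",
    "\n",
    "Each chunk is what gets embedded and stored, so chunk size is a key retrieval-quality knob."
   ]
  },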
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 6. Building the Web App\n",
    "\n",
    "Use the Streamlit framework to quickly put together a web app; save the code below as app.py"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import streamlit as st\n",
    "from llama_index.core import VectorStoreIndex, SimpleDirectoryReader, Settings\n",
    "from llama_index.embeddings.huggingface import HuggingFaceEmbedding\n",
    "from llama_index.llms.huggingface import HuggingFaceLLM\n",
    "\n",
    "st.set_page_config(page_title=\"RAG Q&A System\", page_icon=\"\")\n",
    "st.title(\"LlamaIndex RAG\")\n",
    "\n",
    "# Initialize the models (cached so they are loaded only once)\n",
    "@st.cache_resource\n",
    "def init_models():\n",
    "    embed_model = HuggingFaceEmbedding(\n",
    "        model_name=r\"D:/AIModels/embedding_models/Ceceliachenen/paraphrase-multilingual-MiniLM-L12-v2\"\n",
    "    )\n",
    "    Settings.embed_model = embed_model\n",
    "\n",
    "    llm = HuggingFaceLLM(\n",
    "        model_name=r\"D:/AIModels/text_generation/Qwen/Qwen2___5-0___5B-Instruct\",\n",
    "        tokenizer_name=r\"D:/AIModels/text_generation/Qwen/Qwen2___5-0___5B-Instruct\",\n",
    "        model_kwargs={\"trust_remote_code\": True},\n",
    "        tokenizer_kwargs={\"trust_remote_code\": True}\n",
    "    )\n",
    "    Settings.llm = llm\n",
    "\n",
    "    documents = SimpleDirectoryReader(\"./data\").load_data()\n",
    "    index = VectorStoreIndex.from_documents(documents)\n",
    "    query_engine = index.as_query_engine()\n",
    "    return query_engine\n",
    "\n",
    "if 'query_engine' not in st.session_state:\n",
    "    st.session_state['query_engine'] = init_models()\n",
    "\n",
    "# Minimal interaction so the engine is actually usable from the page:\n",
    "# take a question, run it through the query engine, and show the answer\n",
    "prompt = st.chat_input(\"Ask a question about the knowledge base\")\n",
    "if prompt:\n",
    "    with st.chat_message(\"user\"):\n",
    "        st.write(prompt)\n",
    "    response = st.session_state['query_engine'].query(prompt)\n",
    "    with st.chat_message(\"assistant\"):\n",
    "        st.write(str(response))"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "rag-learn",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.10.16"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
