{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Getting Started with Chroma"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "[The Complete Chroma Vector Database Handbook](https://medium.com/@lemooljiang/chroma%E5%90%91%E9%87%8F%E6%95%B0%E6%8D%AE%E5%BA%93%E5%AE%8C%E5%85%A8%E6%89%8B%E5%86%8C-4248b15679ea)\n",
    "\n",
    "[A Minimal Chroma Tutorial](https://www.cnblogs.com/rude3knife/p/chroma_tutorial.html)\n",
    "\n",
    "[Chroma Learning Notes](https://www.cnblogs.com/deeplearningmachine/p/18132593)\n",
    "\n",
    "[Chroma Cookbook](https://cookbook.chromadb.dev/)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Installation"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# !pip install chromadb==0.5.16"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Chroma Quick Start"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Design Philosophy"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Chroma aims to help users build LLM applications more easily, making it simple to integrate real-world documents (knowledge, facts, and skills) into large language models.\n",
    "\n",
    "Chroma provides tools to:\n",
    "\n",
    "- store embeddings and their metadata\n",
    "- embed documents and queries\n",
    "- search embeddings\n",
    "\n",
    "Chroma's design prioritizes:\n",
    "\n",
    "- simplicity and developer productivity\n",
    "- analysis on top of search\n",
    "- speed: it also happens to be very quick"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "![image.png](../imgs/chroma.png)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Demo"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import chromadb\n",
    "\n",
    "chroma_client = chromadb.Client()\n",
    "\n",
    "collection = chroma_client.create_collection(name=\"my_collection\")\n",
    "\n",
    "# We add two documents (two plain strings for simplicity): one about an engineer, one about steak.\n",
    "# Since add() is called without an embeddings argument, Chroma embeds the documents with its default\n",
    "# all-MiniLM-L6-v2 model. We then query the collection for the 2 most relevant results, asking: Which food is the best?\n",
    "collection.add(\n",
    "    documents=[\"This is a document about engineer\", \"This is a document about steak\"],\n",
    "    metadatas=[{\n",
    "        \"source\": \"doc1\"\n",
    "    }, {\n",
    "        \"source\": \"doc2\"\n",
    "    }],\n",
    "    ids=[\"id1\", \"id2\"])\n",
    "\n",
    "results = collection.query(query_texts=[\"Which food is the best?\"], n_results=2)\n",
    "\n",
    "print(results)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Data Persistence"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Chroma usually runs as an in-memory database, but it can also persist data to disk.\n",
    "\n",
    "To do so, initialize the client with PersistentClient:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {},
   "outputs": [],
   "source": [
    "import chromadb\n",
    "from chromadb.config import Settings\n",
    "\n",
    "# Creates the store at this path if it does not exist; otherwise loads the existing data\n",
    "client = chromadb.PersistentClient(path=\"../datas/chroma/\", settings=Settings(allow_reset=True))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "After running this code, a chroma.sqlite3 file is created at the path you specified.\n",
    "\n",
    "The tables inside this SQLite database, shown below, reveal part of how Chroma organizes its data:\n",
    "\n",
    "![chroma_sqlite01.png](../imgs/chroma_sqlite01.png)\n",
    "\n",
    "![chroma_sqlite02.png](../imgs/chroma_sqlite02.png)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The Chroma client also supports the following two APIs:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "client.heartbeat()  # returns a nanosecond heartbeat. Useful for making sure the client remains connected.\n",
    "client.reset()  # Empties and completely resets the database. ⚠️ This is destructive and not reversible."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Chroma also supports a client/server mode for cross-process communication. See:\n",
    "\n",
    "https://docs.trychroma.com/usage-guide#running-chroma-in-clientserver-mode"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Collections"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Collections are a central concept in Chroma. The code and comments below walk through their main operations."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {},
   "outputs": [],
   "source": [
    "collection = client.get_or_create_collection(\n",
    "    name=\"test\"\n",
    ")  # Get a collection object from an existing collection, by name. If it doesn't exist, create it.\n",
    "collection = client.get_collection(\n",
    "    name=\"test\"\n",
    ")  # Get a collection object from an existing collection, by name. Will raise an exception if it's not found.\n",
    "client.delete_collection(\n",
    "    name=\"my_collection\"\n",
    ")  # Delete a collection and all associated embeddings, documents, and metadata. ⚠️ This is destructive and not reversible\n",
    "collection.peek()  # returns a list of the first 10 items in the collection\n",
    "collection.count()  # returns the number of items in the collection\n",
    "collection.modify(name=\"new_name\")  # Rename the collection"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A collection can also carry metadata of its own:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {},
   "outputs": [],
   "source": [
    "collection = client.create_collection(\n",
    "    name=\"collection_name\",\n",
    "    metadata={\"hnsw:space\": \"cosine\"}  # l2 is the default\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "You can switch the distance function by setting \"hnsw:space\" in the collection's metadata, as in the cell above. The supported options are:\n",
    "\n",
    "| Distance          | parameter | Equation                                                     |\n",
    "| ----------------- | --------- | ------------------------------------------------------------ |\n",
    "| Squared L2        | 'l2'      | $d = \\sum\\left(A_i-B_i\\right)^2$                             |\n",
    "| Inner product     | 'ip'      | $d = 1.0 - \\sum\\left(A_i \\times B_i\\right) $                 |\n",
    "| Cosine similarity | 'cosine'  | $d = 1.0 - \\frac{\\sum\\left(A_i \\times B_i\\right)}{\\sqrt{\\sum\\left(A_i^2\\right)} \\cdot \\sqrt{\\sum\\left(B_i^2\\right)}}$ |"
   ]
  },
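  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The three distance formulas in the table can be sketched in plain Python (the vectors a and b below are made-up values, purely to illustrate the math):\n",
    "\n",
    "```python\n",
    "import math\n",
    "\n",
    "a = [1.0, 2.0, 3.0]\n",
    "b = [2.0, 0.0, 1.0]\n",
    "\n",
    "def l2(a, b):\n",
    "    # Squared L2: d = sum((A_i - B_i)^2)\n",
    "    return sum((x - y) ** 2 for x, y in zip(a, b))\n",
    "\n",
    "def ip(a, b):\n",
    "    # Inner product distance: d = 1 - sum(A_i * B_i)\n",
    "    return 1.0 - sum(x * y for x, y in zip(a, b))\n",
    "\n",
    "def cosine(a, b):\n",
    "    # Cosine distance: d = 1 - dot(A, B) / (|A| * |B|)\n",
    "    dot = sum(x * y for x, y in zip(a, b))\n",
    "    norm_a = math.sqrt(sum(x * x for x in a))\n",
    "    norm_b = math.sqrt(sum(x * x for x in b))\n",
    "    return 1.0 - dot / (norm_a * norm_b)\n",
    "\n",
    "print(l2(a, b), ip(a, b), cosine(a, b))\n",
    "```"
   ]
  },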
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Documents"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "In the demo above, we called the add function with its default behavior. Its signature is:\n",
    "\n",
    "```python\n",
    "def add(ids: OneOrMany[ID],\n",
    "        embeddings: Optional[OneOrMany[Embedding]] = None,\n",
    "        metadatas: Optional[OneOrMany[Metadata]] = None,\n",
    "        documents: Optional[OneOrMany[Document]] = None) -> None\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The parameters are:\n",
    "\n",
    "- ids: a unique ID for each document\n",
    "- embeddings (optional): if omitted, embeddings are computed with the collection's embedding_function\n",
    "- metadatas (optional): metadata to associate with each embedding; you can filter on it at query time\n",
    "- documents (optional): the document associated with each embedding; you may even omit the documents entirely\n",
    "\n",
    "Example:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "metadata": {},
   "outputs": [],
   "source": [
    "collection.add(\n",
    "    embeddings=[[1.2, 2.3, 4.5], [6.7, 8.2, 9.2]],\n",
    "    documents=[\"This is a document\", \"This is another document\"],\n",
    "    metadatas=[{\n",
    "        \"source\": \"my_source\"\n",
    "    }, {\n",
    "        \"source\": \"my_source\"\n",
    "    }],\n",
    "    ids=[\"id1\", \"id2\"])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Basic Queries"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To run a similarity search, use the query method:\n",
    "\n",
    "```python\n",
    "collection.query(\n",
    "    query_embeddings=[[11.1, 12.1, 13.1],[1.1, 2.3, 3.2], ...],\n",
    "    n_results=10,\n",
    "    where={\"metadata_field\": \"is_equal_to_this\"},\n",
    "    where_document={\"$contains\":\"search_string\"}\n",
    ")\n",
    "```\n",
    "\n",
    "To look up items by id, use the get method:\n",
    "\n",
    "```python\n",
    "collection.get(\n",
    "    ids=[\"id1\", \"id2\", \"id3\", ...],\n",
    "    where={\"style\": \"style1\"}\n",
    ")\n",
    "```\n",
    "\n",
    "You can also choose which fields the results include:\n",
    "\n",
    "```python\n",
    "# Only get documents and ids\n",
    "collection.get(\n",
    "    include=[\"documents\"]\n",
    ")\n",
    "\n",
    "collection.query(\n",
    "    query_embeddings=[[11.1, 12.1, 13.1], [1.1, 2.3, 3.2], ...],\n",
    "    include=[\"documents\"]\n",
    ")\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Filtered Queries"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Chroma supports filtering queries by metadata and by document content."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**The where argument filters on metadata**\n",
    "\n",
    "```python\n",
    "{\n",
    "    \"metadata_field\": {\n",
    "        <Operator>: <Value>\n",
    "    }\n",
    "}\n",
    "```\n",
    "\n",
    "The following operators are supported:\n",
    "\n",
    "> `$eq` - equal to (string, int, float)\n",
    ">\n",
    "> `$ne` - not equal to (string, int, float)\n",
    ">\n",
    "> `$gt` - greater than (int, float)\n",
    ">\n",
    "> `$gte` - greater than or equal to (int, float)\n",
    ">\n",
    "> `$lt` - less than (int, float)\n",
    ">\n",
    "> `$lte` - less than or equal to (int, float)\n",
    "\n",
    "```python\n",
    "{\n",
    "    \"metadata_field\": \"search_string\"\n",
    "}\n",
    "\n",
    "# is equivalent to\n",
    "{\n",
    "    \"metadata_field\": {\n",
    "        \"$eq\": \"search_string\"\n",
    "    }\n",
    "}\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**The where_document argument filters on document content**\n",
    "\n",
    "```python\n",
    "# Filtering for a search_string\n",
    "{\n",
    "    \"$contains\": \"search_string\"\n",
    "}\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**Logical operators**\n",
    "\n",
    "Filter conditions can be combined with logical operators:\n",
    "\n",
    "```python\n",
    "{\n",
    "    \"$and\": [\n",
    "        {\n",
    "            \"metadata_field\": {\n",
    "                <Operator>: <Value>\n",
    "            }\n",
    "        },\n",
    "        {\n",
    "            \"metadata_field\": {\n",
    "                <Operator>: <Value>\n",
    "            }\n",
    "        }\n",
    "    ]\n",
    "}\n",
    "{\n",
    "    \"$or\": [\n",
    "        {\n",
    "            \"metadata_field\": {\n",
    "                <Operator>: <Value>\n",
    "            }\n",
    "        },\n",
    "        {\n",
    "            \"metadata_field\": {\n",
    "                <Operator>: <Value>\n",
    "            }\n",
    "        }\n",
    "    ]\n",
    "}\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**Using $in / $nin**\n",
    "\n",
    "$in matches documents whose metadata value appears in the given list:\n",
    "\n",
    "```json\n",
    "{\n",
    "  \"metadata_field\": {\n",
    "    \"$in\": [\"value1\", \"value2\", \"value3\"]\n",
    "  }\n",
    "}\n",
    "```\n",
    "\n",
    "$nin matches the opposite:\n",
    "\n",
    "```json\n",
    "{\n",
    "  \"metadata_field\": {\n",
    "    \"$nin\": [\"value1\", \"value2\", \"value3\"]\n",
    "  }\n",
    "}\n",
    "```"
   ]
  },
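  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Putting the filter operators together, here is a minimal runnable sketch (the ids, pages, and vectors are made up; explicit embeddings are passed so no embedding model needs to be downloaded):\n",
    "\n",
    "```python\n",
    "import chromadb\n",
    "\n",
    "client = chromadb.Client()\n",
    "coll = client.create_collection(name='filter_demo')\n",
    "coll.add(\n",
    "    ids=['id1', 'id2', 'id3'],\n",
    "    embeddings=[[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]],\n",
    "    metadatas=[{'page': 1}, {'page': 2}, {'page': 3}],\n",
    ")\n",
    "\n",
    "# query: nearest neighbors restricted to items whose page >= 2\n",
    "res = coll.query(\n",
    "    query_embeddings=[[1.0, 1.0]],\n",
    "    n_results=2,\n",
    "    where={'page': {'$gte': 2}},\n",
    ")\n",
    "print(res['ids'])\n",
    "\n",
    "# get: fetch items by metadata with $in\n",
    "hits = coll.get(where={'page': {'$in': [1, 3]}})\n",
    "print(sorted(hits['ids']))\n",
    "```"
   ]
  },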
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Updating Documents"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Pass ids; the other parameters mirror those of add:\n",
    "\n",
    "```python\n",
    "collection.update(\n",
    "    ids=[\"id1\", \"id2\", \"id3\", ...],\n",
    "    embeddings=[[1.1, 2.3, 3.2], [4.5, 6.9, 4.4], [1.1, 2.3, 3.2], ...],\n",
    "    metadatas=[{\"chapter\": \"3\", \"verse\": \"16\"}, {\"chapter\": \"3\", \"verse\": \"5\"}, {\"chapter\": \"29\", \"verse\": \"11\"}, ...],\n",
    "    documents=[\"doc1\", \"doc2\", \"doc3\", ...],\n",
    ")\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Deleting Documents"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Pass ids; an optional where filter further restricts what gets deleted:\n",
    "\n",
    "```python\n",
    "collection.delete(\n",
    "    ids=[\"id1\", \"id2\", \"id3\",...],\n",
    "    where={\"chapter\": \"20\"}\n",
    ")\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Chroma Embedding Functions"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### The Default Embedding Function"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "By default, Chroma computes embeddings with the `all-MiniLM-L6-v2` model."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Official Pretrained Models"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "You can also use the official pretrained models hosted on Huggingface.\n",
    "\n",
    "Load one with sentence-transformers or text2vec and specify it when creating the collection."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# !pip install sentence-transformers==3.0.1\n",
    "# !pip install text2vec==1.3.0"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 18,
   "metadata": {},
   "outputs": [],
   "source": [
    "from sentence_transformers import SentenceTransformer\n",
    "model = SentenceTransformer('all-mpnet-base-v2')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "\n",
    "> The **all-*** models were trained on all available training data (more than 1 billion training pairs) and are designed as **general purpose** models. The **all-mpnet-base-v2** model provides the best quality, while **all-MiniLM-L6-v2** is 5 times faster and still offers good quality. Toggle *All models* to see all evaluated models or visit [HuggingFace Model Hub](https://huggingface.co/models?library=sentence-transformers) to view all existing sentence-transformers models.\n",
    "\n",
    "There are many options; the official site lists the details of each pretrained model:\n",
    "\n",
    "https://www.sbert.net/docs/pretrained_models.html\n",
    "\n",
    "\n",
    "![embedding_models.png](../imgs/embedding_models.png)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Specify the embedding function when creating the collection:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import chromadb.utils.embedding_functions as ef\n",
    "import uuid\n",
    "\n",
    "# model_name defaults to all-MiniLM-L6-v2\n",
    "ste = ef.SentenceTransformerEmbeddingFunction(model_name=\"all-mpnet-base-v2\")\n",
    "client.get_or_create_collection(name=\"st_coll\")\n",
    "client.delete_collection(\"st_coll\")\n",
    "st_coll = client.create_collection(name=\"st_coll\", embedding_function=ste)\n",
    "st_coll.add(\n",
    "    documents=[\"This is a document about engineer\", \"This is a document about steak\"],\n",
    "    metadatas=[{\n",
    "        \"source\": \"doc1\"\n",
    "    }, {\n",
    "        \"source\": \"doc2\"\n",
    "    }],\n",
    "    ids=[str(uuid.uuid4()), str(uuid.uuid4())])\n",
    "\n",
    "# model_name defaults to shibing624/text2vec-base-chinese\n",
    "t2ve = ef.Text2VecEmbeddingFunction(model_name=\"shibing624/text2vec-base-chinese-sentence\")\n",
    "client.get_or_create_collection(name=\"t2ve_coll\")\n",
    "client.delete_collection(\"t2ve_coll\")\n",
    "t2ve_coll = client.create_collection(name=\"t2ve_coll\", embedding_function=t2ve)\n",
    "t2ve_coll.add(\n",
    "    documents=[\"This is a document about engineer\", \"This is a document about steak\"],\n",
    "    metadatas=[{\n",
    "        \"source\": \"doc1\"\n",
    "    }, {\n",
    "        \"source\": \"doc2\"\n",
    "    }],\n",
    "    ids=[str(uuid.uuid4()), str(uuid.uuid4())])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Other Third-Party Embedding Functions"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "You can also use other third-party models and platforms, for example:\n",
    "\n",
    "```python\n",
    "openai_ef = embedding_functions.OpenAIEmbeddingFunction(\n",
    "                api_key=\"YOUR_API_KEY\",\n",
    "                model_name=\"text-embedding-ada-002\"\n",
    "            )\n",
    "```\n",
    "\n",
    "Others include Cohere, HuggingFace, and more."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Custom Embedding Functions"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "You can even plug in your own local embedding function; Chroma provides an extension point for this:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from chromadb import Documents, EmbeddingFunction, Embeddings\n",
    "from text2vec import SentenceModel\n",
    "\n",
    "# Load the text2vec embedding model\n",
    "model = SentenceModel(\"shibing624/text2vec-base-chinese-sentence\")\n",
    "\n",
    "\n",
    "# Documents is a list of strings; Embeddings is a list of float vectors\n",
    "class MyEmbeddingFunction(EmbeddingFunction):\n",
    "\n",
    "    def __call__(self, input: Documents) -> Embeddings:\n",
    "        # embed the documents somehow\n",
    "        return model.encode(input).tolist()\n",
    "\n",
    "client.get_or_create_collection(name=\"udf_coll\")\n",
    "client.delete_collection(\"udf_coll\")\n",
    "udf_coll = client.create_collection(name=\"udf_coll\", embedding_function=MyEmbeddingFunction())\n",
    "udf_coll.add(\n",
    "    documents=[\"This is a document about engineer\", \"This is a document about steak\"],\n",
    "    metadatas=[{\n",
    "        \"source\": \"doc1\"\n",
    "    }, {\n",
    "        \"source\": \"doc2\"\n",
    "    }],\n",
    "    ids=[str(uuid.uuid4()), str(uuid.uuid4())])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Hands-On: Similarity Search over the Four Great Classical Chinese Novels with Chroma in LangChain"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Many people first encounter Chroma through LangChain, which frequently uses it as a vector store. The Chroma examples in the official LangChain docs, however, rely on English embedding models and English corpora.\n",
    "\n",
    "Official docs:\n",
    "\n",
    "https://python.langchain.com/v0.1/docs/modules/data_connection/vectorstores/\n",
    "\n",
    "Since this tutorial is aimed at Chinese-speaking readers, this section works through the exercise with a Chinese corpus and a Chinese embedding model instead.\n",
    "\n",
    "The source texts can be downloaded from GitHub:\n",
    "\n",
    "https://github.com/naosense/Yiya/tree/master/book\n",
    "\n",
    "Here is the complete code; we will then walk through it step by step:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# !pip install langchain==0.2.16 langchain-chroma==0.1.4 langchain-community==0.2.12 modelscope==1.20.1 addict oss2==2.19.1 datasets==2.16.0 simplejson==3.19.2 sortedcontainers==2.4.0"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain_community.document_loaders import TextLoader\n",
    "from langchain_community.embeddings import ModelScopeEmbeddings\n",
    "from langchain_text_splitters.character import CharacterTextSplitter\n",
    "from langchain_community.vectorstores import Chroma\n",
    "\n",
    "# Load the raw documents\n",
    "raw_documents_sanguo = TextLoader('../datas/novel/三国演义.txt', encoding='utf-8').load()\n",
    "raw_documents_xiyou = TextLoader('../datas/novel/西游记.txt', encoding='utf-8').load()\n",
    "\n",
    "# Split the documents into chunks\n",
    "text_splitter = CharacterTextSplitter(chunk_size=500, chunk_overlap=0)\n",
    "documents_sanguo = text_splitter.split_documents(raw_documents_sanguo)\n",
    "documents_xiyou = text_splitter.split_documents(raw_documents_xiyou)\n",
    "documents = documents_sanguo + documents_xiyou\n",
    "print(\"documents nums:\", len(documents))\n",
    "\n",
    "# Compute the embeddings\n",
    "model_id = \"damo/nlp_corom_sentence-embedding_chinese-base\"\n",
    "embeddings = ModelScopeEmbeddings(model_id=model_id)\n",
    "db = Chroma.from_documents(documents, embedding=embeddings)\n",
    "\n",
    "# Run the similarity search\n",
    "query = \"美猴王是谁？\"\n",
    "docs = db.similarity_search(query, k=5)\n",
    "\n",
    "# Print the results\n",
    "for doc in docs:\n",
    "    print(\"===\")\n",
    "    print(\"metadata:\", doc.metadata)\n",
    "    print(\"page_content:\", doc.page_content)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Preparing the Raw Documents"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The novels can be downloaded from GitHub: https://github.com/naosense/Yiya/tree/master/book\n",
    "\n",
    "The original files are encoded in GB2312; convert them to UTF-8 first, or else pass encoding='GB2312' to TextLoader.\n",
    "\n",
    "To detect a file's character encoding, use Python's chardet library."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# !pip install chardet==5.2.0"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import chardet\n",
    "\n",
    "def detect_file_encoding(file_path):\n",
    "    with open(file_path, 'rb') as f:\n",
    "        result = chardet.detect(f.read())\n",
    "    return result['encoding']\n",
    "\n",
    "file_path = '../datas/novel/三国演义.txt'\n",
    "encoding = detect_file_encoding(file_path)\n",
    "print(f'The encoding of file {file_path} is {encoding}')"
   ]
  },
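  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The GB2312-to-UTF-8 conversion can be sketched as below (convert_to_utf8 is a hypothetical helper; gb18030 is a superset of GB2312, so it decodes GB2312 files safely):\n",
    "\n",
    "```python\n",
    "def convert_to_utf8(src, dst, source_encoding='gb18030'):\n",
    "    # Read with the legacy encoding, then rewrite as UTF-8\n",
    "    with open(src, 'r', encoding=source_encoding) as f:\n",
    "        text = f.read()\n",
    "    with open(dst, 'w', encoding='utf-8') as f:\n",
    "        f.write(text)\n",
    "```"
   ]
  },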
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Splitting the Documents"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Documents are usually large (novels, legal texts, and so on), so we use LangChain's CharacterTextSplitter to split them into chunks:\n",
    "\n",
    "```python\n",
    "text_splitter = CharacterTextSplitter(chunk_size=500, chunk_overlap=0)\n",
    "```"
   ]
  },
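  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A toy example of what the splitter does (the text is made up; CharacterTextSplitter splits on its separator, blank lines by default, then packs the pieces into chunks of at most chunk_size characters):\n",
    "\n",
    "```python\n",
    "from langchain_text_splitters.character import CharacterTextSplitter\n",
    "\n",
    "text = 'First paragraph.\\n\\nSecond paragraph.'\n",
    "\n",
    "splitter = CharacterTextSplitter(chunk_size=20, chunk_overlap=0)\n",
    "chunks = splitter.split_text(text)\n",
    "print(chunks)\n",
    "```"
   ]
  },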
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Embedding"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We use the [general-purpose Chinese embedding model](https://modelscope.cn/models/iic/nlp_corom_sentence-embedding_chinese-base/summary) (`damo/nlp_corom_sentence-embedding_chinese-base`) from the ModelScope platform. It produces 768-dimensional vectors.\n",
    "\n",
    "![modelscope_embedding.png](../imgs/modelscope_embedding.png)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Query"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "With both documents loaded, we ask \"美猴王是谁？\" (Who is the Monkey King?) and request the 5 most similar results. As the output shows, all 5 returned chunks come from 西游记.txt (Journey to the West)."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Summary"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Vector databases are playing an increasingly important role in AI, yet many vendors bury them deep inside their products, so users rarely see how they are actually used. Since large-model engineering still rests on open-source code, studying Chroma is a fast way to grasp the fundamentals of vector databases, and it will help us understand large-model systems better in the future."
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.10.15"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
