{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "07608b87-6f06-4fbb-9d7f-bdb92f840fe8",
   "metadata": {},
   "source": [
    "# Local Deployment of Open-Source Models: Xinference"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "6d00f7d3-3a5c-41f2-8766-3beb5261a028",
   "metadata": {},
   "source": [
    "## Practical Notes\n",
    "\n",
    "This chapter focuses on **local, bare-metal deployment of large models**, using Ubuntu 22.04 as the base platform and working in small groups for installation and deployment. This setup closely matches a common production scenario: **a single large-model instance shared by multiple users and teams**, and provides a reproducible reference for enterprise-grade deployment."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "45fa00d9-da16-4ebf-ba20-7534379effeb",
   "metadata": {},
   "source": [
    "## Installing Xinference"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f9fd1501-b2e0-4399-9a05-666c0bc42178",
   "metadata": {},
   "source": [
    "### Installing with pip"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "ef2f003c-1c56-4d0d-849f-1f8e25790129",
   "metadata": {},
   "source": [
    "```bash\n",
    "pip install \"xinference[vllm]\"\n",
    "```"
   ]
  },
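  {
   "cell_type": "markdown",
   "id": "b7e3a1c2-0f4d-4a6e-9b21-3c5d7e9f1a2b",
   "metadata": {},
   "source": [
    "After the pip install, the server can be started directly on the host; the host flag and port below match the ones used throughout this chapter:\n",
    "\n",
    "```bash\n",
    "xinference-local -H 0.0.0.0 --port 9997\n",
    "```"
   ]
  },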
  {
   "cell_type": "markdown",
   "id": "15c82ecc-f9aa-42e5-9c99-d4f204cb593b",
   "metadata": {},
   "source": [
    "### Docker (recommended)\n",
    "\n",
    "#### Installation"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "4cac1330-5870-4d29-8790-0fff024f8192",
   "metadata": {},
   "source": [
    "Official example:  \n",
    "Run on a machine with NVIDIA GPUs:\n",
    "```bash\n",
    "docker run -e XINFERENCE_MODEL_SRC=modelscope -p 9998:9997 --gpus all xprobe/xinference:<your_version> xinference-local -H 0.0.0.0 --log-level debug\n",
    "```\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "b2b2ff05-ffc3-44c6-b4de-284dca501113",
   "metadata": {},
   "source": [
    "Hands-on steps:\n",
    "\n",
    "Pull the image:\n",
    "```bash\n",
    "docker pull xprobe/xinference:latest\n",
    "```\n",
    "\n",
    "Start the container:\n",
    "```bash\n",
    "docker run -d \\\n",
    "  --name xinference-server \\\n",
    "  --ipc=host \\\n",
    "  -v /root/.xinference:/root/.xinference \\\n",
    "  -v /root/.cache/modelscope:/root/.cache/modelscope \\\n",
    "  -e XINFERENCE_MODEL_SRC=modelscope \\\n",
    "  -p 9997:9997 \\\n",
    "  --gpus all \\\n",
    "  xprobe/xinference:latest \\\n",
    "  xinference-local -H 0.0.0.0\n",
    "```\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "20415a5d-bf44-41c1-965a-b853c81bcc28",
   "metadata": {},
   "source": [
    "Check the running status:\n",
    "\n",
    "```bash\n",
    "docker ps\n",
    "```\n",
    "\n",
    "If you see xprobe/xinference:latest running, the container started successfully.\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "3891bc6d-6855-4c74-952f-b59b2086ab36",
   "metadata": {},
   "source": [
    "View the log output:\n",
    "```bash\n",
    "docker logs -f <container ID or name>\n",
    "```"
   ]
  },
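  {
   "cell_type": "markdown",
   "id": "c8f4b2d3-1a5e-4b7f-8c32-4d6e8f0a2b3c",
   "metadata": {},
   "source": [
    "Besides the logs, a quick liveness check is to query the service's OpenAI-compatible model-list endpoint (using the port mapping from the run command above):\n",
    "\n",
    "```bash\n",
    "curl http://localhost:9997/v1/models\n",
    "```\n",
    "\n",
    "A JSON response means the API server is up and reachable."
   ]
  },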
  {
   "cell_type": "markdown",
   "id": "f9ca801b-3cc2-46e4-b96d-31713d22b5e3",
   "metadata": {},
   "source": [
    "#### Uninstalling\n",
    "\n",
    "🔄 Stop and remove the container\n",
    "\n",
    "1.  Stop the container: first stop the running container.\n",
    "\n",
    "```bash\n",
    "docker stop xinference-server\n",
    "```\n",
    "2.  Remove the container: once stopped, it can be removed.\n",
    "```bash\n",
    "docker rm xinference-server\n",
    "```\n",
    "\n",
    "Optional cleanup\n",
    "\n",
    "Removing the container does not automatically delete its image or the data it wrote to the host.\n",
    "•   Remove the image: to also reclaim disk space, delete the Docker image the container used. Run docker images to find the \"xprobe/xinference\" image, then remove it with docker rmi <image ID>.\n",
    "\n",
    "•   Clean up mounted data: the -v options in the run command bind-mount host directories (e.g. /root/.xinference) into the container, and removing the container does not delete them. If you are certain you no longer need this data (e.g. cached model files), delete the corresponding host directories manually:\n",
    "```bash\n",
    "rm -rf /root/.xinference\n",
    "rm -rf /root/.cache/modelscope\n",
    "```\n",
    "Note: this operation is irreversible and permanently deletes the data.\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "543363c3-5840-4a9e-a88a-dbae24de928b",
   "metadata": {},
   "source": [
    "#### Upgrading\n",
    "\n",
    "Uninstall first, then upgrade.\n",
    "\n",
    "To upgrade a Docker-based Xinference service, the core steps are pulling the new image and recreating the container. The concrete steps and a few caveats follow.\n",
    "\n",
    "🔄 Upgrade steps\n",
    "\n",
    "1.  Stop and remove the old container\n",
    "    First stop the currently running container and remove it.\n",
    "\n",
    "    ```bash\n",
    "    docker stop xinference-server\n",
    "    docker rm xinference-server\n",
    "    ```\n",
    "    If the container is already stopped, `docker rm xinference-server` alone is enough.\n",
    "\n",
    "2.  Pull the latest image\n",
    "    Use docker pull to fetch the latest Xinference image.\n",
    "\n",
    "    ```bash\n",
    "    docker pull xprobe/xinference:latest\n",
    "    ```\n",
    "    To upgrade to a specific stable release instead, replace latest with the corresponding version tag, e.g. v0.1.0.\n",
    "\n",
    "3.  Recreate and start the container from the new image\n",
    "    Start the new container with the same arguments and volume mounts you used originally; this preserves your model data and configuration. Based on the earlier command, the full startup command is:\n",
    "\n",
    "```bash\n",
    "docker run -d \\\n",
    "  --name xinference-server \\\n",
    "  --ipc=host \\\n",
    "  -v /root/.xinference:/root/.xinference \\\n",
    "  -v /root/.cache/modelscope:/root/.cache/modelscope \\\n",
    "  -e XINFERENCE_MODEL_SRC=modelscope \\\n",
    "  -p 9997:9997 \\\n",
    "  --gpus all \\\n",
    "  xprobe/xinference:latest \\\n",
    "  xinference-local -H 0.0.0.0\n",
    "```\n",
    "    The key is the -v mounts of the host-side model cache directories: the new container can directly reuse the models already downloaded, with no re-download needed.\n",
    "\n",
    "💡 Upgrade strategy and verification\n",
    "\n",
    "• Recommended strategy: smooth upgrade\n",
    "\n",
    "    To minimize downtime, use a smoother upgrade path: first start a brand-new container from the new image under a new name (e.g. xinference-server-new) and a temporary port (e.g. 9998:9997). Once the new container is verified to work, stop and remove the old xinference-server, then bring the service up again under the original name and port mapping (a container's published ports cannot be changed in place, so this last step is a re-create rather than a rename). This resembles blue-green or rolling deployment and reduces upgrade risk.\n",
    "\n",
    "• Verify the upgrade\n",
    "\n",
    "    After the container starts, check its status with:\n",
    "    ```bash\n",
    "    docker ps\n",
    "    ```\n",
    "    Once the container shows \"Up\", open http://<your server IP>:9997 in a browser to reach the Xinference Web UI and confirm the service is healthy and models can be managed.\n",
    "\n",
    "⚠️ Caveats\n",
    "\n",
    "•   Data persistence: models survive this upgrade only because the original container mounted its directories onto the host via -v. Make sure the new container uses exactly the same volume mount paths as the old one; this is the precondition for not losing model data.\n",
    "\n",
    "•   Version changes: for major-version jumps (e.g. from v0.6.x to v0.7.x), check the official release notes (e.g. the GitHub Releases page) for breaking changes, such as configuration or API modifications, that may need handling.\n",
    "\n",
    "•   Managing with Docker Compose: if your deployment is more complex, or you expect frequent upgrades, it is strongly recommended to define the service in a docker-compose.yml file. An upgrade then usually only requires bumping the image tag and running docker-compose up -d, which is simpler and more reliable."
   ]
  },
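  {
   "cell_type": "markdown",
   "id": "d9a5c3e4-2b6f-4c80-9d43-5e7f9a1b3c4d",
   "metadata": {},
   "source": [
    "The smooth-upgrade strategy can be sketched as follows; the temporary name and port are illustrative, and because a container's published ports cannot be changed in place, the final step re-creates the container rather than renaming it:\n",
    "\n",
    "```bash\n",
    "# 1. Start the new version under a temporary name and port\n",
    "docker run -d --name xinference-server-new \\\n",
    "  --ipc=host \\\n",
    "  -v /root/.xinference:/root/.xinference \\\n",
    "  -v /root/.cache/modelscope:/root/.cache/modelscope \\\n",
    "  -e XINFERENCE_MODEL_SRC=modelscope \\\n",
    "  -p 9998:9997 --gpus all \\\n",
    "  xprobe/xinference:latest xinference-local -H 0.0.0.0\n",
    "\n",
    "# 2. Verify http://<server-ip>:9998 works, then retire the old container\n",
    "docker stop xinference-server && docker rm xinference-server\n",
    "\n",
    "# 3. Replace the temporary container with one using the original name/port\n",
    "docker stop xinference-server-new && docker rm xinference-server-new\n",
    "docker run -d --name xinference-server \\\n",
    "  --ipc=host \\\n",
    "  -v /root/.xinference:/root/.xinference \\\n",
    "  -v /root/.cache/modelscope:/root/.cache/modelscope \\\n",
    "  -e XINFERENCE_MODEL_SRC=modelscope \\\n",
    "  -p 9997:9997 --gpus all \\\n",
    "  xprobe/xinference:latest xinference-local -H 0.0.0.0\n",
    "```\n",
    "\n",
    "Both containers share the same host cache mounts, so the new instance reuses the already-downloaded models."
   ]
  },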
  {
   "cell_type": "markdown",
   "id": "f86e6136-6777-4a66-a22e-5bf7109a2d56",
   "metadata": {},
   "source": [
    "## Installing Models"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "5260f1d5-a493-477a-977f-d92ed63c1518",
   "metadata": {},
   "source": [
    "### Listing deployed models\n",
    "\n",
    "Use `xinference list`\n",
    "\n",
    "Assuming Xinference was deployed with Docker:\n",
    "\n",
    "\n",
    "```bash\n",
    "root@server5:~# docker exec -it xinference-server /bin/bash\n",
    "root@37b217ed0393:/opt/inference# xinference list\n",
    "UID            Type    Name    Format      Size (in billions)  Quantization\n",
    "-------------  ------  ------  --------  --------------------  --------------\n",
    "my_qwen3_0.6b  LLM     qwen3   pytorch                    0_6  none\n",
    "\n",
    "UID                 Type       Name                    Dimensions\n",
    "------------------  ---------  --------------------  ------------\n",
    "my_qwen_embed_0.6b  embedding  Qwen3-Embedding-0.6B          1024\n",
    "\n",
    "UID                    Type    Name\n",
    "---------------------  ------  -------------------\n",
    "my_qwen_reranker_0.6b  rerank  Qwen3-Reranker-0.6B\n",
    "\n",
    "```\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "25948782-9e47-4849-a31e-440908fc7433",
   "metadata": {},
   "source": [
    "### LLM models"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "9cdcdd10-c7a0-4b54-9e93-cc8343a93418",
   "metadata": {},
   "source": [
    "#### Installing via the Web UI"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "8d84103e-3265-4a3c-9104-3c95779c68e0",
   "metadata": {},
   "source": [
    "The Web UI is available at http://127.0.0.1:9997/ui,  \n",
    "and the API documentation at http://127.0.0.1:9997/docs."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "aa833f7e-f6ad-4c5c-8340-5ca941263e66",
   "metadata": {},
   "source": [
    "Deploying Qwen3-4B on a single GPU\n",
    "![image.png](../assets/xinference-llm1.png)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "be24526c-e419-40ae-a41e-711df7359682",
   "metadata": {},
   "source": [
    "Deploying Qwen3-14B across 4 GPUs\n",
    "\n",
    "![image.png](../assets/xinference-llm2.png)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "fcaac516-5371-4979-b5ab-b7780df9dcc7",
   "metadata": {},
   "source": [
    "Checking the deployment result:\n",
    "\n",
    "![image.png](../assets/xinference-llm3.png)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "304de1a6-ffd0-445a-86f7-668146972ebf",
   "metadata": {},
   "source": [
    "Viewing the deployment logs:\n",
    "```bash\n",
    "(base) root@server5:~# docker logs -f xinference-server\n",
    "\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "2c010d04-698d-463e-a373-2b4fba6909e9",
   "metadata": {},
   "source": [
    "#### Testing the API"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "5350fb2a-a259-46c6-a7a4-04d4f0fa8d21",
   "metadata": {
    "scrolled": true
   },
   "outputs": [],
   "source": [
    "from openai import OpenAI\n",
    "client = OpenAI(base_url=\"http://127.0.0.1:9997/v1\", api_key=\"not used actually\")\n",
    "\n",
    "response = client.chat.completions.create(\n",
    "    model=\"my_qwen3_14b\",\n",
    "    messages=[\n",
    "        {\"role\": \"system\", \"content\": \"You are a helpful assistant.\"},\n",
    "        {\"role\": \"user\", \"content\": \"天空为什么是蓝色的?\"}\n",
    "    ]\n",
    ")\n",
    "print(response)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "cbcf1698-5178-43dd-a7b3-b48861ec17b8",
   "metadata": {},
   "source": [
    "Sample output (this capture is from a my_qwen3_4b run):\n",
    "```bash\n",
    "ChatCompletion(id='chatb1148186-c601-11f0-8e24-c63d833e36d7', choices=[Choice(finish_reason='stop', index=0, logprobs=None, message=ChatCompletionMessage(content='<think>\\n好的，用户问我是谁。首先，我需要明确自己的身份。我是通义千问，阿里巴巴集团研发的大型语言模型，隶属于通义实验室。用户可能想了解我的基本功能或用途。要确保回答简洁明了，涵盖主要信息，比如我的身份、研发机构、功能以及应用领域。\\n\\n接下来，用户可能对我的能力感兴趣，比如是否能回答问题、创作内容等。需要提到我支持多种语言，能够处理不同任务，如回答问题、创作文章、编程等。同时，要强调我的训练数据和语言能力，比如基于大量文本数据，支持中文、英文等多语言，以及处理长文本的能力。\\n\\n另外，用户可能想知道我的应用场景，比如日常使用、工作、学习等。需要提到我能够帮助用户解决各种问题，提供信息、建议和创意。还要说明我的使用方式，如通过对话或API接口，让用户知道如何与我互动。\\n\\n同时，要确保回答友好且易于理解，避免使用过于技术化的术语。可能需要检查是否有遗漏的重要信息，比如是否支持多轮对话，或者是否有其他特别的功能。最后，保持回答的结构清晰，分点说明，方便用户快速获取关键信息。\\n</think>\\n\\n我是通义千问，阿里巴巴集团研发的大型语言模型，隶属于通义实验室。我的主要功能是通过自然语言处理技术，帮助用户回答问题、创作内容、编程、数据分析等。我支持多种语言，能够处理长文本，适用于日常交流、工作、学习等多种场景。你可以通过对话或API接口与我互动，我会尽力提供帮助。', refusal=None, role='assistant', annotations=None, audio=None, function_call=None, tool_calls=None))], created=1763637115, model='my_qwen3_4b', object='chat.completion', service_tier=None, system_fingerprint=None, usage=CompletionUsage(completion_tokens=332, prompt_tokens=22, total_tokens=354, completion_tokens_details=None, prompt_tokens_details=None))\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "672aab7b-cc0e-4090-a621-8fcf49b4e373",
   "metadata": {},
   "source": [
    "### Embedding models"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "3f8fb345-6aef-4a99-9d00-35b6fdcf78c2",
   "metadata": {},
   "source": [
    "#### Installing via the Web UI"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "7752969b-b269-4f5d-8a42-e904466d7950",
   "metadata": {},
   "source": [
    "Engine parameters can be set here, e.g. for the selected vllm engine; see: https://vllm.hyper.ai/docs/inference-and-serving/engine_args\n",
    "\n",
    "Additional parameters passed to the inference engine: vllm   \n",
    "When setting these parameters, replace - with _, otherwise an error is raised.\n",
    "\n",
    "For example: gpu-memory-utilization becomes gpu_memory_utilization\n"
   ]
  },
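  {
   "cell_type": "markdown",
   "id": "e0b6d4f5-3c70-4d91-8e54-6f8a0b2c4d5e",
   "metadata": {},
   "source": [
    "The same underscore rule applies when launching a model programmatically: extra keyword arguments to the client's launch_model call are forwarded to the inference engine. A minimal sketch, assuming Xinference's Python client; the model name and the 0.9 value are illustrative:\n",
    "\n",
    "```python\n",
    "from xinference.client import Client\n",
    "\n",
    "client = Client(\"http://localhost:9997\")\n",
    "\n",
    "# Engine arguments use underscores here as well:\n",
    "# gpu-memory-utilization -> gpu_memory_utilization\n",
    "model_uid = client.launch_model(\n",
    "    model_name=\"qwen3\",                # illustrative model name\n",
    "    model_engine=\"vllm\",\n",
    "    gpu_memory_utilization=0.9,        # forwarded to the vllm engine\n",
    ")\n",
    "print(model_uid)\n",
    "```"
   ]
  },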
  {
   "cell_type": "markdown",
   "id": "5fd7e43e-8034-4f87-a7ac-38a2c8d8f53d",
   "metadata": {},
   "source": [
    "![image.png](../assets/xinference-embed1.png)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "19469c25-9c5b-4529-8682-7ef51d19333d",
   "metadata": {},
   "source": [
    "Launched successfully"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "735ed7f8-95ef-4fbc-a1e3-516989b458b5",
   "metadata": {},
   "source": [
    "![image.png](../assets/xinference-embed2.png)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "5469d251-d853-447f-8ffe-aef8b4747abb",
   "metadata": {},
   "source": [
    "#### Testing the API"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "1e01d13f-64cf-4ea5-94a0-5482dafa8fc8",
   "metadata": {
    "scrolled": true
   },
   "outputs": [],
   "source": [
    "from langchain_community.embeddings import XinferenceEmbeddings\n",
    "\n",
    "# Replace with your Xinference server URL and model UID\n",
    "my_emb = XinferenceEmbeddings(\n",
    "    server_url=\"http://localhost:9997\",\n",
    "    model_uid=\"my_qwen_embed_0.6b\"  # replace with the actual model UID\n",
    ")\n",
    "\n",
    "# Text to embed\n",
    "text = \"LangChain 是一个强大的工具。\"\n",
    "\n",
    "# Generate the embedding\n",
    "vector = my_emb.embed_documents([text])[0]  # take the first (and only) result\n",
    "\n",
    "# Print the result\n",
    "print(f\"文本: {text}\")\n",
    "print(f\"嵌入向量前5维: {vector[:5]}...\")\n",
    "print(f\"总维度: {len(vector)}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "118f4693-db68-4c63-9262-a48489ec38b9",
   "metadata": {},
   "source": [
    "Expected output:\n",
    "\n",
    "```bash\n",
    "文本: LangChain 是一个强大的工具。\n",
    "嵌入向量前5维: [0.029530085623264313, 0.043322477489709854, -0.004771255888044834, -0.11111023277044296, -0.016885600984096527]...\n",
    "总维度: 1024\n",
    "```"
   ]
  },
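  {
   "cell_type": "markdown",
   "id": "f1c7e5a6-4d81-4ea2-9f65-7a9b1c3d5e6f",
   "metadata": {},
   "source": [
    "Embedding vectors like the one above are typically compared with cosine similarity. A self-contained sketch, using short made-up vectors in place of the real 1024-dimensional model output:\n",
    "\n",
    "```python\n",
    "import math\n",
    "\n",
    "def cosine_similarity(a, b):\n",
    "    # dot(a, b) / (|a| * |b|)\n",
    "    dot = sum(x * y for x, y in zip(a, b))\n",
    "    norm_a = math.sqrt(sum(x * x for x in a))\n",
    "    norm_b = math.sqrt(sum(x * x for x in b))\n",
    "    return dot / (norm_a * norm_b)\n",
    "\n",
    "v1 = [0.1, 0.2, 0.3, 0.4]\n",
    "v2 = [0.2, 0.4, 0.6, 0.8]   # same direction as v1, similarity ~ 1\n",
    "v3 = [-0.4, 0.3, -0.2, 0.1]\n",
    "\n",
    "print(cosine_similarity(v1, v2))\n",
    "print(cosine_similarity(v1, v3))\n",
    "```\n",
    "\n",
    "Scores close to 1 indicate semantically similar texts; this comparison is the basis of vector retrieval in RAG pipelines."
   ]
  },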
  {
   "cell_type": "markdown",
   "id": "2e553806-80fb-49c6-8e97-9d23e15e4678",
   "metadata": {},
   "source": [
    "### Rerank models"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "6f1b702c-1098-484b-b710-ccee688158da",
   "metadata": {},
   "source": [
    "#### Installing via the Web UI"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "93a2ea90-ddd0-4259-b8d2-568fb6d54ccf",
   "metadata": {},
   "source": [
    "![image.png](../assets/xinference-reranker1.png)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "0634eaf0-9de9-4f49-a458-9d7c8b0b6763",
   "metadata": {},
   "source": [
    "Launched successfully"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c1bc0425-bbec-416a-9ce9-bd6ee4a13cd2",
   "metadata": {},
   "source": [
    "![image.png](../assets/xinference-reranker2.png)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "84dadce8-4257-4b5e-94f0-28734a7d4aa4",
   "metadata": {},
   "source": [
    "#### Testing the API"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "8bc2e96d-e77a-4efa-92b0-368fd9e85d2b",
   "metadata": {
    "scrolled": true
   },
   "outputs": [],
   "source": [
    "from langchain_community.document_compressors import XinferenceRerank\n",
    "from langchain_core.documents import Document\n",
    "\n",
    "# 1. Initialize XinferenceRerank\n",
    "reranker = XinferenceRerank(\n",
    "    server_url=\"http://localhost:9997\",\n",
    "    model_uid=\"my_qwen_reranker_0.6b\"  # replace with the UID of your deployed rerank model\n",
    ")\n",
    "\n",
    "# 2. Prepare test data\n",
    "query = \"LangChain 是什么？\"\n",
    "docs = [\n",
    "    Document(page_content=\"苹果是一种水果，很好吃。\"),\n",
    "    Document(page_content=\"LangChain 是一个用于构建 LLM 应用的框架。\"),\n",
    "    Document(page_content=\"今天天气真不错。\"),\n",
    "]\n",
    "\n",
    "# 3. Rerank: compress_documents returns the documents reranked and truncated\n",
    "#    to the most relevant ones (the reranker's top_n attribute sets how many)\n",
    "compressed_docs = reranker.compress_documents(docs, query)\n",
    "\n",
    "# 4. Print the results\n",
    "print(f\"Query: {query}\\n\")\n",
    "for i, doc in enumerate(compressed_docs):\n",
    "    # the metadata typically contains a 'relevance_score'\n",
    "    score = doc.metadata.get('relevance_score')\n",
    "    print(f\"Rank {i+1} (score: {score}): {doc.page_content}\")"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.13.9"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
