{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "197d4764-c3cf-49c4-b8f7-163486ff9ecc",
   "metadata": {},
   "source": [
    "# 开源模型的本地部署-ollama"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "0e87b22e-bd2b-40a2-aeaf-0ec19cffee0d",
   "metadata": {},
   "source": [
    "## 实践说明\n",
    "\n",
    "本章节聚焦大模型在**裸机环境下的本地化部署实践**，以Ubuntu 22.04操作系统为基础平台，采用小组协作模式进行安装与部署。这种部署模式高度契合实际应用场景：**单一大模型实例，多用户、多团队共享使用**的需求，为企业级部署提供了可复现的参考方案.\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "6ede089f-0ae4-4597-a168-b28a477e14bb",
   "metadata": {},
   "source": [
    "## Ollama\n",
    "\n",
    "在本地运行开源大模型非常的简单，就需要一个软件Ollama，它里面集成了几十个开源大模型。\n",
    "\n",
    "就是这个软件「哦，羊驼」。\n",
    "\n",
    "登录官网： https://ollama.com/ \n",
    "\n",
    "或者登录github:  [https://github.com/ollama/ollama](https://github.com/ollama/ollama)  \n",
    "\n",
    "Ollama支持绝大多数的开源大模型的一键部署：\n",
    "\n",
    "| Model              | Parameters | Size  | Download                         |\n",
    "| ------------------ | ---------- | ----- | -------------------------------- |\n",
    "| Gemma 3            | 1B         | 815MB | `ollama run gemma3:1b`           |\n",
    "| Gemma 3            | 4B         | 3.3GB | `ollama run gemma3`              |\n",
    "| Gemma 3            | 12B        | 8.1GB | `ollama run gemma3:12b`          |\n",
    "| Gemma 3            | 27B        | 17GB  | `ollama run gemma3:27b`          |\n",
    "| QwQ                | 32B        | 20GB  | `ollama run qwq`                 |\n",
    "| DeepSeek-R1        | 7B         | 4.7GB | `ollama run deepseek-r1`         |\n",
    "| DeepSeek-R1        | 671B       | 404GB | `ollama run deepseek-r1:671b`    |\n",
    "| Llama 4            | 109B       | 67GB  | `ollama run llama4:scout`        |\n",
    "| Llama 4            | 400B       | 245GB | `ollama run llama4:maverick`     |\n",
    "| Llama 3.3          | 70B        | 43GB  | `ollama run llama3.3`            |\n",
    "| Llama 3.2          | 3B         | 2.0GB | `ollama run llama3.2`            |\n",
    "| Llama 3.2          | 1B         | 1.3GB | `ollama run llama3.2:1b`         |\n",
    "| Llama 3.2 Vision   | 11B        | 7.9GB | `ollama run llama3.2-vision`     |\n",
    "| Llama 3.2 Vision   | 90B        | 55GB  | `ollama run llama3.2-vision:90b` |\n",
    "| Llama 3.1          | 8B         | 4.7GB | `ollama run llama3.1`            |\n",
    "| Llama 3.1          | 405B       | 231GB | `ollama run llama3.1:405b`       |\n",
    "| Phi 4              | 14B        | 9.1GB | `ollama run phi4`                |\n",
    "| Phi 4 Mini         | 3.8B       | 2.5GB | `ollama run phi4-mini`           |\n",
    "| Mistral            | 7B         | 4.1GB | `ollama run mistral`             |\n",
    "| Moondream 2        | 1.4B       | 829MB | `ollama run moondream`           |\n",
    "| Neural Chat        | 7B         | 4.1GB | `ollama run neural-chat`         |\n",
    "| Starling           | 7B         | 4.1GB | `ollama run starling-lm`         |\n",
    "| Code Llama         | 7B         | 3.8GB | `ollama run codellama`           |\n",
    "| Llama 2 Uncensored | 7B         | 3.8GB | `ollama run llama2-uncensored`   |\n",
    "| LLaVA              | 7B         | 4.5GB | `ollama run llava`               |\n",
    "| Granite-3.3        | 8B         | 4.9GB | `ollama run granite3.3`          |\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "1de75b5d-b657-49f3-a91a-689f450ea2c6",
   "metadata": {},
   "source": [
    "## Ollama推理框架的安装与卸载\n",
    "\n",
    "> 参考官方说明： https://github.com/ollama/ollama/blob/main/docs/linux.md\n",
    ">\n",
    "> 实验环境的裸机上已提前安装好"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "676af254-9b3b-490d-95d3-d4ce18c3a1f9",
   "metadata": {},
   "source": [
    "### 安装\n",
    "\n",
    "**自动安装**\n",
    "\n",
    "安装前加速设置： \n",
    "```bash\n",
    "export OLLAMA_MIRROR=\"https://bgithub.xyz/ollama/ollama/releases/latest/download\"\n",
    "```\n",
    "\n",
    "\n",
    "安装指令:\n",
    "\n",
    "```shell\n",
    "curl -fsSL https://ollama.com/install.sh | sh\n",
    "```\n",
    "\n",
    "```\n",
    "(base) root@server2:/usr/local/bin# curl -fsSL https://ollama.com/install.sh | sh\n",
    ">>> Installing ollama to /usr/local\n",
    ">>> Downloading Linux amd64 bundle\n",
    "######################################################################## 100.0%\n",
    ">>> Creating ollama user...\n",
    ">>> Adding ollama user to render group...\n",
    ">>> Adding ollama user to video group...\n",
    ">>> Adding current user to ollama group...\n",
    ">>> Creating ollama systemd service...\n",
    ">>> Enabling and starting ollama service...\n",
    "Created symlink /etc/systemd/system/default.target.wants/ollama.service → /etc/systemd/system/ollama.service.\n",
    ">>> NVIDIA GPU installed.\n",
    "```\n",
    "\n",
    "\n",
    "\n",
    "**手动安装**\n",
    "\n",
    "Download and extract the package:\n",
    "\n",
    "```shell\n",
    "curl -L https://ollama.com/download/ollama-linux-amd64.tgz -o ollama-linux-amd64.tgz\n",
    "sudo tar -C /usr -xzf ollama-linux-amd64.tgz # 官网的方法，错误！\n",
    "\n",
    "sudo tar -C /usr/local -xvzf ollama-linux-amd64.tgz  # 修正\n",
    "```\n",
    "\n",
    "Start Ollama:\n",
    "\n",
    "```shell\n",
    "(base) root@server2:/usr/local/bnohup ollama serve > output.log 2>&1 &&1 &\n",
    "[2] 172350\n",
    "```\n",
    "\n",
    "In another terminal, verify that Ollama is running:\n",
    "\n",
    "```shell\n",
    "ollama -v\n",
    "```\n",
    "\n",
    "```shell\n",
    "pkill -f \"ollama serve\" \n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "18498603-05a3-4176-a532-3c198988d949",
   "metadata": {},
   "source": [
    "### 卸载\n",
    "\n",
    "Remove the ollama service:\n",
    "\n",
    "```\n",
    "sudo systemctl stop ollama\n",
    "sudo systemctl disable ollama\n",
    "sudo rm /etc/systemd/system/ollama.service\n",
    "```\n",
    "\n",
    "Remove the ollama binary from your bin directory (either `/usr/local/bin`, `/usr/bin`, or `/bin`):\n",
    "\n",
    "```\n",
    "sudo rm $(which ollama)\n",
    "```\n",
    "\n",
    "Remove the downloaded models and Ollama service user and group:\n",
    "\n",
    "```\n",
    "sudo rm -r /usr/share/ollama\n",
    "sudo userdel ollama\n",
    "sudo groupdel ollama\n",
    "```\n",
    "\n",
    "Remove installed libraries:\n",
    "\n",
    "```\n",
    "sudo rm -rf /usr/local/lib/ollama\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "3871b463-6be2-4ba5-a165-544f904059c3",
   "metadata": {},
   "source": [
    "### 升级\n",
    "\n",
    "再次运行之前的安装语句来更新 Ollama：\n",
    "\n",
    "```shell\n",
    "curl -fsSL https://ollama.com/install.sh | sh\n",
    "```\n",
    "\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d3d21272-88ec-4bc7-8fe8-66ffecd38de8",
   "metadata": {},
   "source": [
    "## 预训练大模型部署\n",
    "\n",
    "### 安装开源预训练模型\n",
    "\n",
    "先体验`0.5B`模型，在命令行窗口运行(第一次运行会下载并安装模型)：\n",
    "\n",
    "```shell\n",
    "ollama run deepseek-r1:1.5B\n",
    "```\n",
    "\n",
    "安装完成后，输出提示“end a message (/? for help)，可以随便输入信息。\n",
    "\n",
    "```\n",
    "(base) root@server2:/opt# ollama run deepseek-r1:1.5B\n",
    "pulling manifest \n",
    "pulling aabd4debf0c8... 100% ▕██████████████████████████████████████████████████████████████████████████████████████████████████▏ 1.1 GB                         \n",
    "pulling 369ca498f347... 100% ▕██████████████████████████████████████████████████████████████████████████████████████████████████▏  387 B                         \n",
    "pulling 6e4c38e1172f... 100% ▕██████████████████████████████████████████████████████████████████████████████████████████████████▏ 1.1 KB                         \n",
    "pulling f4d24e9138dd... 100% ▕██████████████████████████████████████████████████████████████████████████████████████████████████▏  148 B                         \n",
    "pulling a85fe2a2e58e... 100% ▕██████████████████████████████████████████████████████████████████████████████████████████████████▏  487 B                         \n",
    "verifying sha256 digest \n",
    "writing manifest \n",
    "success \n",
    ">>> Send a message (/? for help)\n",
    "```\n",
    "\n",
    "\n",
    "\n",
    "查看已安装的模型\n",
    "\n",
    "```text\n",
    "ollama list\n",
    "```"
   ]
  },
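  {
   "cell_type": "markdown",
   "id": "7c9a2f10-3b41-4d52-9e6a-1a2b3c4d5e01",
   "metadata": {},
   "source": [
    "For scripting, the plain-text table printed by `ollama list` can be parsed into records. The sketch below assumes the column layout shown above (NAME, ID, a two-token SIZE, and a free-form MODIFIED field); it is an illustration, not a stable format guarantee:\n",
    "\n",
    "```python\n",
    "def parse_ollama_list(text):\n",
    "    # Skip the header row, then split each data row on whitespace:\n",
    "    # NAME and ID are single tokens, SIZE is two tokens ('1.1 GB'),\n",
    "    # MODIFIED is everything that remains.\n",
    "    models = []\n",
    "    for line in text.strip().splitlines()[1:]:\n",
    "        tokens = line.split()\n",
    "        models.append({\n",
    "            'name': tokens[0],\n",
    "            'id': tokens[1],\n",
    "            'size': ' '.join(tokens[2:4]),\n",
    "            'modified': ' '.join(tokens[4:]),\n",
    "        })\n",
    "    return models\n",
    "\n",
    "sample = '''NAME                       ID              SIZE      MODIFIED\n",
    "deepseek-r1:1.5b           a42b25d8c10a    1.1 GB    17 hours ago'''\n",
    "print(parse_ollama_list(sample))\n",
    "```"
   ]
  },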
  {
   "cell_type": "markdown",
   "id": "e2b451e4-f4d5-4819-bbfa-b87e890f4b54",
   "metadata": {},
   "source": [
    "### Chat（聊天界面）服务\n",
    "\n",
    "安装WebGUI：[open-webui](https://github.com/open-webui/open-webui)\n",
    "\n",
    "首先确保本地有Docker运行环境，然后在终端窗口部署WebUI服务：\n",
    "\n",
    "```text\n",
    "docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main\n",
    "```\n",
    "\n",
    "登录：http://localhost:3001/auth\n",
    "\n",
    "说明文档： https://docs.openwebui.com/\n",
    "\n",
    "如果关联不上ollama部署的大模型，参考**远程访问**章节的设置\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "3c071cbd-282a-4d4d-916b-fe69ef0dc8de",
   "metadata": {},
   "source": [
    "### API（接口服务）\n",
    "\n",
    "```\n",
    "(base) root@server1:~# ollama list \n",
    "NAME                       ID              SIZE      MODIFIED     \n",
    "deepseek-r1:1.5b           a42b25d8c10a    1.1 GB    17 hours ago  \n",
    "```\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "72edefc8-a70e-4ba2-9ad2-5abfe69b163d",
   "metadata": {},
   "source": [
    "#### Curl\n",
    "\n",
    "https://github.com/ollama/ollama/blob/main/docs/api.md](https://link.zhihu.com/?target=https%3A//github.com/ollama/ollama/blob/main/docs/api.md)\n",
    "\n",
    "```bash\n",
    "curl http://localhost:11434/api/chat -d '{\n",
    "  \"model\":\"qwen3:8B\",\n",
    "  \"messages\": [\n",
    "    { \"role\": \"user\", \"content\": \"你是谁?\" }\n",
    "  ],\n",
    "  \"stream\": false\n",
    "}'\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "17072e5b-4613-45e0-8f9f-53472bfd73b5",
   "metadata": {},
   "source": [
    "预期输出：\n",
    "\n",
    "```\n",
    "{\"model\":\"qwen3:8B\",\"created_at\":\"2025-11-20T09:13:28.088727695Z\",\"message\":{\"role\":\"assistant\",\"content\":\"\\u003cthink\\u003e\\n嗯，用户问“你是谁？”，我需要先确定用户想知道什么。可能他们刚接触我，或者想确认我的身份。首先，我应该介绍自己的基本身份，比如我是通义千问，由通义实验室研发。然后要说明我的功能，比如回答问题、创作文字、逻辑推理等。还要提到我的训练数据，比如覆盖广泛的知识领域，但具体细节可能不需要太详细。另外，用户可能想知道我的应用场景，比如日常交流、学习、工作等，所以需要涵盖这些方面。还要注意语气友好，避免使用技术术语，让用户容易理解。可能用户有更深层的需求，比如想了解我的可靠性或使用范围，所以需要简要提到我的应用场景，但不要过于深入。最后，保持回答简洁，同时提供足够的信息，让用户清楚我的能力和用途。\\n\\u003c/think\\u003e\\n\\n我是通义千问，是通义实验室研发的超大规模语言模型。我能够帮助您回答问题、创作文字、逻辑推理、编程、多语言理解等多种任务。我的训练数据覆盖了广泛的领域，可以为您提供信息和建议。如果您有任何问题或需要帮助，欢迎随时告诉我！\"},\"done_reason\":\"stop\",\"done\":true,\"total_duration\":8476061565,\"load_duration\":41703356,\"prompt_eval_count\":11,\"prompt_eval_duration\":16735522,\"eval_count\":239,\"eval_duration\":8415532404}(env_rag) \n",
    "```"
   ]
  },
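  {
   "cell_type": "markdown",
   "id": "8d0b3a21-4c52-4e63-a07b-2b3c4d5e6f02",
   "metadata": {},
   "source": [
    "Reasoning models such as deepseek-r1 and qwen3 wrap their chain of thought in `<think>...</think>` tags, as visible in the output above. When only the final answer is wanted, the leading think block can be stripped. A minimal helper (assumes at most one leading block):\n",
    "\n",
    "```python\n",
    "def strip_think(content):\n",
    "    # Remove a leading <think>...</think> block, if present,\n",
    "    # and return only the final answer text.\n",
    "    end = content.find('</think>')\n",
    "    if content.lstrip().startswith('<think>') and end != -1:\n",
    "        return content[end + len('</think>'):].lstrip()\n",
    "    return content\n",
    "\n",
    "raw = '<think>Reasoning goes here.</think>Final answer.'\n",
    "print(strip_think(raw))\n",
    "```"
   ]
  },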
  {
   "cell_type": "markdown",
   "id": "40eed4cf-8df5-4298-b48b-58f97a3b74a2",
   "metadata": {},
   "source": [
    "\n",
    "#### OpenAI\n",
    "\n",
    "> https://github.com/ollama/ollama/blob/main/docs/openai.md\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "2b38e8a6-7d9e-4539-9d2f-a98442983612",
   "metadata": {},
   "outputs": [],
   "source": [
    "from openai import OpenAI\n",
    "client = OpenAI(\n",
    "    base_url = 'http://localhost:11434/v1/',\n",
    "    api_key = 'ollama'\n",
    ")\n",
    "\n",
    "prompt = '你是谁?'\n",
    "messages = [{\"role\":\"user\", \"content\":prompt}]\n",
    "response = client.chat.completions.create(\n",
    "    model = 'qwen3:8B',\n",
    "    messages = messages,\n",
    "    temperature=0.95\n",
    ")\n",
    "\n",
    "response.choices[0].message.content"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "3cb157df-baaa-4411-9703-a71df8d9765f",
   "metadata": {},
   "source": [
    "预期输出：  \n",
    "```\n",
    "'<think>\\n嗯，用户问“你是谁？”，我需要先明确回答。首先，我应该介绍自己的身份，作为通义千问，由通义实验室研发，阿里云旗下。然后，要说明我的功能和用途，比如回答问题、创作文字、逻辑推理等，同时强调我是一个AI助手，没有真实身份。接下来，我需要用自然的口语化中文表达，避免使用格式和术语，保持简洁。还要注意用户可能的后续问题，比如询问我的能力范围或者如何使用，所以可以主动邀请用户提问，保持对话的开放性。此外，要确保回答友好且有帮助，让用户感到被重视和支持。最后，检查语言是否流畅，避免重复或冗余的信息。这样用户就能清楚了解我的角色，并愿意进一步交流。\\n</think>\\n\\n你好！我是通义千问，由通义实验室研发，阿里云旗下。我是一个大型语言模型，能够帮助你回答问题、创作文字、逻辑推理、编程等。虽然我是一个AI助手，但我没有真实的身份或个人经历。如果你有任何问题或需要帮助，欢迎随时告诉我！'\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f9c4eb10-eb4b-43bf-a3e7-73f7d064527b",
   "metadata": {},
   "source": [
    "#### Langchain"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "03f94378-9238-4f9d-844b-2194572ff5b1",
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain_ollama import OllamaLLM\n",
    "myllm = OllamaLLM(base_url='http://localhost:11434', model='qwen3:8B', temperature=0.1)\n",
    "response = myllm.invoke(\"你好\")\n",
    "response"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c789669b-4364-42f8-be28-f9105928b29c",
   "metadata": {},
   "source": [
    "预期输出： \n",
    "\n",
    "```\n",
    "'<think>\\n嗯，用户发来“你好”，我需要回应。首先，要保持友好和热情。可能用户只是随便打个招呼，或者有其他需求。我应该先回应“你好”，然后询问是否有需要帮助的地方。这样既礼貌又专业。另外，要注意语气要自然，不要太机械。可能用户接下来会问问题，或者需要指导，所以保持开放式的回应比较好。还要检查是否有拼写错误，确保回复正确。总之，回应要简洁、友好，并引导用户进一步说明需求。\\n</think>\\n\\n你好！有什么我可以帮助你的吗？😊'\n",
    "```"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "2ee5a1d6-0636-44e2-8dd0-8c5d2ac97643",
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain_community.llms import Ollama\n",
    "\n",
    "myllm = Ollama(base_url='http://localhost:11434', model='qwen3:8B')\n",
    "\n",
    "response = myllm.invoke(\"你好\")\n",
    "response"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "6d621556-cc83-46a7-9137-a94e74860889",
   "metadata": {},
   "source": [
    "预期输出： \n",
    "\n",
    "```\n",
    "'<think>\\n嗯，用户发来“你好”，我需要回应。首先，要保持友好和热情。可能用户只是随便打个招呼，或者有其他需求。我应该先回应“你好”，然后询问是否有需要帮助的地方。这样既礼貌又专业。另外，要注意语气要自然，不要太机械。可能用户接下来会问问题，或者需要指导，所以保持开放式的回应比较好。还要检查是否有拼写错误，确保回复正确。总之，回应要简洁、友好，并引导用户进一步说明需求。\\n</think>\\n\\n你好！有什么我可以帮助你的吗？😊'\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "33df5def-2c4c-4303-99cc-7b4987415f5b",
   "metadata": {},
   "source": [
    "### 远程访问配置\n",
    "\n",
    "配置Ollama允许远程连接：\n",
    "\n",
    "在服务器上需要修改Ollama的配置来允许远程访问：\n",
    "\n",
    "配置OLLAMA_ORIGINS为任意地址，它表示允许访问的来源地址。\n",
    "\n",
    "```\n",
    "sudo vim /etc/systemd/system/ollama.service\n",
    "```\n",
    "\n",
    "添加\n",
    "\n",
    "```\n",
    "Environment=\"OLLAMA_HOST=0.0.0.0:11434\"\n",
    "Environment=\"OLLAMA_ORIGINS=*\"\n",
    "```\n",
    "\n",
    "结果：\n",
    "\n",
    "```python\n",
    "[Unit]\n",
    "Description=Ollama Service\n",
    "After=network-online.target\n",
    "\n",
    "[Service]\n",
    "ExecStart=/usr/local/bin/ollama serve\n",
    "User=ollama\n",
    "Group=ollama\n",
    "Restart=always\n",
    "RestartSec=3\n",
    "Environment=\"PATH=/opt/anaconda3/bin:/opt/anaconda3/condabin:/opt/anaconda3/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/usr/local/cuda-12.2/bin\"\n",
    "Environment=\"OLLAMA_HOST=0.0.0.0:11434\"  # 监听所有IP，端口默认为11434\n",
    "Environment=\"OLLAMA_ORIGINS=*\"           # 允许所有跨域请求\n",
    "\n",
    "[Install]\n",
    "WantedBy=default.target\n",
    "```\n",
    "\n",
    "重启服务：\n",
    "\n",
    "```\n",
    "sudo systemctl daemon-reload\n",
    "sudo systemctl restart ollama\n",
    "```\n",
    "\n",
    "测试： \n",
    "\n",
    "```\n",
    "curl http://129.201.70.35:11434/api/chat -d '{\n",
    "  \"model\":\"deepseek-r1:1.5b\",\n",
    "  \"messages\": [\n",
    "    { \"role\": \"user\", \"content\": \"你是谁?\" }\n",
    "  ],\n",
    "  \"stream\": false\n",
    "}'\n",
    "```"
   ]
  },
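  {
   "cell_type": "markdown",
   "id": "9e1c4b32-5d63-4f74-b18c-3c4d5e6f7a03",
   "metadata": {},
   "source": [
    "The curl request above is plain JSON over HTTP, so any HTTP client can issue it. A sketch of building the same request body in Python (the server address is a placeholder; actually sending it requires a reachable Ollama instance):\n",
    "\n",
    "```python\n",
    "import json\n",
    "\n",
    "def build_chat_payload(model, user_content, stream=False):\n",
    "    # Same body as the curl -d example above.\n",
    "    return {\n",
    "        'model': model,\n",
    "        'messages': [{'role': 'user', 'content': user_content}],\n",
    "        'stream': stream,\n",
    "    }\n",
    "\n",
    "payload = build_chat_payload('deepseek-r1:1.5b', 'Who are you?')\n",
    "body = json.dumps(payload)  # POST this to http://<server>:11434/api/chat\n",
    "print(body)\n",
    "```"
   ]
  },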
  {
   "cell_type": "markdown",
   "id": "bb41df00-9334-4e6a-817b-e09e1054e635",
   "metadata": {},
   "source": [
    "### 安装自定义模型模型\n",
    "\n",
    "现在有很多人已经基于llama3进行中文适配训练，毕竟中文在llama3的训练数据中仅占很小的比例，对于中文的理解和回答是不能令人满意的。\n",
    "\n",
    "看了网上的介绍，下载Llama3-8B-Chinese-Chat.q4_k_m.GGUF来实验，下载地址：\n",
    "\n",
    "[https://huggingface.co/zhouzr/Llama3-8B-Chinese-Chat-GGUF/tree/main](https://link.zhihu.com/?target=https%3A//huggingface.co/zhouzr/Llama3-8B-Chinese-Chat-GGUF/tree/main)\n",
    "\n",
    "下载q4_k_m版本，4.92GB。\n",
    "\n",
    "编写model file文件\n",
    "\n",
    "```text\n",
    "FROM ./Llama3-8B-Chinese-Chat.q4_k_m.GGUF\n",
    "TEMPLATE \"\"\"{{ if .System }}<|start_header_id|>system<|end_header_id|>\n",
    "\n",
    "{{ .System }}<|eot_id|>{{ end }}{{ if .Prompt }}<|start_header_id|>user<|end_header_id|>\n",
    "\n",
    "{{ .Prompt }}<|eot_id|>{{ end }}<|start_header_id|>assistant<|end_header_id|>\n",
    "\n",
    "{{ .Response }}<|eot_id|>\"\"\"\n",
    "PARAMETER stop \"<|start_header_id|>\"\n",
    "PARAMETER stop \"<|end_header_id|>\"\n",
    "PARAMETER stop \"<|eot_id|>\"\n",
    "PARAMETER stop \"<|reserved_special_token\"\n",
    "```\n",
    "\n",
    "执行ollama create llama3-Chinese:8B -f Modelfile创建模型：\n",
    "\n",
    "```text\n",
    "# ollama create llama3-Chinese:8B -f Modelfile\n",
    "transferring model data\n",
    "creating model layer\n",
    "creating template layer\n",
    "creating parameters layer\n",
    "creating config layer\n",
    "using already created layer sha256:74db82a06a038230371e62740a9b430140e4df3a02c5ddcbe97c9bee76d6455e\n",
    "writing layer sha256:8ab4849b038cf0abc5b1c9b8ee1443dca6b93a045c2272180d985126eb40bf6f\n",
    "writing layer sha256:c0aac7c7f00d8a81a8ef397cd78664957fbe0e09f87b08bc7afa8d627a8da87f\n",
    "writing layer sha256:109fb4827ddd6f21dd04a405dec5e1c9e39cf139e89b98536875a782938c02f5\n",
    "writing manifest\n",
    "success\n",
    "```\n",
    "\n",
    "执行命令查看模型导入情况：\n",
    "\n",
    "```text\n",
    "# ollama list\n",
    "NAME                            ID              SIZE    MODIFIED\n",
    "llama3-Chinese:8B               e45ad8ada59e    4.9 GB  33 seconds ago\n",
    "```\n"
   ]
  },
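  {
   "cell_type": "markdown",
   "id": "af2d5c43-6e74-4a85-b29d-4d5e6f7a8b04",
   "metadata": {},
   "source": [
    "The `TEMPLATE` in the Modelfile above is a Go template that assembles the Llama 3 chat format. To make the result concrete, here is a rough Python equivalent that renders the same prompt layout (illustrative only; Ollama performs this rendering itself):\n",
    "\n",
    "```python\n",
    "def render_llama3_prompt(prompt, system=None):\n",
    "    # Mirrors the Go template: an optional system block, the user block,\n",
    "    # then the assistant header that the model completes after.\n",
    "    parts = []\n",
    "    if system:\n",
    "        parts.append('<|start_header_id|>system<|end_header_id|>\\n\\n'\n",
    "                     + system + '<|eot_id|>')\n",
    "    parts.append('<|start_header_id|>user<|end_header_id|>\\n\\n'\n",
    "                 + prompt + '<|eot_id|>')\n",
    "    parts.append('<|start_header_id|>assistant<|end_header_id|>\\n\\n')\n",
    "    return ''.join(parts)\n",
    "\n",
    "print(render_llama3_prompt('Hello', system='You are a helpful assistant.'))\n",
    "```"
   ]
  },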
  {
   "cell_type": "markdown",
   "id": "28f41d53-d206-4ec0-9dbd-445b5346653e",
   "metadata": {},
   "source": [
    "## Embedding模型部署\n",
    "\n",
    "### 安装\n",
    "\n",
    "> https://ollama.com/dengcao/Qwen3-Embedding-0.6B\n",
    "\n",
    "```\n",
    "(base) root@server5:~# ollama pull dengcao/Qwen3-Embedding-0.6B:F16\n",
    "pulling manifest \n",
    "pulling 970aa74c0a90... 100% ▕████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▏ 274 MB                         \n",
    "pulling c71d239df917... 100% ▕████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▏  11 KB                         \n",
    "pulling ce4a164fc046... 100% ▕████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▏   17 B                         \n",
    "pulling 31df23ea7daa... 100% ▕████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▏  420 B                         \n",
    "verifying sha256 digest \n",
    "writing manifest \n",
    "success \n",
    "(base) root@server5:~# \n",
    "```\n",
    "\n",
    "\n",
    "\n",
    "```\n",
    "(base) root@server5:~# ollama list\n",
    "NAME                                ID              SIZE      MODIFIED\n",
    "dengcao/Qwen3-Reranker-0.6B:F16     d9cf33bea10f    1.2 GB    5 months ago\n",
    "dengcao/Qwen3-Embedding-0.6B:F16    68d659a5c2ee    1.2 GB    5 months ago\n",
    "qwen3:8B                            500a1f067a9f    5.2 GB    5 months ago\n",
    "\n",
    "```\n",
    "\n",
    "\n",
    "\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "0b130083-e817-471c-8b8f-c2cb1fc356bc",
   "metadata": {},
   "source": [
    "\n",
    "\n",
    "### Curl测试\n",
    "\n",
    "> https://github.com/ollama/ollama/blob/main/docs/api.md#generate-embeddings\n",
    "\n",
    "```bash\n",
    "curl http://localhost:11434/api/embed -d '{\n",
    "  \"model\": \"dengcao/Qwen3-Embedding-0.6B:F16\",\n",
    "  \"input\": \"Why is the sky blue?\"\n",
    "}'\n",
    "```\n",
    "\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "270af45c-8787-4fee-99ea-21165014836b",
   "metadata": {},
   "source": [
    "预期输出： \n",
    "\n",
    "```\n",
    "{\"model\":\"dengcao/Qwen3-Embedding-0.6B:F16\",\"embeddings\":[[-0.024212914,-0.0036034752,-0.0067288657,0.06650669,0.03005451,0.090053946,0.015858281,-0.21365112,-0.13126618,-0.0033140276,0.0019760393,0.0061366744,-0.022710085,-0.0065573715,-0.04292852,0.09354305,0.0028000872,0.063248575,0.0881149,-0.07953675,0.0013638529,0.016952239,-0.036346942,0.06211815,0.029141126,-0.026183706,-0.068801254,0.032845736,-0.0033623953,-0.031059679,-0.020542251,-0.02875844,-0.03017207,0.043033093,0.017564341,-0.005353229,0.041816324,0.050767373,0.031978868,0.008317508,0.031363726,0.031027935,0.017945755,0.03377337,-0.031868197,0.026068276,0.021484315,-0.01175275,-0.035628956,-0.00026370864,-0.013512322,-0.038125627,-0.00065319607,-0.011997946,0.020937404,0.041086588,0.012041028,0.024388853,0.009645368,-0.037825808,-0.015601743,0.0013070282,-0.034948338,0.051471844,-0.024393775,0.031639513,0.009485701,0.0011990265,-0.0107458765,0.00023653124,0.026574973,-0.013622836,-0.022438375,0.025215205,-0.021615023,-0.016403833,0.017752318,-0.06029563,0.0038009998,-0.021539925,-0.020726826,0.09825575,0.012999661,0.01874199,-0.020299478,0.04329392,0.041219864,-0.025921639,-0.019559747,0.008083862,0.0647731,0.02603256,-0.06266826,0.004751465,-0.009375853,-0.022190154,\n",
    "```\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "e439780c-3ec4-4892-bf8b-31c47ef3f4ce",
   "metadata": {},
   "source": [
    "### LangChain接入"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "ee9b60eb-619d-4317-abd6-04dbacd2b6b0",
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain_community.embeddings import OllamaEmbeddings\n",
    "ollama_emb = OllamaEmbeddings(\n",
    "    base_url='http://localhost:11434',\n",
    "    model=\"dengcao/Qwen3-Embedding-0.6B:F16\"\n",
    ")\n",
    "r1 = ollama_emb.embed_documents(\n",
    "    [\n",
    "        \"Alpha is the first letter of Greek alphabet\",\n",
    "        \"Beta is the second letter of Greek alphabet\",\n",
    "    ]\n",
    ")\n",
    "r2 = ollama_emb.embed_query(\n",
    "    \"What is the second letter of Greek alphabet\"\n",
    ")\n",
    "\n",
    "# 打印结果\n",
    "print(\"Document embeddings:\")\n",
    "for i, embedding in enumerate(r1):\n",
    "    print(f\"Document {i+1} embedding: {embedding[:10]}\")\n",
    "\n",
    "print(\"\\nQuery embedding:\")\n",
    "print(f\"Query embedding: {r2[:10]}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "e27b3a61-41f1-47ed-9663-8370a2edcd12",
   "metadata": {},
   "source": [
    "预期输出：  \n",
    "```bash\n",
    "Document embeddings:\n",
    "Document 1 embedding: [2.1825003623962402, -7.665013790130615, -0.20829367637634277, -0.6914094686508179, -2.6871891021728516, 7.04680871963501, 1.9117052555084229, 3.032599925994873, 1.2577763795852661, 5.880715370178223]\n",
    "Document 2 embedding: [1.082127332687378, -10.395365715026855, -0.554850161075592, -1.6366194486618042, 0.3179287016391754, 7.850982189178467, -1.3553366661071777, 3.218174695968628, 0.503334641456604, 4.044854164123535]\n",
    "\n",
    "Query embedding:\n",
    "Query embedding: [0.7879995107650757, -8.500425338745117, -0.4155329763889313, -4.439454555511475, -0.7503153085708618, 4.991929054260254, 0.15705017745494843, -2.4760496616363525, 3.077166795730591, 2.310847759246826]\n",
    "```"
   ]
  },
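  {
   "cell_type": "markdown",
   "id": "b03e6d54-7f85-4b96-a3ae-5e6f7a8b9c05",
   "metadata": {},
   "source": [
    "Embedding vectors are compared with a similarity measure; cosine similarity is the usual choice for retrieval. A dependency-free sketch (the vectors below are toy values; in practice they would be the outputs of `embed_documents` / `embed_query` above):\n",
    "\n",
    "```python\n",
    "import math\n",
    "\n",
    "def cosine_similarity(a, b):\n",
    "    # cos(a, b) = (a . b) / (|a| * |b|)\n",
    "    dot = sum(x * y for x, y in zip(a, b))\n",
    "    norm_a = math.sqrt(sum(x * x for x in a))\n",
    "    norm_b = math.sqrt(sum(x * x for x in b))\n",
    "    return dot / (norm_a * norm_b)\n",
    "\n",
    "print(cosine_similarity([1.0, 0.0], [1.0, 0.0]))  # same direction -> 1.0\n",
    "print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))  # orthogonal -> 0.0\n",
    "```"
   ]
  },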
  {
   "cell_type": "markdown",
   "id": "b9aedbaf-e9fb-46d0-bdb2-adc1bc5faeb1",
   "metadata": {},
   "source": [
    "## Reranker模型部署\n",
    "\n",
    "### 安装\n",
    "\n",
    "> https://ollama.com/dengcao/Qwen3-Reranker-0.6B\n",
    "\n",
    "```bash\n",
    "(base) root@server1:~# ollama pull linux6200/bge-reranker-v2-m3\n",
    "pulling manifest \n",
    "pulling 970aa74c0a90... 100% ▕████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▏ 274 MB                         \n",
    "pulling c71d239df917... 100% ▕████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▏  11 KB                         \n",
    "pulling ce4a164fc046... 100% ▕████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▏   17 B                         \n",
    "pulling 31df23ea7daa... 100% ▕████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▏  420 B                         \n",
    "verifying sha256 digest \n",
    "writing manifest \n",
    "success \n",
    "```\n",
    "\n",
    "\n",
    "\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "c56916ab-879b-4856-9c65-ca5a5050ed01",
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain_ollama import OllamaRerank\n",
    "\n",
    "# 假设你在 Ollama 里已经 pull 了你的 rerank 模型\n",
    "# ollama pull bge-reranker-v2\n",
    "reranker = OllamaRerank(\n",
    "    model=\"dengcao/Qwen3-Reranker-0.6B:F16\",  \n",
    "    url=\"http://localhost:11434\",  # 默认端口\n",
    ")\n",
    "\n",
    "query = \"中国最好的旅游城市有哪些？\"\n",
    "\n",
    "docs = [\n",
    "    \"北京有丰富的文化古迹，是热门旅游城市。\",\n",
    "    \"很多人认为上海是中国最国际化的城市，旅游也很方便。\",\n",
    "    \"广州的饮食文化很丰富，但旅游热度不如北京上海。\",\n",
    "]\n",
    "\n",
    "result = reranker.compress_documents(docs, query)\n",
    "\n",
    "for r in result:\n",
    "    print(r.metadata[\"relevance_score\"], r.page_content)\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f6a0aae0-1f1d-42ac-af08-1e13c4e61b7f",
   "metadata": {},
   "source": [
    "预期输出：\n",
    "\n",
    "```\n",
    "0.93 北京有丰富的文化古迹...\n",
    "0.90 上海是中国最国际化...\n",
    "0.72 广州的饮食文化...\n",
    "\n",
    "```"
   ]
  },
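  {
   "cell_type": "markdown",
   "id": "c14f7e65-8a96-4ca7-b4bf-6f7a8b9c0d06",
   "metadata": {},
   "source": [
    "Whatever client library performs the scoring, the core of reranking is: score each (query, document) pair, then sort documents by score. A minimal sketch of that final step (the scores here are made up; in practice they come from the reranker model):\n",
    "\n",
    "```python\n",
    "def rank_documents(scored_docs, top_k=None):\n",
    "    # scored_docs: list of (relevance_score, document) pairs.\n",
    "    # Returns the pairs sorted from most to least relevant.\n",
    "    ordered = sorted(scored_docs, key=lambda pair: pair[0], reverse=True)\n",
    "    if top_k is not None:\n",
    "        ordered = ordered[:top_k]\n",
    "    return ordered\n",
    "\n",
    "scored = [(0.72, 'doc about Guangzhou'),\n",
    "          (0.93, 'doc about Beijing'),\n",
    "          (0.90, 'doc about Shanghai')]\n",
    "for score, doc in rank_documents(scored, top_k=2):\n",
    "    print(score, doc)\n",
    "```"
   ]
  },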
  {
   "cell_type": "markdown",
   "id": "4294a3e6-6b52-4b9a-b315-9e43ea04845b",
   "metadata": {},
   "source": [
    "### LangChain接入(新版本待验证)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "6e01eda0-075f-40da-a203-82724fb9aeba",
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain_ollama import OllamaRerank\n",
    "\n",
    "# 假设你在 Ollama 里已经 pull 了你的 rerank 模型\n",
    "# ollama pull bge-reranker-v2\n",
    "reranker = OllamaRerank(\n",
    "    model=\"你的模型名\",   # 如 bge-reranker-v2\n",
    "    url=\"http://localhost:11434\",  # 默认端口\n",
    ")\n",
    "\n",
    "query = \"中国最好的旅游城市有哪些？\"\n",
    "\n",
    "docs = [\n",
    "    \"北京有丰富的文化古迹，是热门旅游城市。\",\n",
    "    \"很多人认为上海是中国最国际化的城市，旅游也很方便。\",\n",
    "    \"广州的饮食文化很丰富，但旅游热度不如北京上海。\",\n",
    "]\n",
    "\n",
    "result = reranker.compress_documents(docs, query)\n",
    "\n",
    "for r in result:\n",
    "    print(r.metadata[\"relevance_score\"], r.page_content)\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "1513aea3-6f2d-4239-9a10-74e679389e97",
   "metadata": {},
   "source": [
    "## 补充： 指定GPU\n",
    "\n",
    "对于本地部署的大模型，如果想要指定在某张或者某几张卡上执行的话。\n",
    "\n",
    "1.添加环境变量\n",
    "\n",
    "```\n",
    "sudo vim /etc/systemd/system/ollama.service\n",
    "```\n",
    "\n",
    "在[Service]下面增加一行，这里我指定只用第三张GPU，则添加\n",
    "\n",
    "```\n",
    "Environment=\"CUDA_VISIBLE_DEVICES=2\"\n",
    "```\n",
    "\n",
    "保存，退出\n",
    "\n",
    "2.重启ollama服务\n",
    "\n",
    "```\n",
    "systemctl daemon-reload\n",
    "systemctl restart ollama\n",
    "```\n",
    "\n",
    "3.测试效果\n",
    "ollama run 你的模型\n",
    "\n",
    "再打开一个窗口，然后执行 nvidia-smi 查看是否在指定的GPU上工作\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "a8da83df-b4b4-4a06-a0e2-dd084f0342c5",
   "metadata": {},
   "source": [
    "## 补充window环境下安装\n",
    "\n",
    "1. 下载安装包：[https://ollama.com/download/Ollama-darwin.zip](https://link.zhihu.com/?target=https%3A//ollama.com/download/Ollama-darwin.zip)\n",
    "\n",
    "2. 解压后运行：Ollama，初始化环境。\n",
    "\n",
    "3. 注意：\n",
    "\n",
    "   window上安装默认是在C盘，可以修改： \n",
    "\n",
    "```\n",
    "PS D:\\> .\\OllamaSetup.exe /DIR=\"E:\\ProgramData\"\n",
    "```\n",
    "\n",
    "4. 修改Ollama模型的存储路径\n",
    "\n",
    "安装后建议修改Ollama模型的存储路径，可以通过设置系统环境变量OLLAMA_MODELS来实现。具体步骤如下：\n",
    "\n",
    "    1. 按下Win + R键，输入sysdm.cpl，然后按Enter键，打开“系统属性”窗口。或：此电脑，属性，系统，高级系统设置\n",
    "    2. 在“系统属性”窗口中，点击“环境变量”按钮。\n",
    "    3. 在“环境变量”窗口中，点击“新建”按钮。\n",
    "    4. 在“新建系统变量”窗口中，输入变量名OLLAMA_MODELS，变量值为你希望的模型存储路径（例如E:\\Ollama\\Models），然后点击“确定”。\n",
    "    5. 点击“确定”保存环境变量设置。\n"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.13.9"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
