{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "6bd1f2cd-463f-44ec-adfe-33270a82791f",
   "metadata": {},
   "source": [
    "# Local Deployment of Open-Source Models with vLLM"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d76c1f3b-a610-40be-ba83-8d3d959e427f",
   "metadata": {},
   "source": [
    "## Practice Overview\n",
    "\n",
    "This chapter focuses on **bare-metal local deployment of large models**, using Ubuntu 22.04 as the base platform and working in small collaborative groups for installation and deployment. This setup closely matches a common real-world requirement: **a single large-model instance shared by multiple users and teams**, and it provides a reproducible reference for enterprise-grade deployment."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "dd6c8458-5020-4f72-be7d-d770e86bcdf1",
   "metadata": {
    "jp-MarkdownHeadingCollapsed": true
   },
   "source": [
    "## Installing vLLM\n",
    "\n",
    "```shell\n",
    "# (Recommended) Create a new conda environment.\n",
    "conda create -n env_vllm python=3.12 -y\n",
    "conda activate env_vllm\n",
    "\n",
    "# Register the environment as a Jupyter kernel.\n",
    "pip install ipykernel\n",
    "python -m ipykernel install --user --name=env_vllm --display-name \"Python3 (env_vllm)\"\n",
    "\n",
    "# Install vLLM with CUDA support.\n",
    "pip install vllm\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "16516dd7-8c5b-443e-bd16-fff39f1bbf15",
   "metadata": {},
   "source": [
    "## Model Download"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "b8051d43-c9f9-484c-8661-b0ab598ea10d",
   "metadata": {
    "jp-MarkdownHeadingCollapsed": true
   },
   "source": [
    "Download models via ModelScope (mirror-accelerated within mainland China):\n",
    "\n",
    "```shell\n",
    "pip install modelscope\n",
    "modelscope download --model <model-name> --cache_dir <local-path>\n",
    "```\n",
    "\n",
    "\n",
    "```shell\n",
    "modelscope download --model Qwen/Qwen3-4B --cache_dir /workspace/models/\n",
    "```\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d4416ebf-6c40-4909-ae72-e63d0a9b4b6b",
   "metadata": {},
   "source": [
    "## vllm serve Engine Arguments"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "dd5d26f9-18bb-4f18-89f2-9e7ff7af47f4",
   "metadata": {},
   "source": [
    "Reference for vLLM engine arguments: https://vllm.hyper.ai/docs/inference-and-serving/engine_args"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "0534a7f4-4e42-49a2-b334-8b87a67a77c5",
   "metadata": {},
   "source": [
    "\n",
    "1. Core model configuration\n",
    "\n",
    "```shell\n",
    "--model facebook/opt-125m (default)  HuggingFace model name or local path\n",
    "--task auto/generate/embedding/classify (default auto)  Model task type\n",
    "--dtype auto/float16/bfloat16/float32  Weight precision (auto picks automatically)\n",
    "--load-format auto/safetensors/bitsandbytes/gguf (default auto)  Weight loading format\n",
    "--quantization awq/gptq/fp8/bitsandbytes (25 methods in total)  Quantization method (read from the model config if unset)\n",
    "--max-model-len <n> (e.g. 4096)  Context length (derived automatically by default)\n",
    "```\n",
    "\n",
    "2. Parallelism and memory\n",
    "```shell\n",
    "--tensor-parallel-size 1 (default)  Number of tensor-parallel replicas (increase for multi-GPU)\n",
    "--pipeline-parallel-size 1 (default)  Number of pipeline-parallel stages\n",
    "--block-size 8/16/32 (CUDA supports up to 32)  KV-cache block size (affects memory and throughput)\n",
    "--gpu-memory-utilization 0.9 (default)  Fraction of GPU memory to use (0~1; 0.8 means 80%)\n",
    "--swap-space 4 (default, in GiB)  CPU swap space per GPU\n",
    "--cpu-offload-gb 0 (default)  GPU-to-CPU offload space (0 means no offloading)\n",
    "```\n",
    "\n",
    "3. Speculative decoding\n",
    "```shell\n",
    "--speculative-model tiny-llama-1b  Draft model path (accelerates the main model)\n",
    "--num-speculative-tokens 5  Number of tokens proposed per speculation step\n",
    "--speculative-disable-by-batch-size 10  Disable speculative decoding when queued requests exceed this value\n",
    "--spec-decoding-acceptance-method rejection_sampler (default)  Verification method for speculative tokens (typical_acceptance_sampler also supported)\n",
    "```\n",
    "\n",
    "4. Adapters and extensions\n",
    "```shell\n",
    "--enable-lora True/False  Enable LoRA adapters\n",
    "--max-lora-rank 16 (default)  Maximum LoRA rank (affects adapter parameter count)\n",
    "--lora-dtype auto/float16/bfloat16  LoRA weight precision\n",
    "--enable-prompt-adapter True/False  Enable dynamic prompt adapters\n",
    "--enable-reasoning True/False  Enable reasoning-content generation (requires --reasoning-parser)\n",
    "```\n",
    "\n",
    "5. Scheduling and performance\n",
    "```shell\n",
    "--scheduling-policy fcfs/priority (default fcfs)  Request scheduling policy (first-come-first-served or priority)\n",
    "--max-num-batched-tokens 2048  Maximum number of tokens per batch (affects throughput)\n",
    "--enable-chunked-prefill True/False  Enable chunked prefill (long-input optimization)\n",
    "--use-v2-block-manager (deprecated)  New block manager (now enabled by default)\n",
    "```\n",
    "\n",
    "6. Multimodal and security\n",
    "```shell\n",
    "--limit-mm-per-prompt image=16,video=2  Multimodal input limits (e.g. number of images/videos)\n",
    "--allowed-local-media-path /data/images  Local file paths the API may access (set with care)\n",
    "--trust-remote-code True/False  Trust remote code from HuggingFace (e.g. custom models)\n",
    "```\n",
    "\n",
    "Example configuration for a typical scenario:\n",
    "\n",
    "7B model + 4-bit (AWQ) quantization + 2-GPU tensor parallelism\n",
    "```shell\n",
    "vllm serve --model meta-llama/Llama-2-7b \\\n",
    "           --quantization awq \\\n",
    "           --tensor-parallel-size 2 \\\n",
    "           --gpu-memory-utilization 0.8\n",
    "```"
   ]
  },
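  {
   "cell_type": "markdown",
   "id": "3f1c9a70-1a2b-4c3d-8e4f-aaaaaaaa0001",
   "metadata": {},
   "source": [
    "A small helper (hypothetical, for illustration only; `build_serve_command` is not part of vLLM) shows how keyword-style engine arguments compose into a `vllm serve` command line:\n",
    "\n",
    "```python\n",
    "# Hypothetical helper: compose a `vllm serve` command line from engine\n",
    "# arguments. Flag names follow the list above; not part of vLLM itself.\n",
    "import shlex\n",
    "\n",
    "def build_serve_command(model: str, **args: object) -> list[str]:\n",
    "    cmd = [\"vllm\", \"serve\", model]\n",
    "    for key, value in args.items():\n",
    "        flag = \"--\" + key.replace(\"_\", \"-\")\n",
    "        if value is True:  # boolean flags take no value\n",
    "            cmd.append(flag)\n",
    "        else:\n",
    "            cmd.extend([flag, str(value)])\n",
    "    return cmd\n",
    "\n",
    "cmd = build_serve_command(\n",
    "    \"meta-llama/Llama-2-7b\",\n",
    "    quantization=\"awq\",\n",
    "    tensor_parallel_size=2,\n",
    "    gpu_memory_utilization=0.8,\n",
    ")\n",
    "print(shlex.join(cmd))\n",
    "```"
   ]
  },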
  {
   "cell_type": "markdown",
   "id": "ccb33c11-35f0-47b8-8673-d1820517100c",
   "metadata": {},
   "source": [
    "## LLM Deployment with vLLM: Bare Metal"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "a8ff0ba4-596a-4efd-9ad7-b9407a66500a",
   "metadata": {},
   "source": [
    "### Starting the Service with vllm serve"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "336dc2f2-908d-44a0-99c3-8e58d8685772",
   "metadata": {},
   "source": [
    "Parameter reference: https://vllm.hyper.ai/docs/serving/openai-compatible-server"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "1fc93ab6-7d1f-4679-ba97-7af5c0e6af38",
   "metadata": {},
   "source": [
    "Single GPU\n",
    "\n",
    "```shell\n",
    "CUDA_VISIBLE_DEVICES=0 \\\n",
    "vllm serve /workspace/models/Qwen/Qwen3-4B \\\n",
    "--port 8082 \\\n",
    "--max-model-len 16384 \\\n",
    "--tensor-parallel-size 1 \\\n",
    "--trust-remote-code \\\n",
    "--served-model-name my_qwen3_4b \\\n",
    "--dtype=half \\\n",
    "--enable-auto-tool-choice \\\n",
    "--tool-call-parser hermes \\\n",
    "--reasoning-parser deepseek_r1 \\\n",
    "--gpu-memory-utilization 0.8 \\\n",
    "--api-key token-abc123\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "668fc64d-13b1-481a-a520-d8ce56ecfe9e",
   "metadata": {},
   "source": [
    "Multiple GPUs\n",
    "\n",
    "```shell\n",
    "CUDA_VISIBLE_DEVICES=1,2 \\\n",
    "vllm serve /workspace/models/Qwen/Qwen3-4B \\\n",
    "--port 8082 \\\n",
    "--max-model-len 16384 \\\n",
    "--tensor-parallel-size 2 \\\n",
    "--trust-remote-code \\\n",
    "--served-model-name my_qwen3_4b \\\n",
    "--dtype=half \\\n",
    "--enable-auto-tool-choice \\\n",
    "--tool-call-parser hermes \\\n",
    "--gpu-memory-utilization 0.8 \\\n",
    "--reasoning-parser deepseek_r1 \\\n",
    "--api-key token-abc123\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "71678e69-2d54-4763-907f-9177bc0e590d",
   "metadata": {},
   "source": [
    "\n",
    "\n",
    "If GPU memory is insufficient, the following command will fail; using the 32B model as an example:\n",
    "\n",
    "```shell\n",
    "PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True \\\n",
    "CUDA_VISIBLE_DEVICES=0,1,2,3 \\\n",
    "vllm serve /workspace/models/Qwen/Qwen3-32B \\\n",
    "--port 8082 \\\n",
    "--max-model-len 16384 \\\n",
    "--tensor-parallel-size 4 \\\n",
    "--trust-remote-code \\\n",
    "--served-model-name my_qwen3_32b \\\n",
    "--dtype=half \\\n",
    "--enable-auto-tool-choice \\\n",
    "--tool-call-parser hermes \\\n",
    "--reasoning-parser deepseek_r1 \\\n",
    "--api-key token-abc123\n",
    "```\n",
    "\n",
    "Solution: add the `--cpu-offload-gb` parameter.\n",
    "\n",
    "`--cpu-offload-gb`: the space (in GiB) per GPU for offloading to the CPU. The default is 0, meaning no offloading. Intuitively, this parameter can be seen as a way to virtually enlarge GPU memory. For example, if you have a 24 GB GPU and set this to 10, you can effectively treat it as a 34 GB GPU, which lets you load a 13B model with BF16 weights that requires at least 26 GB of GPU memory. Note that this needs a fast CPU-GPU interconnect, because part of the model is loaded dynamically from CPU memory into GPU memory during each forward pass.\n",
    "\n",
    "```shell\n",
    "PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True \\\n",
    "CUDA_VISIBLE_DEVICES=0,1,2,3 \\\n",
    "vllm serve /workspace/models/Qwen/Qwen3-32B \\\n",
    "--port 8082 \\\n",
    "--max-model-len 16384 \\\n",
    "--tensor-parallel-size 4 \\\n",
    "--cpu-offload-gb 30 \\\n",
    "--trust-remote-code \\\n",
    "--served-model-name my_qwen3_32b \\\n",
    "--dtype=half \\\n",
    "--enable-auto-tool-choice \\\n",
    "--tool-call-parser hermes \\\n",
    "--reasoning-parser deepseek_r1 \\\n",
    "--api-key token-abc123\n",
    "```\n",
    "\n",
    "Inference will be much slower this way, however; another option is to deploy a quantized version of the model."
   ]
  },
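  {
   "cell_type": "markdown",
   "id": "3f1c9a70-1a2b-4c3d-8e4f-aaaaaaaa0002",
   "metadata": {},
   "source": [
    "To judge in advance whether the weights will fit, a back-of-the-envelope estimate of weight memory helps (a sketch; the KV cache and activations add further overhead on top of this):\n",
    "\n",
    "```python\n",
    "# Rough weight-memory estimate; helps decide whether --cpu-offload-gb or\n",
    "# quantization is needed. KV cache and activations are NOT included.\n",
    "BYTES_PER_PARAM = {\"float32\": 4.0, \"half\": 2.0, \"bfloat16\": 2.0, \"int8\": 1.0, \"int4\": 0.5}\n",
    "\n",
    "def weights_gib(params_billion: float, dtype: str) -> float:\n",
    "    return params_billion * 1e9 * BYTES_PER_PARAM[dtype] / 2**30\n",
    "\n",
    "# Qwen3-32B in half precision, split over tensor-parallel-size 4:\n",
    "total = weights_gib(32, \"half\")\n",
    "print(f\"total = {total:.1f} GiB, per GPU = {total / 4:.1f} GiB\")\n",
    "```"
   ]
  },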
  {
   "cell_type": "markdown",
   "id": "378e4d3e-8947-4ab0-aa5f-5022d7b29142",
   "metadata": {},
   "source": [
    "### Creating a YAML Launch File"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "49430836-3a32-4fd8-a2b7-b439f1f7892b",
   "metadata": {},
   "source": [
    "```yaml\n",
    "# vllm_qwen3_4b_service_config.yaml\n",
    "port: 8082\n",
    "max_model_len: 16384\n",
    "tensor_parallel_size: 2  # matches the two GPUs made visible below\n",
    "trust_remote_code: true\n",
    "served_model_name: \"my_qwen3_4b\"\n",
    "dtype: \"half\"\n",
    "```\n",
    "\n",
    "```shell\n",
    "CUDA_VISIBLE_DEVICES=0,1 vllm serve /workspace/models/Qwen/Qwen3-4B --config vllm_qwen3_4b_service_config.yaml\n",
    "```"
   ]
  },
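  {
   "cell_type": "markdown",
   "id": "3f1c9a70-1a2b-4c3d-8e4f-aaaaaaaa0003",
   "metadata": {},
   "source": [
    "The keys in the YAML file mirror the CLI flag names. A quick sketch of that mapping (assumes PyYAML is installed; vLLM performs this mapping itself when given `--config`):\n",
    "\n",
    "```python\n",
    "# Sketch: show how YAML config keys correspond to CLI flags.\n",
    "import yaml\n",
    "\n",
    "config_text = \"\"\"\n",
    "port: 8082\n",
    "max_model_len: 16384\n",
    "tensor_parallel_size: 2\n",
    "trust_remote_code: true\n",
    "served_model_name: my_qwen3_4b\n",
    "dtype: half\n",
    "\"\"\"\n",
    "\n",
    "flags = []\n",
    "for key, value in yaml.safe_load(config_text).items():\n",
    "    flag = \"--\" + str(key).replace(\"_\", \"-\")\n",
    "    flags.append(flag if value is True else f\"{flag} {value}\")\n",
    "print(\"\\n\".join(flags))\n",
    "```"
   ]
  },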
  {
   "cell_type": "markdown",
   "id": "581876bc-06d0-4e6c-8672-378b2c7e0f4a",
   "metadata": {},
   "source": [
    "### Shell Script Wrapper (Automated Management)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "239778fe-000b-4357-a631-be2d37694703",
   "metadata": {},
   "source": [
    "\n",
    "\n",
    "Create a `startvllm_qwen3_4b.sh` script:\n",
    "\n",
    "```shell\n",
    "#!/bin/bash\n",
    "# Constants\n",
    "LOG_PATH=\"/var/log/vllm.log\"\n",
    "PID_FILE=\"/var/run/vllm.pid\"\n",
    "MODEL_PATH=\"/workspace/models/Qwen/Qwen3-4B\"\n",
    "\n",
    "# Clean up any stale processes\n",
    "pkill -f \"vllm serve.*Qwen3-4B\"\n",
    "\n",
    "# Set GPU visibility\n",
    "export CUDA_VISIBLE_DEVICES=0\n",
    "\n",
    "# Start the service\n",
    "nohup vllm serve \"${MODEL_PATH}\" \\\n",
    "  --port 8082 \\\n",
    "  --max-model-len 16384 \\\n",
    "  --tensor-parallel-size 1 \\\n",
    "  --trust-remote-code \\\n",
    "  --served-model-name \"my_qwen3_4b\" \\\n",
    "  --dtype=half \\\n",
    "  --enable-auto-tool-choice \\\n",
    "  --tool-call-parser hermes \\\n",
    "  --reasoning-parser deepseek_r1 \\\n",
    "  --api-key token-abc123 \\\n",
    "  > \"${LOG_PATH}\" 2>&1 &\n",
    "\n",
    "# Record the PID and verify\n",
    "echo $! > \"${PID_FILE}\"\n",
    "sleep 10  # wait for the server process to come up\n",
    "\n",
    "if ! pgrep -af \"vllm serve\" > /dev/null; then\n",
    "  echo \"[ERROR] Service failed to start; check the log: ${LOG_PATH}\"\n",
    "  rm \"${PID_FILE}\"\n",
    "  exit 1\n",
    "fi\n",
    "\n",
    "echo \"Service started, PID: $(cat ${PID_FILE})\"\n",
    "```\n",
    "\n"
   ]
  },
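  {
   "cell_type": "markdown",
   "id": "3f1c9a70-1a2b-4c3d-8e4f-aaaaaaaa0004",
   "metadata": {},
   "source": [
    "After the script launches the server in the background, readiness can be checked by polling the `/health` route (a sketch; assumes the `requests` library is installed):\n",
    "\n",
    "```python\n",
    "# Poll vLLM's /health route until the server answers (sketch; uses requests).\n",
    "import time\n",
    "\n",
    "import requests\n",
    "\n",
    "def health_url(host: str, port: int) -> str:\n",
    "    return f\"http://{host}:{port}/health\"\n",
    "\n",
    "def wait_for_health(url: str, timeout_s: float = 120.0) -> bool:\n",
    "    deadline = time.monotonic() + timeout_s\n",
    "    while time.monotonic() < deadline:\n",
    "        try:\n",
    "            if requests.get(url, timeout=5).status_code == 200:\n",
    "                return True\n",
    "        except requests.ConnectionError:\n",
    "            pass  # server still starting\n",
    "        time.sleep(2)\n",
    "    return False\n",
    "\n",
    "print(wait_for_health(health_url(\"localhost\", 8082), timeout_s=5))\n",
    "```"
   ]
  },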
  {
   "cell_type": "markdown",
   "id": "8a483abd-bc62-4b30-8591-852f5c11fa0f",
   "metadata": {},
   "source": [
    "**Start the process**\n",
    "\n",
    "```shell\n",
    "chmod +x startvllm_qwen3_4b.sh  # add execute permission\n",
    "./startvllm_qwen3_4b.sh  # start the service\n",
    "```\n",
    "\n",
    "**Inspect the process**\n",
    "\n",
    "```shell\n",
    "ps aux | grep <command_name>\n",
    "```\n",
    "\n",
    "Lists matching background processes.\n",
    "\n",
    "**Terminate the process**: use `kill PID`, or `kill -9 PID` to force-kill (get the PID via `ps` first)\n",
    "\n",
    "```shell\n",
    "ps aux | grep \"vllm serve\"  # locate the process by keyword\n",
    "\n",
    "kill 12345          # graceful termination (sends SIGTERM)\n",
    "kill -9 12345       # forced termination (sends SIGKILL)\n",
    "```\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "5097abc8-0d1b-4260-9c34-529616ff7bdd",
   "metadata": {},
   "source": [
    "## LLM Deployment with vLLM: Docker"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "3e9eba1b-1694-4e1f-b4fe-28fba818ccb6",
   "metadata": {
    "jp-MarkdownHeadingCollapsed": true
   },
   "source": [
    "\n",
    "### Via docker run\n",
    "\n",
    "```shell\n",
    "docker run -d \\\n",
    "  --name my-qwen3-8b-server \\\n",
    "  --runtime=nvidia \\\n",
    "  --gpus '\"device=2,3\"' \\\n",
    "  -p 9991:8000 \\\n",
    "  --ipc=host \\\n",
    "  -v /opt/workspace/models/Qwen/Qwen3-8B:/models/Qwen3-8B \\\n",
    "  vllm/vllm-openai:latest \\\n",
    "  --model /models/Qwen3-8B \\\n",
    "  --max-model-len 16384 \\\n",
    "  --tensor-parallel-size 2 \\\n",
    "  --trust-remote-code \\\n",
    "  --served-model-name my_qwen3_8b \\\n",
    "  --dtype=half \\\n",
    "  --enable-auto-tool-choice \\\n",
    "  --tool-call-parser hermes \\\n",
    "  --reasoning-parser deepseek_r1 \\\n",
    "  --api-key token-abc123\n",
    "\n",
    "```\n",
    "\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "3493a304-b900-477a-be95-51f736db60ba",
   "metadata": {},
   "source": [
    "### Via docker compose\n",
    "\n",
    "`docker-compose-qwen3-8b.yml`\n",
    "\n",
    "```yaml\n",
    "version: '3.8'\n",
    "services:\n",
    "  qwen3-8b-server:\n",
    "    image: vllm/vllm-openai:latest\n",
    "    container_name: my-qwen3-8b-server\n",
    "    runtime: nvidia  # use the NVIDIA runtime\n",
    "    deploy:\n",
    "      resources:\n",
    "        reservations:\n",
    "          devices:\n",
    "            - driver: nvidia\n",
    "              capabilities: [gpu]\n",
    "              device_ids: [\"2\", \"3\"]  # use GPUs 2 and 3\n",
    "    ports:\n",
    "      - \"9991:8000\"  # host port:container port\n",
    "    ipc: host  # share the host IPC namespace\n",
    "    volumes:\n",
    "      - /opt/workspace/models/Qwen/Qwen3-8B:/models/Qwen3-8B  # mount the model directory\n",
    "    command: [\n",
    "      \"--model\", \"/models/Qwen3-8B\",\n",
    "      \"--max-model-len\", \"16384\",\n",
    "      \"--tensor-parallel-size\", \"2\",\n",
    "      \"--trust-remote-code\",\n",
    "      \"--served-model-name\", \"my_qwen3_8b\",\n",
    "      \"--dtype\", \"half\",\n",
    "      \"--enable-auto-tool-choice\",\n",
    "      \"--tool-call-parser\", \"hermes\",\n",
    "      \"--reasoning-parser\", \"deepseek_r1\",\n",
    "      \"--api-key\", \"token-abc123\"\n",
    "    ]\n",
    "\n",
    "```\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "9cd14c92-1ba1-40ab-9809-a90bca2d42b2",
   "metadata": {},
   "source": [
    "Start: \n",
    "```shell\n",
    "docker compose -f docker-compose-qwen3-8b.yml up -d\n",
    "```\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "54c5b8ef-b104-41f6-8850-d794702c90d4",
   "metadata": {},
   "source": [
    "\n",
    "Follow the startup logs: \n",
    "```shell\n",
    "docker compose -f docker-compose-qwen3-8b.yml logs -f\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "94e6b141-767d-40bc-8e5c-5de48c1d739a",
   "metadata": {},
   "source": [
    "## Serving Apps and APIs"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "7303fd37-e0de-42cc-b4a0-d6f20c6672a4",
   "metadata": {},
   "source": [
    "### Chat Service"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "b918e00c-ca4b-4de6-a1b2-b587851e9827",
   "metadata": {},
   "source": [
    "Install open-webui: https://github.com/open-webui/open-webui\n",
    "\n",
    "Usage guide: https://docs.openwebui.com.cn/"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "1879194d-04a0-4e76-a435-00aa2f65d306",
   "metadata": {},
   "source": [
    "Deploy the WebUI service from a terminal:\n",
    "\n",
    "```shell\n",
    "docker run -d -p 9990:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main\n",
    "```\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "3ba89d83-6cda-4279-bf8f-1c9a06509ae3",
   "metadata": {},
   "source": [
    "Login URL: http://127.201.70.35:9990/"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "id": "6e2a308c-d5a2-4540-b4ad-06ebb95489e6",
   "metadata": {},
   "source": [
    "![image.png](../assets/openwebui01.png)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "546af7b0-2485-4b08-8417-443183fdc4a6",
   "metadata": {},
   "source": [
    "admin  \n",
    "admin@zte.com.cn  \n",
    "ShiYan@12345  "
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "id": "77539948-a11d-4423-b24a-35f222348ebb",
   "metadata": {},
   "source": [
    "![image.png](../assets/openwebui02.png)"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "id": "1a2c4db2-11fe-4496-9dfa-3c29df923f1b",
   "metadata": {},
   "source": [
    "![image.png](../assets/openwebui03.png)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "bc53578b-d097-4a93-b21e-eb64842d671d",
   "metadata": {},
   "source": [
    "![image.png](../assets/openwebui04.png)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "0fd7f75d-a0c6-471e-aa4b-dcffb213aee6",
   "metadata": {},
   "source": [
    "### API Service"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "e7e5c89f-c97a-40bd-b1b0-25a6b36f5076",
   "metadata": {},
   "source": [
    "#### Curl"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "58d475e8-5898-4dd1-a7e9-76a9689fd9b1",
   "metadata": {},
   "source": [
    "```shell\n",
    "curl http://localhost:8082/v1/chat/completions \\\n",
    "  -H \"Content-Type: application/json\" \\\n",
    "  -H \"Authorization: Bearer token-abc123\" \\\n",
    "  -d '{\n",
    "    \"model\": \"my_qwen3_4b\",\n",
    "    \"messages\": [\n",
    "      {\"role\": \"system\", \"content\": \"你是一个科学家助手\"},\n",
    "      {\"role\": \"user\", \"content\": \"量子纠缠现象如何解释？\"}\n",
    "    ],\n",
    "    \"max_tokens\": 512,\n",
    "    \"stream\": false\n",
    "  }'\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f38564a3-60ad-47a2-9e27-579a0a7323a0",
   "metadata": {},
   "source": [
    "#### openai"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "id": "367b5682-fb16-413b-a335-6ca0ab5f09c5",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "'\\n\\n我是通义千问，阿里巴巴集团旗下的通义实验室研发的大型语言模型。我能够帮助您解答各种问题、创作文字、进行编程等。如果您有任何问题或需要帮助，请随时告诉我！'"
      ]
     },
     "execution_count": 7,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from openai import OpenAI\n",
    "client = OpenAI(\n",
    "    base_url=\"http://localhost:8082/v1\",\n",
    "    api_key=\"token-abc123\",\n",
    ")\n",
    "prompt = '你是谁？/no_think'\n",
    "messages = [{\"role\":\"user\", \"content\":prompt}]\n",
    "response = client.chat.completions.create(\n",
    "    model = 'my_qwen3_4b',\n",
    "    messages = messages,\n",
    "    temperature=0.95\n",
    ")\n",
    "\n",
    "response.choices[0].message.content"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "id": "dceb1163-80b1-46fa-98d9-59f2fed9a2b3",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Chat response: ChatCompletion(id='chatcmpl-8ee5736bd0d3413099ae010c61d595d5', choices=[Choice(finish_reason='stop', index=0, logprobs=None, message=ChatCompletionMessage(content='\\n\\n量子纠缠是量子力学中最令人着迷的现象之一，它挑战了我们对现实的直观理解。以下是逐步解释：\\n\\n---\\n\\n### 1. **基本定义**\\n量子纠缠是指两个或多个粒子在相互作用后，它们的量子态**无法单独描述**，而是形成一个整体的量子态（称为**纠缠态**）。即使这些粒子被分隔到宇宙两端，它们的状态仍会**瞬间关联**。\\n\\n---\\n\\n### 2. **经典物理与量子的对比**\\n- **经典物理**：两个物体的状态是独立的。比如两枚骰子，分别掷出的结果互不干扰。\\n- **量子纠缠**：两个粒子的量子态**纠缠在一起**，无论相距多远，测量其中一个的态会瞬间决定另一个的态。\\n\\n---\\n\\n### 3. **爱因斯坦的质疑**\\n爱因斯坦曾称这种现象为“**幽灵般的超距作用**”，认为它违反了局部实在性（即物体的状态只受本地因素影响）。他提出**隐变量理论**试图解释，但后来的实验（如贝尔不等式实验）证明量子力学的预言是正确的。\\n\\n---\\n\\n### 4. **数学描述（简化）**\\n假设两个粒子A和B处于以下纠缠态：\\n$$\\n|\\\\Psi\\\\rangle = \\\\frac{1}{\\\\sqrt{2}}(|0\\\\rangle_A |1\\\\rangle_B + |1\\\\rangle_A |0\\\\rangle_B)\\n$$\\n- **测量A**：若A被测到是$ |0\\\\rangle $，则B必为$ |1\\\\rangle $；反之亦然。\\n- **测量B**：若B被测到是$ |0\\\\rangle $，则A必为$ |1\\\\rangle $；反之亦然。\\n\\n**关键点**：纠缠态的**叠加性**和**非局域性**，即测量结果不依赖于粒子的物理距离。\\n\\n---\\n\\n### 5. **非定域性与信息传递**\\n- **非定域性**：纠缠态的关联性不依赖于空间距离，这与经典物理的局域性（局部因果性）矛盾。\\n- **不传递信息**：虽然测量结果有关联，但无法通过纠缠传递信息（因为结果是随机的，无法被操控）。\\n\\n---\\n\\n### 6. **实验验证**\\n- **贝尔实验**：通过测量纠缠粒子的关联性，验证量子力学的非定域性。\\n- **量子隐形传态**：利用纠缠实现量子态的传输，但无需经典信道传递信息。\\n\\n---\\n\\n### 7. **实际意义与应用**\\n- **量子计算**：纠缠态是量子比特（qubit）的核心资源，用于实现并行计算。\\n- **量子通信**：量子密钥分发（QKD）利用纠缠态保证通信安全。\\n- **量子力学基础研究**：揭示自然界的深层规律，如量子纠缠与时空结构的联系。\\n\\n---\\n\\n### 8. 
**哲学意义**\\n量子纠缠迫使我们重新思考：\\n- **现实的本质**：是否在测量前，粒子的状态是“未决定”的？\\n- **因果性**：是否存在超越时空的联系？\\n- **观测者的作用**：测量行为如何影响量子态？\\n\\n---\\n\\n### 总结\\n量子纠缠是量子力学中**非定域性**和**叠加性**的直接体现，它既挑战了经典物理的直觉，又为现代科技提供了革命性的工具。尽管其机制仍需更深入的探索，但它已成为理解量子世界的关键窗口。', refusal=None, role='assistant', annotations=None, audio=None, function_call=None, tool_calls=[], reasoning_content='\\n嗯，用户问的是量子纠缠现象如何解释。首先，我需要确定用户对量子力学的基础知识了解程度。可能他们刚接触这个概念，或者对量子力学有一定了解但想深入理解。我应该从基本概念开始，逐步解释量子纠缠的原理，同时避免使用过于专业的术语，或者如果必须使用的话，要加以解释。\\n\\n用户可能对量子纠缠感到困惑，因为它和经典物理中的概念有很大不同。我需要先解释量子纠缠的基本定义，比如两个或多个粒子在相互作用后，即使相隔很远，其状态也会相互关联。然后要提到爱因斯坦的“幽灵般的超距作用”这个说法，说明他对此的质疑，但后来被实验验证，比如贝尔不等式实验。\\n\\n接下来，可能需要解释量子纠缠的数学描述，比如波函数的叠加态，以及纠缠态如何用数学公式表示。不过用户可能不需要太深入的数学公式，所以应该用简单的例子，比如光子对或电子对，说明它们的状态如何相互关联。\\n\\n还要考虑用户可能的误解，比如认为纠缠意味着信息传递超光速，但实际上纠缠本身不传递信息，所以需要澄清这一点。可能用户担心量子纠缠是否违反相对论，所以需要解释纠缠不涉及信息传递，只是状态的关联。\\n\\n另外，可能需要提到量子纠缠在现代科技中的应用，比如量子计算、量子通信，这样用户能理解它的实际意义。不过用户的问题主要是解释现象，所以这部分可能作为补充。\\n\\n最后，要确保整个解释逻辑清晰，从现象到原理，再到实验验证和应用，让用户有一个全面的理解。同时，语言要通俗易懂，避免过于抽象的术语，必要时使用类比，比如将纠缠比作一对骰子总是显示相同数字，但具体结果在测量前是不确定的。\\n\\n需要检查是否有遗漏的重要点，比如量子纠缠的非定域性，或者与量子力学其他原理如叠加态、观测的影响之间的关系。可能还要提到薛定谔猫这样的思想实验，但可能不需要太深入，除非用户有进一步的问题。\\n\\n总结下来，回答的结构应该是：定义量子纠缠，解释其特性，对比经典物理，提到爱因斯坦的观点和实验验证，澄清信息传递的问题，最后可能提到应用和意义。这样用户能逐步理解这个复杂的现象。\\n'), stop_reason=None, token_ids=None)], created=1758679150, model='my_qwen3_4b', object='chat.completion', service_tier=None, system_fingerprint=None, usage=CompletionUsage(completion_tokens=1232, prompt_tokens=23, total_tokens=1255, completion_tokens_details=None, prompt_tokens_details=None), prompt_logprobs=None, prompt_token_ids=None, kv_transfer_params=None)\n"
     ]
    }
   ],
   "source": [
    "from openai import OpenAI\n",
    "\n",
    "client = OpenAI(\n",
    "    api_key=\"token-abc123\",\n",
    "    base_url=\"http://localhost:8082/v1\",\n",
    ")\n",
    "\n",
    "\n",
    "chat_response = client.chat.completions.create(\n",
    "    model=\"my_qwen3_4b\",\n",
    "    messages=[\n",
    "        {\"role\": \"system\", \"content\": \"你是一个科学家助手\"},\n",
    "        {\"role\": \"user\", \"content\": \"量子纠缠现象如何解释？\"},\n",
    "    ],\n",
    "    temperature=0.95\n",
    ")\n",
    "print(\"Chat response:\", chat_response)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "323f4f49-8122-4242-a778-5ca82b6612f4",
   "metadata": {},
   "source": [
    "#### langchain"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "6e589061-adba-42a6-8221-b9b68106d72a",
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain_community.llms import VLLMOpenAI  # note the class name: VLLMOpenAI\n",
    "llm = VLLMOpenAI(\n",
    "    openai_api_key=\"token-abc123\",          # must match the --api-key used at launch\n",
    "    openai_api_base=\"http://localhost:8082/v1\",  # server address\n",
    "    model_name=\"my_qwen3_4b\",  # must match --served-model-name\n",
    "    max_tokens=1024,                # maximum generated length\n",
    "    temperature=0.45,               # sampling temperature (0~1)\n",
    "    top_p=0.9,                      # nucleus sampling threshold\n",
    "    streaming=True                  # streaming output (optional)\n",
    ")\n",
    "response = llm.invoke(\"你是谁？\")\n",
    "print(response)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "66422cc7-d899-4936-bd4e-6aea81015150",
   "metadata": {},
   "source": [
    "## Embedding Models"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "08f5c8a7-40ef-42f8-be61-9c8748ec7fee",
   "metadata": {},
   "source": [
    "### Starting the Service with vllm serve\n",
    "\n",
    "\n",
    "```shell\n",
    "CUDA_VISIBLE_DEVICES=2 \\\n",
    "vllm serve /workspace/models/Qwen/Qwen3-Embedding-0.6B \\\n",
    "    --port 8000 \\\n",
    "    --served-model-name Qwen3-Embedding-0.6B \\\n",
    "    --dtype float16 \\\n",
    "    --enforce-eager \\\n",
    "    --max-model-len 2048\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d03262c8-a7ba-4633-8311-88e235456ed7",
   "metadata": {},
   "source": [
    "On a successful start you should see: \n",
    "\n",
    "```bash\n",
    "(APIServer pid=3848256) INFO 11-20 09:47:47 [api_server.py:1634] Supported_tasks: ['embed']\n",
    "(APIServer pid=3848256) INFO 11-20 09:47:47 [api_server.py:1912] Starting vLLM API server 0 on http://0.0.0.0:8000\n",
    "(APIServer pid=3848256) INFO 11-20 09:47:47 [launcher.py:34] Available routes are:\n",
    "(APIServer pid=3848256) INFO 11-20 09:47:47 [launcher.py:42] Route: /openapi.json, Methods: HEAD, GET\n",
    "(APIServer pid=3848256) INFO 11-20 09:47:47 [launcher.py:42] Route: /docs, Methods: HEAD, GET\n",
    "(APIServer pid=3848256) INFO 11-20 09:47:47 [launcher.py:42] Route: /docs/oauth2-redirect, Methods: HEAD, GET\n",
    "(APIServer pid=3848256) INFO 11-20 09:47:47 [launcher.py:42] Route: /redoc, Methods: HEAD, GET\n",
    "(APIServer pid=3848256) INFO 11-20 09:47:47 [launcher.py:42] Route: /health, Methods: GET\n",
    "(APIServer pid=3848256) INFO 11-20 09:47:47 [launcher.py:42] Route: /load, Methods: GET\n",
    "(APIServer pid=3848256) INFO 11-20 09:47:47 [launcher.py:42] Route: /ping, Methods: POST\n",
    "(APIServer pid=3848256) INFO 11-20 09:47:47 [launcher.py:42] Route: /ping, Methods: GET\n",
    "(APIServer pid=3848256) INFO 11-20 09:47:47 [launcher.py:42] Route: /tokenize, Methods: POST\n",
    "(APIServer pid=3848256) INFO 11-20 09:47:47 [launcher.py:42] Route: /detokenize, Methods: POST\n",
    "(APIServer pid=3848256) INFO 11-20 09:47:47 [launcher.py:42] Route: /v1/models, Methods: GET\n",
    "(APIServer pid=3848256) INFO 11-20 09:47:47 [launcher.py:42] Route: /version, Methods: GET\n",
    "(APIServer pid=3848256) INFO 11-20 09:47:47 [launcher.py:42] Route: /v1/responses, Methods: POST\n",
    "(APIServer pid=3848256) INFO 11-20 09:47:47 [launcher.py:42] Route: /v1/responses/{response_id}, Methods: GET\n",
    "(APIServer pid=3848256) INFO 11-20 09:47:47 [launcher.py:42] Route: /v1/responses/{response_id}/cancel, Methods: POST\n",
    "(APIServer pid=3848256) INFO 11-20 09:47:47 [launcher.py:42] Route: /v1/chat/completions, Methods: POST\n",
    "(APIServer pid=3848256) INFO 11-20 09:47:47 [launcher.py:42] Route: /v1/completions, Methods: POST\n",
    "(APIServer pid=3848256) INFO 11-20 09:47:47 [launcher.py:42] Route: /v1/embeddings, Methods: POST\n",
    "(APIServer pid=3848256) INFO 11-20 09:47:47 [launcher.py:42] Route: /pooling, Methods: POST\n",
    "(APIServer pid=3848256) INFO 11-20 09:47:47 [launcher.py:42] Route: /classify, Methods: POST\n",
    "(APIServer pid=3848256) INFO 11-20 09:47:47 [launcher.py:42] Route: /score, Methods: POST\n",
    "(APIServer pid=3848256) INFO 11-20 09:47:47 [launcher.py:42] Route: /v1/score, Methods: POST\n",
    "(APIServer pid=3848256) INFO 11-20 09:47:47 [launcher.py:42] Route: /v1/audio/transcriptions, Methods: POST\n",
    "(APIServer pid=3848256) INFO 11-20 09:47:47 [launcher.py:42] Route: /v1/audio/translations, Methods: POST\n",
    "(APIServer pid=3848256) INFO 11-20 09:47:47 [launcher.py:42] Route: /rerank, Methods: POST\n",
    "(APIServer pid=3848256) INFO 11-20 09:47:47 [launcher.py:42] Route: /v1/rerank, Methods: POST\n",
    "(APIServer pid=3848256) INFO 11-20 09:47:47 [launcher.py:42] Route: /v2/rerank, Methods: POST\n",
    "(APIServer pid=3848256) INFO 11-20 09:47:47 [launcher.py:42] Route: /scale_elastic_ep, Methods: POST\n",
    "(APIServer pid=3848256) INFO 11-20 09:47:47 [launcher.py:42] Route: /is_scaling_elastic_ep, Methods: POST\n",
    "(APIServer pid=3848256) INFO 11-20 09:47:47 [launcher.py:42] Route: /invocations, Methods: POST\n",
    "(APIServer pid=3848256) INFO 11-20 09:47:47 [launcher.py:42] Route: /metrics, Methods: GET\n",
    "(APIServer pid=3848256) INFO:     Started server process [3848256]\n",
    "(APIServer pid=3848256) INFO:     Waiting for application startup.\n",
    "(APIServer pid=3848256) INFO:     Application startup complete.\n",
    "```\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "7ec2f4a3-d0d1-4e9a-9fea-c3ca4c09d6b7",
   "metadata": {},
   "source": [
    "### Testing with the openai Client"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "bcd50520-0910-4f80-82e2-f24407fa38f8",
   "metadata": {
    "scrolled": true
   },
   "outputs": [],
   "source": [
    "from openai import OpenAI\n",
    "\n",
    "client = OpenAI(\n",
    "    api_key=\"EMPTY\",                       # no API key was set at launch\n",
    "    base_url=\"http://localhost:8000/v1\"    # use the host IP if the server is remote\n",
    ")\n",
    "\n",
    "res = client.embeddings.create(\n",
    "    model=\"Qwen3-Embedding-0.6B\",\n",
    "    input=\"hello world\"\n",
    ")\n",
    "\n",
    "print(\"Embedding size:\", len(res.data[0].embedding))\n",
    "print(\"First 10 values:\", res.data[0].embedding[:10])\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "ca5e5b01-6c91-437f-8fb3-345b1f034593",
   "metadata": {},
   "source": [
    "Returned result: \n",
    "\n",
    "```\n",
    "Embedding size: 1024\n",
    "First 10 values: [-0.014682444743812084, 0.017169922590255737, -0.011952542699873447, -0.07264278829097748, 0.00274044182151556, -0.019710101187229156, -0.01535701472312212, 0.013934092596173286, -0.10472703725099564, -0.005006576422601938]\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "638cb435-bc75-4093-8b8f-689f22e988b0",
   "metadata": {},
   "source": [
    "### Testing with LangChain"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b4d9f816-b78c-4090-a137-5bf0bb39ac65",
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain_openai import OpenAIEmbeddings\n",
    "\n",
    "emb = OpenAIEmbeddings(\n",
    "    model=\"Qwen3-Embedding-0.6B\",\n",
    "    openai_api_key=\"EMPTY\",\n",
    "    openai_api_base=\"http://localhost:8000/v1\"\n",
    ")\n",
    "\n",
    "result = emb.embed_query(\"hello world\")\n",
    "print(\"embedding first 10 dims:\", result[:10])\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "6347b9ac-6fff-4f15-b637-fad8d03bcfc8",
   "metadata": {},
   "source": [
    "Returned result: \n",
    "```\n",
    "embedding first 10 dims: [0.02636154182255268, -0.039030708372592926, -0.01021346915513277, -0.16088539361953735, 0.004943951964378357, -0.07430345565080643, 0.05134640261530876, 0.001794101088307798, -0.04022134840488434, 0.11095288395881653]\n",
    "```"
   ]
  },
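  {
   "cell_type": "markdown",
   "id": "3f1c9a70-1a2b-4c3d-8e4f-aaaaaaaa0005",
   "metadata": {},
   "source": [
    "Embedding vectors are usually compared with cosine similarity; a minimal pure-Python check (the vectors here are toy values, not real model output; in practice `a` and `b` would come from `embed_query` or `client.embeddings.create`):\n",
    "\n",
    "```python\n",
    "# Compare two embedding vectors by cosine similarity (pure Python).\n",
    "import math\n",
    "\n",
    "def cosine_similarity(a: list[float], b: list[float]) -> float:\n",
    "    dot = sum(x * y for x, y in zip(a, b))\n",
    "    norm_a = math.sqrt(sum(x * x for x in a))\n",
    "    norm_b = math.sqrt(sum(x * x for x in b))\n",
    "    return dot / (norm_a * norm_b)\n",
    "\n",
    "a = [1.0, 0.0, 1.0]  # toy vectors standing in for real embeddings\n",
    "b = [0.5, 0.5, 0.5]\n",
    "print(round(cosine_similarity(a, b), 4))  # 0.8165\n",
    "```"
   ]
  },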
  {
   "cell_type": "markdown",
   "id": "2aa49a7e-1bc6-48f7-98c4-08d4c0537f4d",
   "metadata": {},
   "source": [
    "## ReRanker Models"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f7c39265-20bd-4801-968d-a2538965875a",
   "metadata": {},
   "source": [
    "### Starting the Service with vllm serve\n",
    "\n",
    "\n",
    "```shell\n",
    "CUDA_VISIBLE_DEVICES=2 \\\n",
    "vllm serve /workspace/models/Qwen/Qwen3-Reranker-0.6B \\\n",
    "    --port 8001 \\\n",
    "    --served-model-name Qwen3-Reranker-0.6B \\\n",
    "    --dtype float16 \\\n",
    "    --enforce-eager\n",
    "```\n",
    "\n",
    "Or, specifying the scoring task explicitly:\n",
    "\n",
    "```shell\n",
    "CUDA_VISIBLE_DEVICES=2 \\\n",
    "vllm serve /workspace/models/Qwen/Qwen3-Reranker-0.6B \\\n",
    "  --task score \\\n",
    "  --served-model-name Qwen3-Reranker-0.6B \\\n",
    "  --dtype float16 \\\n",
    "  --max-model-len 8192 \\\n",
    "  --port 8001\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "9809994a-5788-4fde-b8f8-6491bfa738fa",
   "metadata": {},
   "source": [
    "On a successful start you should see: \n",
    "\n",
    "```bash\n",
    "(APIServer pid=3918621) INFO 11-20 10:13:39 [api_server.py:1912] Starting vLLM API server 0 on http://0.0.0.0:8001\n",
    "(APIServer pid=3918621) INFO 11-20 10:13:39 [launcher.py:34] Available routes are:\n",
    "(APIServer pid=3918621) INFO 11-20 10:13:39 [launcher.py:42] Route: /openapi.json, Methods: HEAD, GET\n",
    "(APIServer pid=3918621) INFO 11-20 10:13:39 [launcher.py:42] Route: /docs, Methods: HEAD, GET\n",
    "(APIServer pid=3918621) INFO 11-20 10:13:39 [launcher.py:42] Route: /docs/oauth2-redirect, Methods: HEAD, GET\n",
    "(APIServer pid=3918621) INFO 11-20 10:13:39 [launcher.py:42] Route: /redoc, Methods: HEAD, GET\n",
    "(APIServer pid=3918621) INFO 11-20 10:13:39 [launcher.py:42] Route: /health, Methods: GET\n",
    "(APIServer pid=3918621) INFO 11-20 10:13:39 [launcher.py:42] Route: /load, Methods: GET\n",
    "(APIServer pid=3918621) INFO 11-20 10:13:39 [launcher.py:42] Route: /ping, Methods: POST\n",
    "(APIServer pid=3918621) INFO 11-20 10:13:39 [launcher.py:42] Route: /ping, Methods: GET\n",
    "(APIServer pid=3918621) INFO 11-20 10:13:39 [launcher.py:42] Route: /tokenize, Methods: POST\n",
    "(APIServer pid=3918621) INFO 11-20 10:13:39 [launcher.py:42] Route: /detokenize, Methods: POST\n",
    "(APIServer pid=3918621) INFO 11-20 10:13:39 [launcher.py:42] Route: /v1/models, Methods: GET\n",
    "(APIServer pid=3918621) INFO 11-20 10:13:39 [launcher.py:42] Route: /version, Methods: GET\n",
    "(APIServer pid=3918621) INFO 11-20 10:13:39 [launcher.py:42] Route: /v1/responses, Methods: POST\n",
    "(APIServer pid=3918621) INFO 11-20 10:13:39 [launcher.py:42] Route: /v1/responses/{response_id}, Methods: GET\n",
    "(APIServer pid=3918621) INFO 11-20 10:13:39 [launcher.py:42] Route: /v1/responses/{response_id}/cancel, Methods: POST\n",
    "(APIServer pid=3918621) INFO 11-20 10:13:39 [launcher.py:42] Route: /v1/chat/completions, Methods: POST\n",
    "(APIServer pid=3918621) INFO 11-20 10:13:39 [launcher.py:42] Route: /v1/completions, Methods: POST\n",
    "(APIServer pid=3918621) INFO 11-20 10:13:39 [launcher.py:42] Route: /v1/embeddings, Methods: POST\n",
    "(APIServer pid=3918621) INFO 11-20 10:13:39 [launcher.py:42] Route: /pooling, Methods: POST\n",
    "(APIServer pid=3918621) INFO 11-20 10:13:39 [launcher.py:42] Route: /classify, Methods: POST\n",
    "(APIServer pid=3918621) INFO 11-20 10:13:39 [launcher.py:42] Route: /score, Methods: POST\n",
    "(APIServer pid=3918621) INFO 11-20 10:13:39 [launcher.py:42] Route: /v1/score, Methods: POST\n",
    "(APIServer pid=3918621) INFO 11-20 10:13:39 [launcher.py:42] Route: /v1/audio/transcriptions, Methods: POST\n",
    "(APIServer pid=3918621) INFO 11-20 10:13:39 [launcher.py:42] Route: /v1/audio/translations, Methods: POST\n",
    "(APIServer pid=3918621) INFO 11-20 10:13:39 [launcher.py:42] Route: /rerank, Methods: POST\n",
    "(APIServer pid=3918621) INFO 11-20 10:13:39 [launcher.py:42] Route: /v1/rerank, Methods: POST\n",
    "(APIServer pid=3918621) INFO 11-20 10:13:39 [launcher.py:42] Route: /v2/rerank, Methods: POST\n",
    "(APIServer pid=3918621) INFO 11-20 10:13:39 [launcher.py:42] Route: /scale_elastic_ep, Methods: POST\n",
    "(APIServer pid=3918621) INFO 11-20 10:13:39 [launcher.py:42] Route: /is_scaling_elastic_ep, Methods: POST\n",
    "(APIServer pid=3918621) INFO 11-20 10:13:39 [launcher.py:42] Route: /invocations, Methods: POST\n",
    "(APIServer pid=3918621) INFO 11-20 10:13:39 [launcher.py:42] Route: /metrics, Methods: GET\n",
    "(APIServer pid=3918621) INFO:     Started server process [3918621]\n",
    "(APIServer pid=3918621) INFO:     Waiting for application startup.\n",
    "(APIServer pid=3918621) INFO:     Application startup complete.\n",
    "\n",
    "```\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "41bbf596-e27c-455a-8986-4f7cabcfb8b1",
   "metadata": {},
   "source": [
    "### Testing with the openai Client"
   ]
  },
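  {
   "cell_type": "markdown",
   "id": "3f1c9a70-1a2b-4c3d-8e4f-aaaaaaaa0006",
   "metadata": {},
   "source": [
    "The rerank endpoints are not wrapped by the openai client, so they can be exercised over plain HTTP instead. A sketch using the `requests` library against the `/rerank` route listed above (the response field names `results`, `index`, and `relevance_score` are assumptions; confirm them on the server's `/docs` page):\n",
    "\n",
    "```python\n",
    "# Sketch: score documents against a query via vLLM's /rerank route.\n",
    "# Response field names are assumptions; check the server's /docs page.\n",
    "import requests\n",
    "\n",
    "def build_rerank_payload(query: str, documents: list[str],\n",
    "                         model: str = \"Qwen3-Reranker-0.6B\") -> dict:\n",
    "    return {\"model\": model, \"query\": query, \"documents\": documents}\n",
    "\n",
    "payload = build_rerank_payload(\n",
    "    \"What is the capital of France?\",\n",
    "    [\n",
    "        \"Paris is the capital of France.\",\n",
    "        \"The Eiffel Tower is in Paris.\",\n",
    "        \"Berlin is the capital of Germany.\",\n",
    "    ],\n",
    ")\n",
    "\n",
    "try:\n",
    "    resp = requests.post(\"http://localhost:8001/rerank\", json=payload, timeout=60)\n",
    "    resp.raise_for_status()\n",
    "    for item in resp.json()[\"results\"]:\n",
    "        print(item[\"index\"], item[\"relevance_score\"])\n",
    "except requests.ConnectionError:\n",
    "    print(\"reranker service is not running on localhost:8001\")\n",
    "```"
   ]
  },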
  {
   "cell_type": "markdown",
   "id": "4ab23352-b19b-4695-af56-b795c07a7eb8",
   "metadata": {},
   "source": [
    "### Testing with LangChain"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.13.9"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
