{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "0",
   "metadata": {},
   "source": [
    "# PD Disaggregation\n",
    "\n",
    "## Why and What is PD Disaggregation?\n",
    "\n",
    "Large Language Models (LLMs) split inference into a Prefill phase (context preparation) and a Decode phase (token generation):\n",
    "\n",
    "- **Boost efficiency**: Prefill precomputes attention keys/values (the KV cache) for the input sequence in one batched pass; Decode reuses the cache, reducing per-token attention compute from O(n²) to O(n).\n",
    "- **Optimize memory**: Caching the KV matrices during Prefill avoids redundant recomputation, cutting overhead during long-sequence generation.\n",
    "- **Leverage hardware**: Prefill exploits full-sequence batched parallelism over the known input, while Decode optimizes latency-critical step-by-step generation.\n",
    "- **Scale applications**: Separating the phases allows dynamic resource allocation (e.g., high-throughput Prefill for prompts plus low-latency Decode for streaming output), vital for real-time use cases such as chatbots.\n",
    "\n",
    "PD Disaggregation runs the two phases on separate server instances that transfer the KV cache between them (over RPC/RDMA in rtp_llm), so each role can be provisioned and scaled independently.\n",
    "\n",
    "## Start the servers\n",
    "### Start the prefill server"
   ]
  },
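  {
   "cell_type": "markdown",
   "id": "0a",
   "metadata": {},
   "source": [
    "The split above can be sketched with a toy NumPy attention step (purely illustrative, not rtp_llm internals): prefill builds the KV cache for the whole prompt in one batched matmul, and each decode step appends a single row and reuses the cache.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "0b",
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "rng = np.random.default_rng(0)\n",
    "d = 8                             # head dimension\n",
    "prompt = rng.normal(size=(5, d))  # five prompt-token embeddings\n",
    "Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))\n",
    "\n",
    "# Prefill: one batched matmul over the full prompt builds the KV cache.\n",
    "K_cache, V_cache = prompt @ Wk, prompt @ Wv\n",
    "\n",
    "def decode_step(x, K_cache, V_cache):\n",
    "    \"\"\"One decode step: append this token's K/V, then attend over the cache.\"\"\"\n",
    "    K_cache = np.vstack([K_cache, x @ Wk])\n",
    "    V_cache = np.vstack([V_cache, x @ Wv])\n",
    "    s = (x @ Wq) @ K_cache.T      # O(n) scores against cached keys\n",
    "    attn = np.exp(s - s.max())\n",
    "    attn /= attn.sum()\n",
    "    return attn @ V_cache, K_cache, V_cache\n",
    "\n",
    "x = rng.normal(size=(1, d))       # next-token embedding\n",
    "out, K_cache, V_cache = decode_step(x, K_cache, V_cache)\n",
    "print(out.shape, K_cache.shape)   # (1, 8) (6, 8)\n"
   ]
  },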
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "1",
   "metadata": {},
   "outputs": [],
   "source": [
    "import subprocess\n",
    "\n",
    "from rtp_llm.utils.util import stop_server, wait_sever_done\n",
    "\n",
    "prefill_port = 8090\n",
    "decode_port = 27001\n",
    "\n",
    "# Launch the prefill-role server; it transfers the computed KV cache to\n",
    "# the decode server registered via --remote_rpc_server_ip.\n",
    "server_process = subprocess.Popen(\n",
    "    [\n",
    "        \"/opt/conda310/bin/python\", \"-m\", \"rtp_llm.start_server\",\n",
    "        \"--checkpoint_path=/mnt/nas1/hf/models--Qwen--Qwen1.5-0.5B-Chat/snapshots/6114e9c18dac0042fa90925f03b046734369472f/\",\n",
    "        \"--model_type=qwen_2\",\n",
    "        \"--role_type=PREFILL\",\n",
    "        f\"--start_port={prefill_port}\",\n",
    "        \"--use_local=1\",\n",
    "        f\"--remote_rpc_server_ip=127.0.0.1:{decode_port}\",\n",
    "    ]\n",
    ")\n",
    "\n",
    "# Block until the server reports ready on the given port.\n",
    "wait_sever_done(server_process, prefill_port)\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "2",
   "metadata": {},
   "source": [
    "### Start the decode server\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "3",
   "metadata": {},
   "outputs": [],
   "source": [
    "import subprocess\n",
    "\n",
    "from rtp_llm.utils.util import stop_server, wait_sever_done\n",
    "\n",
    "prefill_port = 8090\n",
    "decode_port = 27001\n",
    "\n",
    "# Launch the decode-role server; it receives the KV cache from the\n",
    "# prefill server registered via --remote_rpc_server_ip.\n",
    "server_process = subprocess.Popen(\n",
    "    [\n",
    "        \"/opt/conda310/bin/python\", \"-m\", \"rtp_llm.start_server\",\n",
    "        \"--checkpoint_path=/mnt/nas1/hf/models--Qwen--Qwen1.5-0.5B-Chat/snapshots/6114e9c18dac0042fa90925f03b046734369472f/\",\n",
    "        \"--model_type=qwen_2\",\n",
    "        \"--role_type=DECODE\",\n",
    "        f\"--start_port={decode_port}\",\n",
    "        \"--use_local=1\",\n",
    "        f\"--remote_rpc_server_ip=127.0.0.1:{prefill_port}\",\n",
    "    ]\n",
    ")\n",
    "\n",
    "# Block until the server reports ready on the given port.\n",
    "wait_sever_done(server_process, decode_port)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "3a",
   "metadata": {},
   "source": [
    "### Send a request\n",
    "\n",
    "Query the service through the OpenAI-compatible endpoint on the prefill server.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "4",
   "metadata": {},
   "outputs": [],
   "source": [
    "import openai\n",
    "\n",
    "prefill_port = 8090\n",
    "\n",
    "# Point an OpenAI-compatible client at the prefill server; the SDK\n",
    "# appends /chat/completions to base_url itself.\n",
    "client = openai.Client(base_url=f\"http://127.0.0.1:{prefill_port}/v1\", api_key=\"None\")\n",
    "\n",
    "response = client.chat.completions.create(\n",
    "    model=\"Qwen1.5-0.5B-Chat\",\n",
    "    messages=[\n",
    "        {\"role\": \"user\", \"content\": \"List 3 countries and their capitals.\"},\n",
    "    ],\n",
    "    temperature=0,\n",
    "    max_tokens=64,\n",
    ")\n",
    "\n",
    "print(f\"Response: {response}\")"
   ]
  },
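  {
   "cell_type": "markdown",
   "id": "4a",
   "metadata": {},
   "source": [
    "### Stop the servers\n",
    "\n",
    "A minimal cleanup sketch using the `stop_server` helper imported in the launch cells; it assumes the helper accepts the `Popen` handle. Note that both launch cells bind their handle to `server_process`, so only the most recently started (decode) server is reachable here; keep separate handle names if you want to stop both.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "4b",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Shut down the most recently launched server (see the note above).\n",
    "stop_server(server_process)\n"
   ]
  },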
  {
   "cell_type": "markdown",
   "id": "5",
   "metadata": {},
   "source": [
    "### Advanced Configuration\n",
    "\n",
    "PD Disaggregation supports the following environment variables.\n",
    "\n",
    "#### Server Configuration\n",
    "| Variable | Description | Default |\n",
    "|:--------:|:-----------:|:--------:|\n",
    "| **PREFILL_RETRY_TIMES** | Number of prefill retries (`0` disables retries) | `0` |\n",
    "| **PREFILL_RETRY_TIMEOUT_MS** | Total timeout across prefill retries (milliseconds) | `0` |\n",
    "| **PREFILL_MAX_WAIT_TIMEOUT_MS** | Maximum wait timeout for prefill execution (milliseconds) | `600000` |\n",
    "| **LOAD_CACHE_TIMEOUT_MS** | Timeout for remote KVCache loading (milliseconds) | `5000` |\n",
    "| **DECODE_RETRY_TIMES** | Number of decode retries (`0` disables retries) | `100` |\n",
    "| **DECODE_RETRY_TIMEOUT_MS** | Total timeout across decode retries (milliseconds) | `100` |\n",
    "| **DECODE_RETRY_INTERVAL_MS** | Interval between decode retries (milliseconds) | `1` |\n",
    "| **RDMA_CONNECT_RETRY_TIMES** | Number of retries for RDMA connection establishment | `5000` |\n",
    "| **DECODE_POLLING_KV_CACHE_STEP_MS** | Interval for polling KV cache loading status (milliseconds) | `30` |\n",
    "| **DECODE_ENTRANCE** | Whether the decode server serves as the traffic entry point | `false` |"
   ]
  }
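  ,
  {
   "cell_type": "markdown",
   "id": "5a",
   "metadata": {},
   "source": [
    "The table above can be exercised by exporting variables into the server's environment before launch. A minimal sketch (the values are illustrative, not recommendations):\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "5b",
   "metadata": {},
   "outputs": [],
   "source": [
    "import os\n",
    "\n",
    "# Pass PD-disaggregation tuning knobs through the process environment.\n",
    "# Values here are illustrative only; defaults are in the table above.\n",
    "env = os.environ.copy()\n",
    "env.update({\n",
    "    \"PREFILL_RETRY_TIMES\": \"2\",           # retry failed prefills twice\n",
    "    \"PREFILL_RETRY_TIMEOUT_MS\": \"10000\",  # stop retrying after 10 s\n",
    "    \"LOAD_CACHE_TIMEOUT_MS\": \"8000\",      # allow slower remote KVCache loads\n",
    "})\n",
    "\n",
    "# Launch as in the start-prefill cell, additionally passing env=env:\n",
    "# server_process = subprocess.Popen([...], env=env)\n"
   ]
  }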
 ],
 "metadata": {
  "language_info": {
   "name": "python"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
