{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# RTP-LLM Native APIs\n",
    "\n",
    "Apart from the OpenAI-compatible APIs, the RTP-LLM Runtime also provides its native server APIs. We introduce the following endpoints:\n",
    "\n",
    "| method_name | example_request | is_post | is_get | desc |\n",
    "|----------------------------|-----------------------------------------------------------------------------------------------|---------|--------|----------|\n",
    "| `/` | `{\"prompt\": \"Hello\", \"generate_config\": {\"max_new_tokens\": 10, \"top_k\": 1, \"top_p\": 0}}` | ✅ | ❌ | Basic text-generation endpoint (backward-compatible with early versions). |\n",
    "| `/chat/render` | `{\"messages\": [{\"role\": \"user\",\"content\": \"hello?\"}]}` | ✅ | ✅ | Render the chat template into the final prompt that will be sent to the model. |\n",
    "| `/v1/chat/render` | `{\"messages\": [{\"role\": \"user\",\"content\": \"hello?\"}]}` | ✅ | ❌ | v1 path for `/chat/render` (POST only). |\n",
    "| `/tokenizer/encode` | `{\"prompt\": \"hello\"}` | ✅ | ❌ | Encode text into a list of token IDs using the internal tokenizer. |\n",
    "| `/tokenize` |`{\"prompt\": \"hello\"}` | ✅ | ❌ | Lightweight tokenization endpoint that returns an array of tokens. |\n",
    "| `/rtp_llm/worker_status` | `{ \"latest_cache_version\": -1}` | ✅ | ✅ | Detailed status of a worker in the RTP-LLM framework. |\n",
    "| `/worker_status` | `{ \"latest_cache_version\": -1}` | ✅ | ✅ | Query runtime status of the inference worker. |\n",
    "| `/health` | `{}` | ✅ | ✅ | Generic health check; returns whether the service is alive. |\n",
    "| `/status` | `{}` | ✅ | ✅ | Retrieve comprehensive status information for the current service instance. |\n",
    "| `/health_check` | `{\"latest_cache_version\": -1}` | ✅ | ✅ | Deep health check that includes a cache version number. |\n",
    "| `/update` | `{\"peft_info\": {\"lora_info\": {\"lora_0\": \"/lora/llama-lora-test/\"}}}` | ✅ | ❌ | Hot-reload LoRA info into the running service. |\n",
    "| `/v1/models` | | ❌ | ✅ | List currently deployed models (OpenAI-compatible). |\n",
    "| `/set_log_level` | `{ \"log_level\": \"INFO\"}` | ✅ | ❌ | Dynamically adjust the service log level. |\n",
    "| `/update_eplb_config` | `{\"model\": \"EPLB\", \"update_time\":1000}` | ✅ | ❌ | Update the EPLB (Expert Parallel Load Balancer) configuration. |\n",
    "| `/v1/embeddings` | `{\"input\": \"who are u\", \"model\": \"text-embedding-ada-002\"}` | ✅ | ❌ | OpenAI-compatible dense-vector embedding endpoint. |\n",
    "| `/v1/embeddings/dense` | `{\"input\": \"who are u\"}` | ✅ | ❌ | Return **dense** embeddings only. |\n",
    "| `/v1/embeddings/sparse` | `{\"input\": \"who are u\"}` | ✅ | ❌ | Return **sparse** embeddings only (e.g., BM25/TF-IDF). |\n",
    "| `/v1/embeddings/colbert` | `{\"input\":[\"hello, what is your name?\",\"hello\"],\"model\":\"xx\"}` | ✅ | ❌ | Return **ColBERT** late-interaction multi-vector representations for high-accuracy semantic retrieval. |\n",
    "| `/v1/embeddings/similarity`| `{\"left\":[\"hello, what is your name?\"],\"right\":[\"hello\",\"what is your name\"],\"embedding_config\":{\"type\":\"sparse\"},\"model\":\"xx\"}` | ✅ | ❌ | Accept query–doc pairs and return pairwise similarities (cosine/dot) directly, skipping the separate embedding step. |\n",
    "| `/v1/classifier` | `{\"input\":[[\"what is panda?\",\"hi\"],[\"what is panda?\",\"The giant panda (Ailuropoda melanoleuca), sometimes called a panda bear or simply panda, is a bear species endemic to China.\"]],\"model\":\"xx\"}` | ✅ | ❌ | Generic text-classification endpoint supporting tasks such as sentiment or topic classification. |\n",
    "| `/v1/reranker` | `{\"query\":\"what is panda? \",\"documents\":[\"hi\",\"The giant panda (Ailuropoda melanoleuca), sometimes called a panda bear or simply panda, is a bear species endemic to China.\",\"gg\"]}` | ✅ | ❌ | Rerank a list of retrieved documents by relevance and return the reordered results. |\n",
    "\n",
    "We mainly use **requests** to test these APIs in the following examples. You can also use **curl**."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Launch A Server"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import subprocess\n",
    "import requests\n",
    "from rtp_llm.utils.util import wait_sever_done, stop_server\n",
    "\n",
    "port = 8090\n",
    "server_process = subprocess.Popen(\n",
    "        [\"/opt/conda310/bin/python\", \"-m\", \"rtp_llm.start_server\",\n",
    "         \"--checkpoint_path=/mnt/nas1/hf/models--Qwen--Qwen1.5-0.5B-Chat/snapshots/6114e9c18dac0042fa90925f03b046734369472f/\",\n",
    "         \"--model_type=qwen_2\",\n",
    "         f\"--start_port={port}\"\n",
    "         ]\n",
    "    )\n",
    "wait_sever_done(server_process, port)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Generate (text generation model)\n",
    "Generate completions. This is similar to `/v1/completions` in the OpenAI API. Detailed parameters can be found in the [sampling parameters](./sampling_params.md)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "url = f\"http://localhost:{port}\"\n",
    "json_data = {\n",
    "     \"prompt\": \"who are you\",\n",
    "     \"generate_config\": {\"max_new_tokens\": 32, \"temperature\": 0}\n",
    "}\n",
    "\n",
    "response = requests.post(url, json=json_data)\n",
    "print(f\"Output 0: {response.json()}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Chat Render / Tokenizer\n",
    "- `/chat/render`, `/v1/chat/render`: render the chat template into the final prompt\n",
    "- `/tokenizer/encode`, `/tokenize`: tokenize a raw prompt"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "url = f\"http://localhost:{port}/v1/chat/render\"\n",
    "data = {\"messages\": [{\"role\": \"user\",\"content\": \"hello?\"}]}\n",
    "\n",
    "response = requests.post(url, json=data)\n",
    "response_json = response.json()\n",
    "print(f\"Render Result: {response_json}\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "url = f\"http://localhost:{port}/tokenizer/encode\"\n",
    "data = {\"prompt\": \"hello\"}\n",
    "\n",
    "response = requests.post(url, json=data)\n",
    "response_json = response.json()\n",
    "print(f\"Encode Result: {response_json}\")"
   ]
  },
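  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "`/tokenize` appears in the endpoint table above but is not exercised elsewhere in this notebook. The sketch below assumes it accepts the same `{\"prompt\": ...}` payload as `/tokenizer/encode` and returns an array of tokens."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# assumption: /tokenize takes the same payload shape as /tokenizer/encode\n",
    "url = f\"http://localhost:{port}/tokenize\"\n",
    "data = {\"prompt\": \"hello\"}\n",
    "\n",
    "response = requests.post(url, json=data)\n",
    "print(f\"Tokenize Result: {response.json()}\")"
   ]
  },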
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Worker Status\n",
    "- `/rtp_llm/worker_status`, `/worker_status`: return a snapshot of the serving worker, including running tasks, finished tasks, and cache status."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "url = f\"http://localhost:{port}/rtp_llm/worker_status\"\n",
    "data = { \"latest_cache_version\": -1}\n",
    "\n",
    "response = requests.post(url, json=data)\n",
    "response_json = response.json()\n",
    "print(f\"Worker Status: {response_json}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Health Check\n",
    "- `/health`, `/status`: Check the health of the server."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "url = f\"http://localhost:{port}/health\"\n",
    "\n",
    "response = requests.get(url)\n",
    "print(response.text)"
   ]
  },
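  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "`/health_check` (listed in the endpoint table above) performs a deeper check that also carries a cache version number. The sketch below assumes the payload shown in the table."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# assumption: same payload as shown in the endpoint table\n",
    "url = f\"http://localhost:{port}/health_check\"\n",
    "data = {\"latest_cache_version\": -1}\n",
    "\n",
    "response = requests.post(url, json=data)\n",
    "print(response.text)"
   ]
  },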
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Update Lora Info\n",
    "- `/update`: hot-reload the full LoRA info into the running service"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "url = f\"http://localhost:{port}/update\"\n",
    "data = {\"peft_info\": {\"lora_info\": {\"lora_0\": \"/lora/llama-lora-test/\"}}}\n",
    "\n",
    "response = requests.post(url, json=data)\n",
    "response_json = response.json()\n",
    "print(f\"Update Result: {response_json}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Get Model Info\n",
    "- `/v1/models`"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "url = f\"http://localhost:{port}/v1/models\"\n",
    "\n",
    "response = requests.get(url)\n",
    "print(response.text)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Update Log Level\n",
    "- `/set_log_level`"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "url = f\"http://localhost:{port}/set_log_level\"\n",
    "data = { \"log_level\": \"INFO\"}\n",
    "\n",
    "response = requests.post(url, json=data)\n",
    "response_json = response.json()\n",
    "print(f\"Update Result: {response_json}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Update EPLB Config for MoE\n",
    "- `/update_eplb_config`"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "url = f\"http://localhost:{port}/update_eplb_config\"\n",
    "data = {\"model\": \"EPLB\", \"update_time\":1000}\n",
    "\n",
    "response = requests.post(url, json=data)\n",
    "response_json = response.json()\n",
    "print(f\"Update Result: {response_json}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Encode (embedding model)\n",
    "\n",
    "Encode text into embeddings. Note that this API is only available for [embedding models](openai_api_embeddings.html#openai-apis-embedding) and will raise an error for generation models.\n",
    "Therefore, we launch a new server to serve an embedding model."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import subprocess\n",
    "import requests\n",
    "from rtp_llm.utils.util import wait_sever_done, stop_server\n",
    "\n",
    "# stop the previous generation server so the port can be reused\n",
    "stop_server(server_process)\n",
    "\n",
    "port = 8090\n",
    "server_process = subprocess.Popen(\n",
    "        [\"/opt/conda310/bin/python\", \"-m\", \"rtp_llm.start_server\",\n",
    "         \"--checkpoint_path=/mnt/nas1/hf/bge-large-en-v1.5\",\n",
    "         \"--model_type=bert\",\n",
    "         \"--embedding_model=1\",\n",
    "         f\"--start_port={port}\"\n",
    "         ]\n",
    "    )\n",
    "wait_sever_done(server_process, port)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# successful encode for embedding model\n",
    "\n",
    "url = f\"http://localhost:{port}/v1/embeddings\"\n",
    "data = {\"input\": \"who are u\", \"model\": \"bge-large-en-v1.5\"}\n",
    "\n",
    "response = requests.post(url, json=data)\n",
    "response_json = response.json()\n",
    "print(f\"Text embedding: {response_json}\")\n",
    "\n",
    "url = f\"http://localhost:{port}/v1/embeddings/dense\"\n",
    "data = {\"input\": \"who are u\", \"model\": \"bge-large-en-v1.5\"}\n",
    "\n",
    "response = requests.post(url, json=data)\n",
    "response_json = response.json()\n",
    "print(f\"Text embedding: {response_json}\")\n",
    "\n",
    "url = f\"http://localhost:{port}/v1/embeddings/sparse\"\n",
    "data = {\"input\": \"who are u\", \"model\": \"bge-large-en-v1.5\"}\n",
    "\n",
    "response = requests.post(url, json=data)\n",
    "response_json = response.json()\n",
    "print(f\"Text embedding: {response_json}\")\n",
    "\n",
    "url = f\"http://localhost:{port}/v1/embeddings/colbert\"\n",
    "data = {\"input\": \"who are u\", \"model\": \"bge-large-en-v1.5\"}\n",
    "\n",
    "response = requests.post(url, json=data)\n",
    "response_json = response.json()\n",
    "print(f\"Text embedding: {response_json}\")\n",
    "\n",
    "url = f\"http://localhost:{port}/v1/embeddings/similarity\"\n",
    "data = {\n",
    "    \"left\": [\n",
    "        \"hello, what is your name?\"\n",
    "    ],\n",
    "    \"right\": [\n",
    "        \"hello\",\n",
    "        \"what is your name\"\n",
    "    ],\n",
    "    \"model\": \"xx\"\n",
    "}\n",
    "\n",
    "response = requests.post(url, json=data)\n",
    "response_json = response.json()\n",
    "print(f\"Text embedding: {response_json}\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "stop_server(server_process)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## v1/rerank (cross-encoder rerank model)\n",
    "Rerank a list of documents against a query using a cross-encoder model. Note that this API is only available for cross-encoder models such as [BAAI/bge-reranker-v2-m3](https://huggingface.co/BAAI/bge-reranker-v2-m3), with the `attention-backend` set to `triton` or `torch_native`.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import subprocess\n",
    "import requests\n",
    "from rtp_llm.utils.util import wait_sever_done, stop_server\n",
    "\n",
    "port = 8090\n",
    "server_process = subprocess.Popen(\n",
    "        [\"/opt/conda310/bin/python\", \"-m\", \"rtp_llm.start_server\",\n",
    "         \"--checkpoint_path=/mnt/nas1/hf/bge-large-en-v1.5\",\n",
    "         \"--model_type=bert\",\n",
    "         \"--embedding_model=1\",\n",
    "         f\"--start_port={port}\"\n",
    "         ]\n",
    "    )\n",
    "wait_sever_done(server_process, port)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# compute rerank scores for query and documents\n",
    "\n",
    "url = f\"http://localhost:{port}/v1/rerank\"\n",
    "data = {\n",
    "    \"query\": \"what is panda? \",\n",
    "    \"documents\": [\n",
    "        \"hi\",\n",
    "        \"The giant panda (Ailuropoda melanoleuca), sometimes called a panda bear or simply panda, is a bear species endemic to China.\",\n",
    "        \"gg\"\n",
    "    ]\n",
    "}\n",
    "\n",
    "response = requests.post(url, json=data)\n",
    "response_json = response.json()\n",
    "for item in response_json.get('results', []):\n",
    "    print(f\"Score: {item['relevance_score']:.2f} - Document: '{item['document']}'\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "stop_server(server_process)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Classify\n",
    "\n",
    "RTP-LLM Runtime also supports classification models. Here we use a classification model to classify the quality of pairwise generations."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import subprocess\n",
    "import requests\n",
    "from rtp_llm.utils.util import wait_sever_done, stop_server\n",
    "\n",
    "port = 8090\n",
    "server_process = subprocess.Popen(\n",
    "        [\"/opt/conda310/bin/python\", \"-m\", \"rtp_llm.start_server\",\n",
    "         \"--checkpoint_path=/mnt/nas1/hf/models--unitary--toxic-bert\",\n",
    "         \"--model_type=bert\",\n",
    "         \"--embedding_model=1\",\n",
    "         f\"--start_port={port}\"\n",
    "         ]\n",
    "    )\n",
    "wait_sever_done(server_process, port)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "url = f\"http://localhost:{port}/v1/classifier\"\n",
    "data = {\n",
    "    \"input\": [\n",
    "        [\n",
    "            \"what is panda?\",\n",
    "            \"hi\"\n",
    "        ],\n",
    "        [\n",
    "            \"what is panda?\",\n",
    "            \"The giant panda (Ailuropoda melanoleuca), sometimes called a panda bear or simply panda, is a bear species endemic to China.\"\n",
    "        ]\n",
    "    ],\n",
    "    \"model\": \"xx\"\n",
    "}\n",
    "\n",
    "response = requests.post(url, json=data)\n",
    "response_json = response.json()\n",
    "print(f\"Classifier Result: {response_json}\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "stop_server(server_process)"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3.6.8 64-bit",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.6.8"
  },
  "vscode": {
   "interpreter": {
    "hash": "e7370f93d1d0cde622a1f8e1c04877d8463912d04d973331ad4851f04de6915a"
   }
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
