{
  "cells": [
    {
      "cell_type": "markdown",
      "id": "14ce9447cd7acd65",
      "metadata": {},
      "source": [
        "# 2.8 Deploying models\n",
        "## 🚄 Introduction\n",
        "After fine-tuning and evaluating the model, the development of the Q&A bot is nearly complete. This lesson will explore how to deploy the model on computing resources so it can be accessed as a real application service. We’ll also introduce common cloud-based deployment methods and help you choose the most suitable approach based on your needs.\n",
        "\n",
        "## 🍁 Goals\n",
        "Upon completing this lesson, you will be able to:\n",
        "* Understand how to manually deploy a model\n",
        "* Learn about common cloud-based model deployment methods\n",
        "* Choose the most appropriate way to deploy a model based on your requirements\n"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "740aa389",
      "metadata": {},
      "source": [
        "## 1. Direct model invocation (No deployment required)\n",
        "\n",
        "Model deployment refers to moving a trained AI model from the development environment into production, enabling it to process real-time data and serve actual users—thereby creating practical value.\n",
        "\n",
        "As you reviewed in Sections 2.1 to 2.6, you’ve already invoked models multiple times (such as `qwq-32b` and `qwen-plus`). However, you didn’t deploy these models yourself. Instead, you used pre-deployed models provided by Alibaba Cloud, which are hosted on their servers and accessible via API.\n",
        "\n",
        "There are several advantages to directly invoking fully managed API services like those provided by Alibaba Cloud:\n",
        "* **Direct invocation**: No need for manual deployment—just call the API.\n",
        "* **Pay-as-you-go billing**: You’re charged based on token usage, avoiding upfront costs and idle GPU resource waste.\n",
        "* **No operational overhead**: Tasks like scaling, monitoring, and model version upgrades are handled automatically by the service provider.\n",
        "\n",
        "This approach is ideal for early-stage businesses or small- to medium-scale scenarios, helping reduce initial investment and simplify operations.\n",
        "\n",
        "**Note**: Direct model invocation is typically subject to \"[rate limiting](https://help.aliyun.com/zh/model-studio/rate-limit)\". For example, when using the Model Studio API, there are limits on the number of queries per minute (QPM) and tokens per minute (TPM). Exceeding these limits will cause requests to fail until the rate limit window resets.\n",
        "\n",
        "Additionally, if your use case requires a custom fine-tuned model that isn't supported by the provider's API, direct invocation may not meet your needs."
      ]
    },
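    {
      "cell_type": "markdown",
      "id": "a1f42b9c",
      "metadata": {},
      "source": [
        "Because of these rate limits, production clients usually retry throttled requests with exponential backoff. The following is a minimal sketch: `RateLimitError` here is a hypothetical stand-in for whatever HTTP 429 exception your SDK actually raises, not a real DashScope class.\n",
        "\n",
        "```python\n",
        "import random\n",
        "import time\n",
        "\n",
        "class RateLimitError(Exception):\n",
        "    '''Hypothetical stand-in for an SDK's HTTP 429 (rate limit) error.'''\n",
        "\n",
        "def with_backoff(call, max_retries=5, base_delay=1.0):\n",
        "    '''Retry call() with exponential backoff while it raises RateLimitError.'''\n",
        "    for attempt in range(max_retries):\n",
        "        try:\n",
        "            return call()\n",
        "        except RateLimitError:\n",
        "            if attempt == max_retries - 1:\n",
        "                raise\n",
        "            # Wait 1s, 2s, 4s, ... plus a little jitter so retries spread out\n",
        "            time.sleep(base_delay * 2 ** attempt + random.uniform(0, 0.1))\n",
        "```\n",
        "\n",
        "You would wrap each API call, e.g. `with_backoff(lambda: client.chat.completions.create(...))`, so transient QPM/TPM rejections recover automatically."
      ]
    },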
    {
      "cell_type": "markdown",
      "id": "e4a0e0bc",
      "metadata": {},
      "source": [
        "## 2. Deploying the model in the test environment\n",
        "\n",
        "In Section 2.7, you fine-tuned a small model (Qwen2.5-1.5B-Instruct) to maintain high accuracy while improving inference speed. Next, you'll deploy this fine-tuned model so it can serve requests.\n",
        "\n",
        "Model deployment usually involves:\n",
        "\n",
        "* Downloading the trained model\n",
        "* Writing code to load it\n",
        "* Publishing it as an API-accessible service\n",
        "\n",
        "This process can require significant manual effort. To simplify it, we’ll use vLLM—an open-source framework designed specifically for efficient LLM inference.\n",
        "vLLM allows you to deploy models quickly using simple command-line parameters. It improves inference speed and supports high-concurrency requests through advanced memory management and caching techniques.\n",
        "\n",
        "In this section, we’ll use vLLM to:\n",
        "\n",
        "* Load the fine-tuned model\n",
        "* Start a service with an HTTP interface compatible with the OpenAI API\n",
        "\n",
        "Once running, you can test the model’s inference capabilities by calling standard endpoints such as `/v1/chat/completions`."
      ]
    },
    {
      "cell_type": "markdown",
      "id": "bacbeb4b",
      "metadata": {},
      "source": [
        "### 2.1 Environment preparation\n",
        "\n",
        "The experimental environment for this chapter must match the one used in Section 2.7 (Fine-tuning), ensuring that deployment is performed in a GPU-enabled environment.\n",
        "\n",
        "* If you’re following the course sequentially, continue using the PAI-DSW instance launched in Section 2.7.\n",
        "* If studying this chapter independently, set up the environment following the preparation steps outlined in Section 2.7."
      ]
    },
    {
      "cell_type": "markdown",
      "id": "87854404",
      "metadata": {},
      "source": [
        "Open a terminal window in the course directory.\n",
        "\n",
        "Navigate to the course directory `/mnt/workspace/alibabacloud_acp_learning/ACP/p2_Build LLM Q&A System` and open a new terminal window in this location to run the deployment commands.\n",
        "\n",
        "<img src=\"https://img.alicdn.com/imgextra/i1/O1CN01TboZMt1pwFKnS6Gdx_!!6000000005424-2-tps-1460-1470.png\" width=\"800\">\n",
        "\n",
        "You can use the `pwd` command in the terminal window to check the current directory. \n",
        "\n",
        "If needed, switch to the course directory by running the following command:\n",
        "\n",
        "<style>\n",
        "    table {\n",
        "      width: 80%;\n",
        "      margin: 20px; /* Center the table */\n",
        "      border-collapse: collapse; /* Collapse borders for a cleaner look */\n",
        "      font-family: sans-serif; \n",
        "    }\n",
        "\n",
        "    th, td {\n",
        "      padding: 10px;\n",
        "      text-align: left;\n",
        "      border: 1px solid #ddd; /* Light gray border */\n",
        "    }\n",
        "\n",
        "    th {\n",
        "      background-color: #f2f2f2; /* Light gray background for header */\n",
        "      font-weight: bold;\n",
        "    }\n",
        "\n",
        "    tr:nth-child(even) { /* Zebra striping */\n",
        "      background-color: #f9f9f9;\n",
        "    }\n",
        "\n",
        "    tr:hover { /* Highlight row on hover */\n",
        "      background-color: #e0f2ff; /* Light blue */\n",
        "    }\n",
        "</style>\n",
        "<table width=\"90%\">\n",
        "<tbody>\n",
        "<tr>\n",
        "<td>  \n",
        "\n",
        "\n",
        "\n",
        "```bash\n",
        "cd /mnt/workspace/alibabacloud_acp_learning/ACP/p2_Build\\ LLM\\ Q&A\\ System\n",
        "```\n",
        "\n",
        "</td>\n",
        "</tr>\n",
        "</tbody>\n",
        "</table>  \n",
        "\n"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "dbdc61f0",
      "metadata": {},
      "source": [
        "### 2.2 Deploying models with vLLM\n",
        "\n",
        "#### 2.2.1 Deploying open source models\n",
        "\n",
        "It is recommended to download the **Qwen2.5-1.5B-Instruct** model from either the [ModelScope Model Library](https://modelscope.cn/models) or the [HuggingFace Model Library](https://huggingface.co/models) for deployment purposes. In the following steps, we will use ModelScope as an example.\n",
        "\n",
        "First, download the model files to your local machine."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "1abcb172",
      "metadata": {},
      "outputs": [],
      "source": [
        "!mkdir -p ./model/qwen2_5-1_5b-instruct\n",
        "!modelscope download --model qwen/Qwen2.5-1.5B-Instruct --local_dir './model/qwen2_5-1_5b-instruct'"
      ]
    },
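    {
      "cell_type": "markdown",
      "id": "b7d20e55",
      "metadata": {},
      "source": [
        "Before starting the service, you can sanity-check that the download is complete. The sketch below assumes a typical `safetensors` layout (a `config.json`, a tokenizer config, and at least one `.safetensors` weight file); the exact file set varies by model.\n",
        "\n",
        "```python\n",
        "import os\n",
        "\n",
        "REQUIRED = ['config.json', 'tokenizer_config.json']\n",
        "\n",
        "def missing_files(model_dir, required=REQUIRED):\n",
        "    '''List required files that are absent from the model directory.'''\n",
        "    missing = [f for f in required\n",
        "               if not os.path.exists(os.path.join(model_dir, f))]\n",
        "    # vLLM will load weights in safetensors format, so at least one must exist\n",
        "    if not any(n.endswith('.safetensors') for n in os.listdir(model_dir)):\n",
        "        missing.append('*.safetensors')\n",
        "    return missing\n",
        "```\n",
        "\n",
        "An empty list from `missing_files('./model/qwen2_5-1_5b-instruct')` suggests the download finished; otherwise, re-run the `modelscope download` command."
      ]
    },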
    {
      "cell_type": "markdown",
      "id": "24f60239",
      "metadata": {},
      "source": [
        "Once downloaded, the model files will be saved in the `./model/qwen2_5-1_5b-instruct` folder.\n",
        "\n",
        "<img src=\"https://img.alicdn.com/imgextra/i3/O1CN01vTzOrP1n0sUaNfdIO_!!6000000005028-2-tps-710-666.png\" width=\"400\">  \n",
        "\n"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "071bfb11",
      "metadata": {},
      "source": [
        "Next, install the dependencies. Run the following command in the terminal window to install `vllm` (this course uses version 0.6.0). If you encounter version conflicts, you can install another release instead, such as `vllm==0.6.2`:\n",
        "\n",
        "<table width=\"90%\">\n",
        "<tbody>\n",
        "<tr>\n",
        "<td>   \n",
        "\n",
        "```bash\n",
        "pip install vllm==0.6.0\n",
        "```\n",
        "\n",
        "</td>\n",
        "</tr>\n",
        "</tbody>\n",
        "</table>\n",
        "\n",
        "After installing vLLM, execute the **vllm command** in the terminal to start the model service:\n",
        "\n",
        "<table width=\"90%\">\n",
        "<tbody>\n",
        "<tr>\n",
        "<td>  \n",
        "\n",
        "\n",
        "\n",
        "```bash\n",
        "vllm serve \"./model/qwen2_5-1_5b-instruct\" --load-format \"safetensors\" --port 8000\n",
        "```\n",
        "\n",
        "</td>\n",
        "</tr>\n",
        "</tbody>\n",
        "</table>\n",
        "\n",
        "- `vllm serve`: Starts the model service.\n",
        "- `\"./model/qwen2_5-1_5b-instruct\"`: Path to the model to be loaded, which typically contains the model weights, configuration, and version information.\n",
        "- `--load-format \"safetensors\"`: Format used when loading the model weights; here, the safe and efficient `safetensors` format.\n",
        "- `--port 8000`: Port number for the service. If this port is occupied, switch to another available port, such as 8100.\n",
        "\n",
        "After the service starts successfully, the terminal window will display the message **\"Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)\"**.\n",
        "\n",
        "This means the model service is now running and ready to accept inference requests via the specified endpoint.\n",
        "\n",
        "<img src=\"https://img.alicdn.com/imgextra/i2/O1CN01aJBpG11UvEWl0jdOr_!!6000000002579-2-tps-2806-952.png\" width=1000>\n",
        "\n",
        "Please note that closing the terminal window will immediately terminate the model service. Since subsequent tests and performance evaluations depend on this service, do not close the window.\n",
        "\n",
        "> If you want the service to run continuously in the background—even after closing the terminal—you can use the following command.\n",
        "> ```bash\n",
        "> # Run the service in the background, with the service logs stored in vllm.log\n",
        "> nohup vllm serve \"./model/qwen2_5-1_5b-instruct\" --load-format \"safetensors\" --port 8000 > vllm.log 2>&1 &\n",
        "> ```  \n",
        "\n"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "617b7a50",
      "metadata": {},
      "source": [
        "#### 2.2.2 Deploying the fine-tuned model (Optional)\n",
        "\n",
        "The fine-tuned model from Section 2.7 is saved by default in the `output` directory. In this example, we’ll deploy the merged version of the fine-tuned model (where LoRA weights have been fused with the base model).\n",
        "\n",
        "Open a new terminal window and run the following `vllm` command:\n",
        "\n",
        "<table width=\"90%\">\n",
        "<tbody>\n",
        "<tr>\n",
        "<td>  \n",
        "\n",
        "\n",
        "\n",
        "```bash\n",
        "vllm serve \"./output/qwen2_5-1_5b-instruct/v0-202xxxxx-xxxxxx/checkpoint-xxx-merged\" --load-format \"safetensors\" --port 8001\n",
        "```\n",
        "\n",
        "</td>\n",
        "</tr>\n",
        "</tbody>\n",
        "</table>\n",
        "\n",
        "- `\"./output/qwen2_5-1_5b-instruct/v0-202xxxxx-xxxxxx/checkpoint-xxx-merged\"`: Replace this path with the actual location of your merged fine-tuned model.\n",
        "- `--port 8001`: Uses a different port than the one in Section 2.2.1 (which used port 8000) to avoid conflicts.\n",
        "\n",
        "This starts a second inference service specifically for the fine-tuned model."
      ]
    },
    {
      "cell_type": "markdown",
      "id": "c8331432",
      "metadata": {},
      "source": [
        "### 2.3 Testing the service status\n",
        "\n",
        "vLLM supports starting a local server that is compatible with the OpenAI API, meaning it returns responses in the same format as OpenAI’s API.\n",
        "\n",
        "Use `cURL` to send an HTTP request and test whether the Qwen2.5-1.5B-Instruct model service deployed in Section 2.2.1 is running correctly.\n",
        "\n",
        "If you're testing the fine-tuned model service (running on port 8001), make sure to change the port number in the request URL from `8000` to `8001`."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "4b3cc03c",
      "metadata": {},
      "outputs": [],
      "source": [
        "%%bash\n",
        " curl -X POST http://localhost:8000/v1/chat/completions \\\n",
        "     -H \"Content-Type: application/json\" \\\n",
        "     -d '{\n",
        "         \"model\": \"./model/qwen2_5-1_5b-instruct\",\n",
        "         \"messages\": [\n",
        "             {\"role\": \"system\", \"content\": \"You are a helpful assistant.\"},\n",
        "             {\"role\": \"user\", \"content\": \"Please tell me how many gold medals the Chinese team won in total at the 2008 Beijing Olympics?\"}\n",
        "         ]\n",
        "     }'\n"
      ]
    },
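    {
      "cell_type": "markdown",
      "id": "c4e88a21",
      "metadata": {},
      "source": [
        "The same request can also be sent from Python using only the standard library. This is a sketch of the call made by the `cURL` command above; it assumes the service from Section 2.2.1 is listening on port 8000.\n",
        "\n",
        "```python\n",
        "import json\n",
        "import urllib.request\n",
        "\n",
        "BASE_URL = 'http://localhost:8000'  # port passed to vllm serve\n",
        "\n",
        "def build_chat_request(model, question):\n",
        "    '''Assemble the JSON body for POST /v1/chat/completions.'''\n",
        "    return {\n",
        "        'model': model,\n",
        "        'messages': [\n",
        "            {'role': 'system', 'content': 'You are a helpful assistant.'},\n",
        "            {'role': 'user', 'content': question},\n",
        "        ],\n",
        "    }\n",
        "\n",
        "def chat(model, question):\n",
        "    '''POST the request to the local vLLM service and return the reply text.'''\n",
        "    body = json.dumps(build_chat_request(model, question)).encode()\n",
        "    req = urllib.request.Request(\n",
        "        BASE_URL + '/v1/chat/completions',\n",
        "        data=body,\n",
        "        headers={'Content-Type': 'application/json'},\n",
        "    )\n",
        "    with urllib.request.urlopen(req) as resp:\n",
        "        return json.load(resp)['choices'][0]['message']['content']\n",
        "```\n",
        "\n",
        "With the service running, `chat('./model/qwen2_5-1_5b-instruct', 'Hello')` should return the model's reply as a string."
      ]
    },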
    {
      "cell_type": "markdown",
      "id": "6fd9ce42",
      "metadata": {},
      "source": [
        "A successful response from the above interface indicates that the service is running properly.\n",
        "\n",
        "Additionally, the vLLM server exposes the `/v1/models` endpoint, which lets you view the list of deployed models. For more details, refer to the [vLLM-compatible OpenAI API](https://docs.vllm.ai/en/latest/serving/openai_compatible_server.html#api-reference) documentation."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "6414050b",
      "metadata": {},
      "outputs": [],
      "source": [
        "%%bash\n",
        "curl -X GET http://localhost:8000/v1/models"
      ]
    },
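    {
      "cell_type": "markdown",
      "id": "d91f3b07",
      "metadata": {},
      "source": [
        "The response follows the OpenAI list convention, i.e. a JSON object whose `data` field holds one entry per deployed model. A small helper (a sketch assuming that shape) extracts the model IDs, which are also the values you pass in the `model` field of chat requests:\n",
        "\n",
        "```python\n",
        "import json\n",
        "\n",
        "def model_ids(response_text):\n",
        "    '''Return the IDs of all models listed by GET /v1/models.'''\n",
        "    return [m['id'] for m in json.loads(response_text)['data']]\n",
        "```\n"
      ]
    },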
    {
      "cell_type": "markdown",
      "id": "061a3edd",
      "metadata": {},
      "source": [
        "### 2.4 Evaluating service performance\n",
        "\n",
        "To evaluate the performance of the deployed model service, we’ll use wrk, a lightweight HTTP benchmarking tool, to simulate stress testing by sending concurrent requests and generating performance reports.\n",
        "\n",
        "Below, we’ll use a stress test on the `POST /v1/chat/completions` interface as an example to demonstrate key service performance metrics.\n",
        "\n",
        "First, open a new terminal window and install the dependencies for wrk.\n",
        "\n",
        "> Note: Ensure the terminal is in the course directory (as set in Section 2.1). \n",
        "\n",
        "```bash\n",
        "sudo apt update\n",
        "sudo apt install wrk\n",
        "```\n",
        "\n",
        "Next, prepare the request body data required for the POST request. The data is stored in the file `./resources/2_9/post.lua`, and its content is shown below.\n",
        "\n",
        "```lua\n",
        "wrk.method = \"POST\"\n",
        "wrk.headers[\"Content-Type\"] = \"application/json\"\n",
        "wrk.body = [[\n",
        "    {\n",
        "       \"model\": \"./model/qwen2_5-1_5b-instruct\",\n",
        "       \"messages\": [\n",
        "           {\"role\": \"system\", \"content\": \"You are a helpful assistant.\"},\n",
        "           {\"role\": \"user\", \"content\": \"Please tell me how many gold medals the Chinese team won in total at the 2008 Beijing Olympics?\"}\n",
        "       ]\n",
        "   }\n",
        "]]\n",
        "```\n",
        "\n",
        "Then, execute the `wrk` stress test command in the terminal. Set the concurrency level (`-c`) to 1 and 10 respectively, and set the test duration (`-d`) to 10 seconds for both cases. Run the two experiments and observe their results.\n",
        "\n",
        "\n",
        "```bash\n",
        "wrk -t1 -c1 -d10s -s ./resources/2_9/post.lua http://localhost:8000/v1/chat/completions\n",
        "\n",
        "wrk -t1 -c10 -d10s -s ./resources/2_9/post.lua http://localhost:8000/v1/chat/completions\n",
        "```\n",
        "\n",
        "The wrk stress test results are shown below:\n",
        "\n",
        "<img src=\"https://img.alicdn.com/imgextra/i3/O1CN01ybO7TU1X6LJ12FYdV_!!6000000002874-2-tps-1452-322.png\" width=\"500\" height=\"150\">\n",
        "<img src=\"https://img.alicdn.com/imgextra/i2/O1CN01bberC61txr86CpFjU_!!6000000005969-2-tps-1472-362.png\" width=\"500\" height=\"150\">\n",
        "\n",
        "According to the stress test results, as concurrency increased from 1 to 10, the QPS improved by approximately six times (from 3.30 to 20.08), while the average latency increased by about 30% (from 324.61 ms to 426.84 ms). Notably, in the second test, two timeout errors occurred: under higher concurrency, the server load exceeded its processing capacity, and limited model inference performance caused some requests to time out."
      ]
    },
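    {
      "cell_type": "markdown",
      "id": "e5a71c93",
      "metadata": {},
      "source": [
        "The comparison above is simple arithmetic on the two wrk reports, and you can recompute it for your own runs:\n",
        "\n",
        "```python\n",
        "qps_1, qps_10 = 3.30, 20.08     # requests/sec at concurrency 1 and 10\n",
        "lat_1, lat_10 = 324.61, 426.84  # average latency in ms\n",
        "\n",
        "qps_ratio = qps_10 / qps_1                  # throughput scaling factor\n",
        "lat_increase = (lat_10 / lat_1 - 1) * 100   # latency growth in percent\n",
        "\n",
        "print(f'QPS x{qps_ratio:.1f}, latency +{lat_increase:.0f}%')\n",
        "```\n"
      ]
    },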
    {
      "cell_type": "markdown",
      "id": "06b0db6a",
      "metadata": {},
      "source": [
        "## ☁ 3. Deploying models on the cloud\n",
        "\n",
        "The above stress test results show that, due to the limited computing power of the local device, the model service struggles to meet low-latency, high-concurrency inference requirements.\n",
        "\n",
        "The traditional solution is to purchase higher-performance servers and redeploy the model onto them. However, this approach comes with several challenges:\n",
        "\n",
        "* Resource cost: Requires an upfront investment in expensive high-performance hardware.\n",
        "* Operational cost: Ongoing server maintenance—including monitoring, updates, and troubleshooting—demands specialized technical expertise.\n",
        "* Reliability: Service stability depends heavily on both the skill of the operations team and the available budget. With limited resources, it’s difficult to build a highly available and reliable model service.\n",
        "* Low flexibility: Hardware capacity is fixed, making it hard to scale resources up or down based on demand. This can lead to either poor performance during peak loads or wasted resources during low usage.\n",
        "\n",
        "Compared to managing physical servers, using cloud services for model deployment is often a more effective and scalable solution. Cloud platforms offer flexible deployment options tailored to different needs and capabilities.\n",
        "\n",
        "You can choose from a range of Alibaba Cloud services—such as [**Model Studio**](https://help.aliyun.com/zh/model-studio/getting-started/what-is-model-studio), [**Function Compute FC**](https://help.aliyun.com/zh/functioncompute/fc-3-0/product-overview/what-is-function-compute), [**AI Platform PAI-EAS**](https://help.aliyun.com/zh/pai/user-guide/overview-2), [**Elastic GPU Service**](https://help.aliyun.com/zh/egs/what-is-elastic-gpu-service), [**Container Service ACK**](https://help.aliyun.com/zh/ack/product-overview/product-introduction), [**Container Compute Service ACS**](https://help.aliyun.com/zh/cs/product-overview/product-introduction) — to build a model service that is:\n",
        "\n",
        "* Scalable\n",
        "* Capable of handling high concurrency\n",
        "* Low-latency\n",
        "* Easy to manage\n",
        "* Stable and adaptable to changing business demands\n",
        "\n",
        "This enables you to quickly deploy and adjust your AI services in response to real-world usage patterns—without the burden of infrastructure management."
      ]
    },
    {
      "cell_type": "markdown",
      "id": "0e51e026",
      "metadata": {},
      "source": [
        "### 3.1 Deploying models using Model Studio\n",
        "\n",
        "You can use the Alibaba Cloud Model Studio console to quickly deploy models. This approach is simple and user-friendly—no need to master complex deployment procedures. With just a few clicks, you can have your own dedicated model service up and running. Model Studio also supports deployment through a simple [API](https://help.aliyun.com/en/model-studio/developer-reference/model-deployment-quick-start), enabling automation and integration into workflows.\n",
        "\n",
        "The deployment process is as follows:\n",
        "\n",
        "<img src=\"https://img.alicdn.com/imgextra/i3/O1CN01jWg2VE1nOEi0dvZsK_!!6000000005079-2-tps-825-112.png\" width=\"700\">\n",
        "\n",
        "- **Select Model**: Choose either a pre-configured model or a custom model.\n",
        "    - Pre-configured Model: Standard models provided and supported by Alibaba Cloud Model Studio. Select the one that best fits your use case. A list of available models can be viewed when [deploying a new model](https://bailian.console.aliyun.com/?spm=a2c4g.11186623.0.0.63e56cfcXIU4Qj#/efm/model_deploy).\n",
        "    - Custom Model: Models you have fine-tuned or optimized in Alibaba Cloud Model Studio. For details, refer to [Optimization-supported models](https://help.aliyun.com/en/model-studio/model-training-on-console?spm=a2c4g.11186623.0.0.63e56cfcMC90g9#a6da1accf0dun).\n",
        "- **One-click Model Deployment**: The console supports one-click model deployment. You can also deploy models programmatically via API.\n",
        "- **Using Models in the Model Studio Ecosystem**: Once deployed, models can be seamlessly integrated into the Alibaba Cloud Model Studio ecosystem. They can be used directly in the console or accessed via HTTP and DashScope APIs for reuse across applications.\n",
        "\n",
        "For detailed operations, refer to the [Alibaba Cloud Model Studio Model Deployment](https://help.aliyun.com/en/model-studio/user-guide/model-deployment) documentation.\n",
        "\n",
        "While deploying through Model Studio greatly reduces the complexity of model deployment and maintenance, the range of supported models is limited. If your model is not within the supported list, consider the alternative deployment methods described below."
      ]
    },
    {
      "cell_type": "markdown",
      "id": "9f93472d",
      "metadata": {},
      "source": [
        "### 3.2 Deploying models using FC\n",
        "\n",
        "Function Compute (FC) supports a broader range of model types and provides serverless GPU services, eliminating the need to manage underlying infrastructure. It offers automatic scaling within seconds and a pay-as-you-go billing model—ideal for reducing costs when models are used infrequently, especially for short-term, high-resource tasks.\n",
        "\n",
        "However, there are some limitations:\n",
        "\n",
        "* Cold start latency: If no requests arrive for a period, the function may enter a \"cold\" state. When a new request comes in, the instance must restart, leading to longer initial response times.\n",
        "* Increased debugging difficulty: Function-based applications can be harder to monitor and debug, especially in multi-step processing pipelines.\n",
        "\n",
        "In summary, FC is well-suited for:\n",
        "\n",
        "* Lightweight inference tasks\n",
        "* Low-frequency access scenarios\n",
        "* Use cases with less stringent real-time requirements (e.g., offline batch processing, scheduled or event-triggered tasks)\n",
        "\n",
        "If your application requires high real-time performance or advanced monitoring and debugging capabilities for complex inference workflows, consider the more centralized deployment options described next.\n",
        "\n",
        "**Deployment Reference**: \n",
        "\n",
        "* You can [one-click deploy the QwQ-32B inference model](https://help.aliyun.com/zh/functioncompute/fc-3-0/use-cases/two-ways-to-quickly-deploy-qwq-32b-reasoning-model) to experience FC’s deployment capabilities. \n",
        "* For more practical examples, see the [Function Compute 3.0 - Practical Tutorials](https://help.aliyun.com/zh/functioncompute/fc-3-0/use-cases/?spm=a2c4g.11186623.help-menu-2508973.d_3.228e493fj6un1Y&scm=20140722.H_2509019._.OR_help-V_1).\n",
        "\n"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "bfcbf468",
      "metadata": {},
      "source": [
        "### 3.3 Deploying models using PAI-EAS\n",
        "\n",
        "You can deploy models downloaded from open-source communities or trained yourself as online services using PAI-EAS (Elastic Algorithm Service) on Alibaba Cloud PAI.\n",
        "\n",
        "PAI-EAS provides enterprise-grade features such as:\n",
        "\n",
        "* Elastic scaling\n",
        "* Blue-green deployment\n",
        "* Resource group management\n",
        "* Version control\n",
        "* Real-time resource monitoring\n",
        "\n",
        "These help you efficiently manage and operate model services in production.\n",
        "\n",
        "PAI-EAS is particularly suitable for real-time synchronous inference scenarios. To address long initial response times, it includes a model warm-up feature that pre-initializes the model before going live—ensuring the service is ready to respond immediately after deployment.\n",
        "\n",
        "Compared to Function Compute, PAI-EAS typically has higher fixed costs. For low-traffic use cases, it may be less cost-effective than FC. However, you can reduce costs by using Spot Instances. For guidance, see: [PAI-EAS Spot Best Practices](https://help.aliyun.com/zh/pai/use-cases/pai-eas-spot-best-practices).\n",
        "\n",
        "**Deployment Reference**: \n",
        "* Try [Deploy LLM Applications with EAS in 5 Minutes](https://help.aliyun.com/zh/pai/use-cases/use-pai-eas-to-quickly-deploy-tongyi-qianwen?spm=a2c4g.11186623.0.i0#ba6b53303bb66) to quickly set up a general-purpose model and experience PAI-EAS's capabilities.\n",
        "* For custom models, refer to: [How to Mount Custom Models?](https://help.aliyun.com/zh/pai/use-cases/deploy-llm-in-eas#c1d769ba33kh5).  \n",
        "\n"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "9519841b",
      "metadata": {},
      "source": [
        "### 3.4 Deploying models using Elastic Computing Service or Container Services\n",
        "\n",
        "Deploying models via Elastic Compute Service (ECS) is a widely adopted approach, offering full control over server configurations, operating systems, and software environments. This is ideal for models requiring deep customization or specific dependencies.\n",
        "\n",
        "ECS provides stable computing resources and avoids cold-start delays common in serverless platforms. It can be combined with:\n",
        "\n",
        "* Auto Scaling (ESS) for automatic scaling\n",
        "* Server Load Balancer (SLB) for high availability and traffic distribution\n",
        "* Security groups, access control, and data encryption for enhanced security\n",
        "\n",
        "However, managing these components requires technical expertise, leading to higher operational and maintenance overhead.\n",
        "\n",
        "\n",
        "**Suitable Scenarios:**\n",
        "\n",
        "* LLMs requiring high customization, consistent performance, and long-term operation\n",
        "* Enterprises with strong DevOps teams and a need for predictable costs and full resource control\n",
        "\n",
        "**Unsuitable Scenarios:**\n",
        "\n",
        "* Small projects needing rapid deployment and elasticity\n",
        "* Teams with limited operational resources or sensitivity to complexity\n",
        "\n",
        "**Deployment Reference**: \n",
        "\n",
        "* See [Using vLLM Container Image to Quickly Build a Large Language Model Inference Environment on GPU](https://help.aliyun.com/zh/egs/use-cases/use-a-vllm-container-image-to-run-inference-tasks-on-a-gpu-accelerated-instance) for step-by-step instructions. \n",
        "* For models like Llama, ChatGLM, Baichuan, Qwen, or their fine-tuned versions, we recommend [Install and Use DeepGPU-LLM for Model Inference](https://help.aliyun.com/zh/egs/developer-reference/install-and-use-deepgpu-llm-for-model-inference?spm=a2c4g.11186623.0.i6) to accelerate performance."
      ]
    },
    {
      "cell_type": "markdown",
      "id": "8d4c5f42",
      "metadata": {},
      "source": [
        "If your team already has containerization experience, you can use Container Service for Kubernetes (ACK) with GPU-enabled nodes without learning many new concepts.\n",
        "\n",
        "Alternatively, consider Container Compute Service (ACS), which allows you to run GPU-powered containers directly within a familiar Kubernetes environment—while offloading cluster operations and maintenance.\n",
        "\n",
        "**Deployment Reference**:    \n",
        "- ACK: [Deploy DeepSeek Distillation Model Inference Service Based on ACK](https://help.aliyun.com/zh/ack/cloud-native-ai-suite/use-cases/deploy-deepseek-distillation-model-inference-service-based-on-ack)     \n",
        "- ACS: [Build QwQ-32B Model Inference Service Using ACS GPU Computing Power](https://help.aliyun.com/zh/cs/user-guide/build-qwq-32b-model-inference-service-using-acs-gpu-computing-power)  \n",
        "\n"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "8612efb3",
      "metadata": {},
      "source": [
        "### 3.5 Cloud service solution comparison and decision recommendations\n",
        "\n",
        "When deploying models on Alibaba Cloud, selecting the right service requires balancing multiple factors, including:\n",
        "\n",
        "* Business requirements\n",
        "* Model characteristics\n",
        "* Team technical capability\n",
        "* Operational complexity\n",
        "* Cost efficiency\n",
        "\n",
        "Below is a comparative analysis of common cloud deployment options to help guide your decision:\n",
        "\n",
        "| Service Name | Features | Applicable Scenarios |\n",
        "| --- | --- | --- |\n",
        "| Model Studio | A dedicated platform for LLMs, providing one-click deployment, model optimization, API call management, and encapsulating underlying complexities. | Rapid deployment of LLMs (such as the Qwen series), without the need to focus on infrastructure. |\n",
        "| Function Compute (FC) | Serverless architecture, billed by request volume, with automatic scaling and no operations required. | Suitable for lightweight inference tasks and low-frequency access scenarios (such as scheduled tasks and event triggers). |\n",
        "| PAI-EAS | An online model serving platform that supports custom model deployment, elastic scaling, monitoring, and other capabilities. | Medium and small deep learning models (such as image classification and NLP), requiring elastic scaling and fine-grained resource management. |\n",
        "| Elastic GPU Service | IaaS-level resources, flexible installation of any framework and dependencies, with manual operations and maintenance required. | Custom model training/inference, requiring full control over the environment (such as complex dependencies andspecial hardware needs). |\n",
        "| Container Service ACK/Container Compute Service ACS | Kubernetes cluster deployment, integrating CI/CD, automatic scaling, load balancing. | Complex microservice architectures, mixed workloads, large-scale distributed inference or training. |\n",
        "\n",
        "Model Deployment Service Selection Recommendations:\n",
        "1. What are your core requirements?\n",
        "    * Rapid deployment of LLMs → Use Model Studio\n",
        "        * Ideal for conversational bots, generative AI, and quick prototyping.\n",
        "    * Low-cost lightweight services / low-frequency non-real-time tasks → Use Function Compute (FC) \n",
        "        * Suitable for small tools handling hundreds of queries per day.\n",
        "    * Conventional model deployment (image, text, NLP) → Use PAI-EAS \n",
        "        * Offers a good balance between performance and ease of use.\n",
        "    * Custom environments or complex dependencies → Use Elastic GPU Service or ACK\n",
        "        * Best for advanced customization and control.\n",
        "2. Is the service compatible with your model?\n",
        "    *  Tongyi series models: Prioritize Model Studio for native support and optimization.\n",
        "    * General-purpose models: Supported across multiple platforms:\n",
        "        * Function Compute FC\n",
        "        * PAI-EAS\n",
        "        * Elastic GPU Service (supports TensorFlow, PyTorch, ONNX ecosystems)\n",
        "        * Containerized deployment via ACK/ACS\n",
        "3. Operations Complexity and Team Technical Capabilities?\n",
        "    * No operations needed / non-technical teams → Model Studio Visual interface, minimal setup, no DevOps required.\n",
        "    * Low operational complexity :\n",
        "        * Algorithm engineers → PAI-EAS\n",
        "        * Development teams → Function Compute FC\n",
        "    * High operational complexity :\n",
        "        * Mature DevOps teams → ACK (requires managing pipelines and clusters)\n",
        "        * Or Elastic GPU Service (manual environment management)\n",
        "4. Cost Control\n",
        "    * Low-cost, lightweight scenarios → FC\n",
        "        * Billed by request count and resource usage—no cost when idle.\n",
        "    * Moderate cost, stable traffic → PAI-EAS\n",
        "        * Billed by instance type and duration. Can be optimized using auto-scaling.\n",
        "    * Higher cost but flexible → Elastic GPU Service\n",
        "    * Pay-as-you-go or subscription-based. Requires manual optimization of resource utilization.\n",
        "    * Comprehensive cost (infrastructure + management) → ACK\n",
        "        * Includes cluster management fees and scheduling complexity—justified for large-scale deployments."
      ]
    },
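    {
      "cell_type": "markdown",
      "id": "c3d9e5f1",
      "metadata": {},
      "source": [
        "The selection guidance above can be condensed into a small helper function. This is purely illustrative: the requirement labels and return values are informal names chosen for this sketch, not an official Alibaba Cloud API.\n",
        "\n",
        "```python\n",
        "# Illustrative sketch of the service-selection guidance above.\n",
        "# The 'need' labels are informal tags invented for this example.\n",
        "def recommend_service(need, traffic='steady', devops=False):\n",
        "    '''Map a coarse requirement profile to a deployment option.'''\n",
        "    if need == 'llm_quickstart':            # rapid LLM deployment, no infra focus\n",
        "        return 'Model Studio'\n",
        "    if need == 'lightweight' or traffic == 'sporadic':\n",
        "        return 'Function Compute (FC)'      # pay per request, scales to zero\n",
        "    if need == 'custom_env':                # full control over drivers and dependencies\n",
        "        return 'Elastic GPU Service'\n",
        "    if need == 'microservices' and devops:  # Kubernetes-based, needs a DevOps team\n",
        "        return 'ACK/ACS'\n",
        "    return 'PAI-EAS'                        # balanced default for conventional models\n",
        "\n",
        "print(recommend_service('llm_quickstart'))  # Model Studio\n",
        "print(recommend_service('nlp_model'))       # PAI-EAS\n",
        "```"
      ]
    },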
    {
      "cell_type": "markdown",
      "id": "594d6313",
      "metadata": {},
      "source": [
        "## ✅Summary \n",
        "\n",
        "In this lesson, you’ve learned the fundamentals of model deployment:\n",
        "\n",
        "* How to deploy a model—whether open-source or fine-tuned—as an accessible inference service through practical steps.\n",
        "* Deployment is not mandatory: You can  call fully managed API services (such as from Alibaba Cloud) to reduce initial investment and avoid wasting idle GPU resources.\n",
        "* How to choose the right cloud service—such as Model Studio, FC, PAI-EAS, ECS, ACK, or ACS—based on your business needs, team capabilities, and cost considerations, achieving an optimal balance between performance and efficiency.\n",
        "\n",
        "By mastering these deployment methods, you now have a solid foundation for building high-performance, scalable LLM applications.\n",
        "\n",
        "Next, you’ll learn how to ensure availability, security, and performance of models in real-world production environments.\n",
        "\n",
        ">⚠️ **Note**: After completing this lesson, please stop your current PAI-DSW GPU instance to avoid unnecessary charges. \n",
        "\n"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "8b3f1e88d7d95ac9",
      "metadata": {},
      "source": [
        "## Further reading\n",
        "\n",
        "The course has focused on cloud deployment, which in practice can be divided into:\n",
        "\n",
        "**Public Cloud Deployment**\n",
        "\n",
        "* The model is encapsulated as an API and hosted on a public cloud platform (similar to SaaS).\n",
        "* Lowers the barrier to entry and simplifies integration.\n",
        "* Requires attention to API stability, rate limiting, and security (such as authentication and data encryption).\n",
        "\n",
        "**Private Cloud Deployment**\n",
        "\n",
        "* Deploy the model within an enterprise’s private cloud infrastructure.\n",
        "* Offers higher data security, compliance control, and customization options.\n",
        "* Involves higher maintenance costs and requires dedicated IT resources.\n",
        "\n",
        "**Edge-Cloud Collaborative Deployment**\n",
        "\n",
        "Combines the strengths of both edge and cloud computing:\n",
        "* Simple or latency-sensitive tasks are processed on edge devices (such as mobile phones and IoT devices).\n",
        "* Complex computations are offloaded to the cloud.\n",
        "\n",
        "This approach enables fast response times while leveraging powerful cloud resources for heavy lifting.\n",
        "\n",
        "Use Case Example: Rakuten’s collaboration with Tongyi LLM to build an end-side companion intelligent voice bot.\n",
        "\n",
        "* The \"end\" refers to a small, fine-tuned model running locally on the client device, responsible for basic tasks like wake-word detection and input preprocessing.\n",
        "* The \"cloud\" hosts the full LLM, which performs deep reasoning and generates responses.\n",
        "* Preprocessed data is sent to the cloud, and results are returned quickly—balancing speed, efficiency, and intelligence.\n",
        "\n",
        "<img src=\"https://img.alicdn.com/imgextra/i4/O1CN01U6Jkr71Xl6dLdLVJI_!!6000000002963-2-tps-1112-237.png\" width=\"1000\">\n",
        "\n",
        "**Embedded System Deployment**\n",
        "\n",
        "In specific domains such as automotive systems, robots, and medical devices, deploying models directly onto embedded hardware is often necessary.\n",
        "\n",
        "* Enables real-time decision-making and control.\n",
        "* Requires significant model compression (such as quantization and pruning) and hardware-level optimization.\n",
        "* Commonly uses frameworks like TensorFlow Lite, ONNX Runtime, or specialized SDKs.\n",
        "\n",
        "When evaluating deployment options for real-world applications, always consider:\n",
        "\n",
        "* Performance and latency requirements\n",
        "* Data privacy and regulatory compliance\n",
        "* Implementation and maintenance complexity\n",
        "\n",
        "Choose a solution that ensures efficiency, scalability, and long-term sustainability.\n"
      ]
    },
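    {
      "cell_type": "markdown",
      "id": "d8f4a2b6",
      "metadata": {},
      "source": [
        "The edge-cloud split described above can be sketched as a tiny routing rule: simple, latency-sensitive intents stay on the device, while heavier requests are forwarded to the hosted LLM. The intent names and token threshold are illustrative assumptions, not part of any real product.\n",
        "\n",
        "```python\n",
        "# Minimal sketch of edge-cloud collaborative routing.\n",
        "# EDGE_INTENTS and the token threshold are hypothetical values.\n",
        "EDGE_INTENTS = {'wake_word', 'volume_up', 'volume_down', 'stop'}\n",
        "\n",
        "def route(intent, prompt_tokens):\n",
        "    '''Decide whether a request runs on-device or in the cloud.'''\n",
        "    if intent in EDGE_INTENTS:\n",
        "        return 'edge'       # handled by the small on-device model\n",
        "    if prompt_tokens <= 8:\n",
        "        return 'edge'       # trivial input: preprocess locally first\n",
        "    return 'cloud'          # deep reasoning: offload to the hosted LLM\n",
        "\n",
        "print(route('wake_word', 1))   # edge\n",
        "print(route('chat', 120))      # cloud\n",
        "```"
      ]
    },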
    {
      "cell_type": "markdown",
      "id": "c4afe05d",
      "metadata": {},
      "source": []
    }
  ],
  "metadata": {
    "kernelspec": {
      "display_name": "llm_learn",
      "language": "python",
      "name": "python3"
    },
    "language_info": {
      "codemirror_mode": {
        "name": "ipython",
        "version": 3
      },
      "file_extension": ".py",
      "mimetype": "text/x-python",
      "name": "python",
      "nbconvert_exporter": "python",
      "pygments_lexer": "ipython3",
      "version": "3.12.10"
    }
  },
  "nbformat": 4,
  "nbformat_minor": 5
}
