{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "bc12c0d2",
   "metadata": {},
   "source": [
    "# Quickstarts for LLM serving\n",
    "\n",
    "These guides provide a fast path to serving LLMs using Ray Serve on Anyscale, with focused tutorials for different deployment scales, from single-GPU setups to multi-node clusters.\n",
    "\n",
    "Each tutorial includes development and production setups, tips for configuring your cluster, and guidance on monitoring and scaling with Ray Serve.\n",
    "\n",
    "---\n",
    "\n",
    "## Why use Ray Serve for LLM serving?\n",
    "\n",
    "Ray Serve LLM provides production-grade features beyond what standalone vLLM offers:\n",
    "\n",
    "**Horizontal scaling**: Replicate your model across multiple GPUs or nodes and automatically balance traffic across replicas. As request volume grows, Ray Serve automatically adds more replicas to handle the load.\n",
    "\n",
    "**Production readiness**: Ray Serve provides built-in autoscaling, fault tolerance, rolling updates, and comprehensive monitoring through Grafana dashboards. The system handles replica failures gracefully and scales based on traffic patterns.\n",
    "\n",
    "**Multi-model serving**: Deploy multiple models with different configurations on the same cluster. Each model can have its own autoscaling policy and resource requirements.\n",
    "\n",
    "**Modular architecture**: Separate your application logic from infrastructure concerns. You can customize request routing, add authentication layers, or integrate with existing systems without modifying your model serving code.\n",
    "\n",
    "For simple single-GPU deployments or experimentation, standalone vLLM might be sufficient. However, for production workloads that need to scale, handle failures, or serve multiple models efficiently, Ray Serve provides the infrastructure you need.\n",
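    "\n",
    "As a sketch of the multi-model serving and autoscaling described above, two `LLMConfig` objects can share one cluster, each with its own autoscaling policy and resource requirements. The model IDs, sources, and accelerator types here are illustrative placeholders:\n",
    "\n",
    "```python\n",
    "from ray.serve.llm import LLMConfig\n",
    "\n",
    "# Each model gets its own autoscaling policy and GPU requirements.\n",
    "small_llm = LLMConfig(\n",
    "    model_loading_config=dict(\n",
    "        model_id=\"qwen-0.5b\",  # Placeholder ID that clients use to address the model.\n",
    "        model_source=\"Qwen/Qwen2.5-0.5B-Instruct\",\n",
    "    ),\n",
    "    accelerator_type=\"A10G\",  # Assumed GPU type.\n",
    "    deployment_config=dict(\n",
    "        autoscaling_config=dict(min_replicas=1, max_replicas=4),\n",
    "    ),\n",
    ")\n",
    "\n",
    "large_llm = LLMConfig(\n",
    "    model_loading_config=dict(\n",
    "        model_id=\"qwen-7b\",\n",
    "        model_source=\"Qwen/Qwen2.5-7B-Instruct\",\n",
    "    ),\n",
    "    accelerator_type=\"A100\",\n",
    "    deployment_config=dict(\n",
    "        # Scale to zero when idle; this policy is independent of small_llm's.\n",
    "        autoscaling_config=dict(min_replicas=0, max_replicas=2),\n",
    "    ),\n",
    ")\n",
    "```\n",
    "\n",
    "Passing both configs to `build_openai_app` serves them behind a single OpenAI-compatible endpoint, with each model scaling independently.\n",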
    "\n",
    "---\n",
    "\n",
    "## Understanding the Ray Serve LLM architecture\n",
    "\n",
    "Ray Serve LLM is built on two main components that work together to serve your model:\n",
    "\n",
    "**LLMServer**: A Ray Serve deployment that manages a vLLM engine instance. Each replica of this deployment:\n",
    "\n",
    "- Manages a single vLLM engine instance.\n",
    "- Handles GPU placement through Ray's placement groups.\n",
    "- Processes inference requests with continuous batching.\n",
    "- Exposes engine metrics for monitoring.\n",
    "\n",
    "**OpenAiIngress**: A FastAPI-based ingress deployment that:\n",
    "\n",
    "- Provides OpenAI-compatible API endpoints (`/v1/chat/completions`, etc.).\n",
    "- Routes requests to the appropriate LLMServer replicas.\n",
    "- Handles load balancing across multiple replicas.\n",
    "- Manages model multiplexing (for example, LoRA adapters).\n",
    "\n",
    "When you call `build_openai_app`, Ray Serve LLM creates both components and connects them automatically. The ingress receives HTTP requests and forwards them to available LLMServer replicas through deployment handles. This architecture enables:\n",
    "\n",
    "- **Independent scaling**: Scale the ingress and LLMServer independently based on CPU and GPU utilization.\n",
    "- **Fault tolerance**: Replica failures don't affect the entire service.\n",
    "- **Flexibility**: Customize routing logic or add authentication without modifying the model serving code.\n",
    "\n",
    "<img src=\"https://anyscale-materials.s3.us-west-2.amazonaws.com/public-images/ray-serve-llm/diagrams/llmserver.png\" width=\"800\">\n",
    "\n",
    "For detailed technical information, including diagrams of request flow and placement strategies, see the [Architecture overview](https://docs.ray.io/en/latest/serve/llm/architecture/overview.html).\n",
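    "\n",
    "To make the request flow above concrete, here's a minimal single-model sketch. `build_openai_app` constructs both components; the model ID, source, and accelerator type are placeholder assumptions:\n",
    "\n",
    "```python\n",
    "from ray import serve\n",
    "from ray.serve.llm import LLMConfig, build_openai_app\n",
    "\n",
    "llm_config = LLMConfig(\n",
    "    model_loading_config=dict(\n",
    "        model_id=\"my-llama\",  # Placeholder ID that clients reference in requests.\n",
    "        model_source=\"meta-llama/Llama-3.1-8B-Instruct\",\n",
    "    ),\n",
    "    accelerator_type=\"A10G\",  # Assumed GPU type.\n",
    "    engine_kwargs=dict(tensor_parallel_size=1),\n",
    ")\n",
    "\n",
    "# build_openai_app wires the OpenAI-compatible ingress to the LLMServer replicas.\n",
    "app = build_openai_app({\"llm_configs\": [llm_config]})\n",
    "serve.run(app, blocking=True)\n",
    "```\n",
    "\n",
    "Once running, any OpenAI-compatible client pointed at the Serve HTTP endpoint (for example, `http://localhost:8000/v1`) can query the model by its `model_id`.\n",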
    "\n",
    "## Tutorial categories\n",
    "\n",
    "**[Deploy a small-sized LLM](https://docs.ray.io/en/latest/serve/tutorials/deployment-serve-llm/small-size-llm/README.html)**  \n",
    "Deploy small-sized models on a single GPU, such as Llama 3 8&nbsp;B, Mistral 7&nbsp;B, or Phi-2.  \n",
    "\n",
    "---\n",
    "\n",
    "**[Deploy a medium-sized LLM](https://docs.ray.io/en/latest/serve/tutorials/deployment-serve-llm/medium-size-llm/README.html)**  \n",
    "Deploy medium-sized models using tensor parallelism across 4-8 GPUs on a single node, such as Llama 3 70&nbsp;B, Qwen 14&nbsp;B, or Mixtral 8x7&nbsp;B.  \n",
    "\n",
    "---\n",
    "\n",
    "**[Deploy a large-sized LLM](https://docs.ray.io/en/latest/serve/tutorials/deployment-serve-llm/large-size-llm/README.html)**  \n",
    "Deploy massive models using pipeline parallelism across a multi-node cluster, such as DeepSeek-R1 or Llama-Nemotron-253&nbsp;B.  \n",
    "\n",
    "---\n",
    "\n",
    "**[Deploy a vision LLM](https://docs.ray.io/en/latest/serve/tutorials/deployment-serve-llm/vision-llm/README.html)**  \n",
    "Deploy models with image and text input such as Qwen 2.5-VL-7&nbsp;B-Instruct, MiniGPT-4, or Pixtral-12&nbsp;B.  \n",
    "\n",
    "---\n",
    "\n",
    "**[Deploy a reasoning LLM](https://docs.ray.io/en/latest/serve/tutorials/deployment-serve-llm/reasoning-llm/README.html)**  \n",
    "Deploy models with reasoning capabilities designed for long-context tasks, coding, or tool use, such as QwQ-32&nbsp;B.  \n",
    "\n",
    "---\n",
    "\n",
    "**[Deploy a hybrid reasoning LLM](https://docs.ray.io/en/latest/serve/tutorials/deployment-serve-llm/hybrid-reasoning-llm/README.html)**  \n",
    "Deploy models that can switch between reasoning and non-reasoning modes for flexible usage, such as Qwen-3.\n",
    "\n",
    "---\n",
    "\n",
    "**[Deploy gpt-oss](https://docs.ray.io/en/latest/ray-overview/examples/deployment-serve-llm/gpt-oss/README.html)**  \n",
    "Deploy gpt-oss reasoning models for production-scale workloads, covering lower-latency (`gpt-oss-20b`) and high-reasoning (`gpt-oss-120b`) use cases."
   ]
  }
 ],
 "metadata": {
  "language_info": {
   "name": "python"
  },
  "myst": {
   "front_matter": {
    "orphan": true
   }
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
