{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false,
    "jupyter": {
     "outputs_hidden": false
    }
   },
   "source": [
    "# Tracing Guardrails with Jaeger\n",
    "\n",
     "NeMo Guardrails supports the OpenTelemetry ([OTEL](https://opentelemetry.io/)) standard, providing granular visibility into server-side latency. It automatically captures the latency of each LLM and API call, then exports this telemetry using OTEL. You can visualize this latency with any OTEL-compatible backend, including Grafana, Jaeger, Prometheus, SigNoz, New Relic, Datadog, and Honeycomb.\n",
    "\n",
    "In this notebook, you will learn how to use [Jaeger](https://www.jaegertracing.io/) to visualize NeMo Guardrails latency. Jaeger is a popular, open-source distributed tracing platform used to monitor production services. This notebook walks through the process in three stages:\n",
    "\n",
    "1.  Download and run Jaeger in standalone mode.\n",
    "2.  Configure NeMo Guardrails to emit metrics to Jaeger.\n",
    "3.  Run inferences and view the results in Jaeger.\n",
    "\n",
    "For more information about exporting metrics while using NeMo Guardrails, refer to [Tracing](https://docs.nvidia.com/nemo/guardrails/latest/user-guides/tracing/quick-start.html) in the Guardrails toolkit documentation.\n",
    "\n",
    "---\n",
    "\n",
    "## Prerequisites\n",
    "\n",
    "This notebook requires the following:\n",
    "\n",
    "- An NVIDIA NGC account and an NGC API key. You need to provide the key to the `NVIDIA_API_KEY` environment variable. To create a new key, go to [NGC API Key](https://org.ngc.nvidia.com/setup/api-key) in the NGC console.\n",
    "- Python 3.10 or later.\n",
     "- A running Docker daemon."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "-----\n",
    "\n",
    "## Running Jaeger in Local Mode Using Docker\n",
    "\n",
     "[Jaeger](https://www.jaegertracing.io/) is a popular tool for visualizing OpenTelemetry data and operating systems in production.\n",
    "\n",
    "Run the following command to create a standalone Docker container running Jaeger.\n",
    "\n",
    "```bash\n",
    "$ docker run --rm --name jaeger \\\n",
    "  -e COLLECTOR_ZIPKIN_HOST_PORT=:9411 \\\n",
    "  -p 6831:6831/udp \\\n",
    "  -p 6832:6832/udp \\\n",
    "  -p 5778:5778 \\\n",
    "  -p 16686:16686 \\\n",
    "  -p 4317:4317 \\\n",
    "  -p 4318:4318 \\\n",
    "  -p 14250:14250 \\\n",
    "  -p 14268:14268 \\\n",
    "  -p 14269:14269 \\\n",
    "  -p 9411:9411 \\\n",
    "  jaegertracing/all-in-one:1.62.0\n",
    "```\n",
    "\n",
     "The container prints debug messages that end with the following lines, indicating the Jaeger server is up and ready to accept requests over gRPC or HTTP on the corresponding ports.\n",
    "\n",
    "```bash\n",
    "{\"level\":\"info\",\"ts\":1756236324.295533,\"caller\":\"healthcheck/handler.go:118\",\"msg\":\"Health Check state change\",\"status\":\"ready\"}\n",
    "{\"level\":\"info\",\"ts\":1756236324.2955446,\"caller\":\"app/server.go:309\",\"msg\":\"Starting GRPC server\",\"port\":16685,\"addr\":\":16685\"}\n",
    "{\"level\":\"info\",\"ts\":1756236324.2955563,\"caller\":\"grpc@v1.67.1/server.go:880\",\"msg\":\"[core] [Server #7 ListenSocket #8]ListenSocket created\"}\n",
    "{\"level\":\"info\",\"ts\":1756236324.2955787,\"caller\":\"app/server.go:290\",\"msg\":\"Starting HTTP server\",\"port\":16686,\"addr\":\":16686\"}\n",
    "```"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Once the Docker container is up and running, open a web browser and navigate to http://localhost:16686/search. You should see the following screen. The **Service** dropdown is empty because no traces have been sent to the Jaeger server yet, so there is no data to visualize. You'll address this in the next section.\n",
    "\n",
    "<img src=\"./images/jaeger_blank.png\" width=\"500\"/>"
   ]
  },
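  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Optionally, you can confirm the Jaeger UI is reachable from Python before moving on. The sketch below uses only the standard library and assumes the default port mapping from the `docker run` command above; the `jaeger_ui_reachable` helper name is ours, not part of any library.\n",
    "\n",
    "```python\n",
    "import urllib.request\n",
    "from urllib.error import URLError\n",
    "\n",
    "\n",
    "def jaeger_ui_reachable(url: str = \"http://localhost:16686/search\", timeout: float = 5.0) -> bool:\n",
    "    \"\"\"Return True if the Jaeger UI answers with HTTP 200.\"\"\"\n",
    "    try:\n",
    "        with urllib.request.urlopen(url, timeout=timeout) as resp:\n",
    "            return resp.status == 200\n",
    "    except (URLError, OSError):\n",
    "        return False\n",
    "\n",
    "\n",
    "print(\"Jaeger UI reachable:\", jaeger_ui_reachable())\n",
    "```"
   ]
  },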
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "-----\n",
    "\n",
    "## Install and Import Packages\n",
    "\n",
    "Before you begin, install and import the following packages that you'll use in the notebook."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Requirement already satisfied: pip in /Users/tgasser/Library/Caches/pypoetry/virtualenvs/nemoguardrails-qkVbfMSD-py3.13/lib/python3.13/site-packages (25.2)\n"
     ]
    }
   ],
   "source": [
    "!pip install --upgrade pip"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-08-18T18:37:35.030465Z",
     "start_time": "2025-08-18T18:37:35.028290Z"
    },
    "scrolled": true
   },
   "outputs": [],
   "source": [
    "!pip install pandas plotly langchain_nvidia_ai_endpoints aiofiles opentelemetry-exporter-otlp -q"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-08-18T18:37:35.858952Z",
     "start_time": "2025-08-18T18:37:35.323139Z"
    }
   },
   "outputs": [],
   "source": [
    "# Import some useful modules\n",
    "import os\n",
    "from typing import Any, Dict, List"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-08-18T18:37:36.458565Z",
     "start_time": "2025-08-18T18:37:36.456308Z"
    }
   },
   "outputs": [],
   "source": [
    "# Check the NVIDIA_API_KEY environment variable is set\n",
    "assert os.getenv(\"NVIDIA_API_KEY\"), (\n",
    "    \"Please create a key at build.nvidia.com and set the NVIDIA_API_KEY environment variable\"\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "------\n",
    "\n",
    "## Guardrail Configurations\n",
    "\n",
     "You'll create Guardrails configurations that run three input rails (either sequentially or in parallel), generate an LLM response, and run an output rail on that response."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Models Configuration\n",
    "\n",
     "Store the model configurations in the dictionary format shown below. Each model configuration entry contains `type`, `engine`, and `model` fields:\n",
    "\n",
    "* **`type`**: This field identifies the task type of a model you want to use. The keyword `main` is reserved for the application LLM, which is responsible for generating a response to the client's request. Any other model names are referenced in the Guardrail flows to build specific workflows.\n",
    "* **`engine`**: This controls the library used to communicate with the model. The `nim` engine uses [`langchain_nvidia_ai_endpoints`](https://pypi.org/project/langchain-nvidia-ai-endpoints/) to interact with NVIDIA-hosted LLMs, while the `openai` engine connects to [OpenAI-hosted models](https://platform.openai.com/docs/models).\n",
    "* **`model`**: This is the name of the specific model you want to use for the task type.    "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [],
   "source": [
    "CONFIG_MODELS: List[Dict[str, str]] = [\n",
    "    {\n",
    "        \"type\": \"main\",\n",
    "        \"engine\": \"nim\",\n",
    "        \"model\": \"meta/llama-3.3-70b-instruct\",\n",
    "    },\n",
    "    {\n",
    "        \"type\": \"content_safety\",\n",
    "        \"engine\": \"nim\",\n",
    "        \"model\": \"nvidia/llama-3.1-nemoguard-8b-content-safety\",\n",
    "    },\n",
    "    {\n",
    "        \"type\": \"topic_control\",\n",
    "        \"engine\": \"nim\",\n",
    "        \"model\": \"nvidia/llama-3.1-nemoguard-8b-topic-control\",\n",
    "    },\n",
    "]"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Rails\n",
    "\n",
    "The `rails` configuration section defines a workflow that executes on every client request. The high-level sections are `input` for input rails, `output` for output rails, and `config` for any additional model condfiguration. Guardrails flows reference models defined in the `CONFIG_MODELS` variable above using the `$model=<model.type>` syntax. The following list describes each section in more detail:\n",
    "\n",
    "* `input`: Input rails run on the client request only. The config below uses three classifiers to predict whether a user request is safe, on-topic, or a jailbreak attempt. These rails can be run in parallel to reduce the latency. If any of the rails predicts an unsafe input, a refusal text is returned to the user, and no LLM generation is triggered.\n",
    "* `output`: Output rails run on both client request and the LLM response to that request. The example below checks whether the LLM response to the user request is safe to return. Output rails are needed as well as input because a safe request may give an unsafe response from the LLM if it interprets the request incorrectly. A refusal text is returned to the client if the response is unsafe.\n",
    "* `config`: Any configuration used outside of a Langchain LLM interface is included in this section. The [Jailbreak detection model](https://build.nvidia.com/nvidia/nemoguard-jailbreak-detect) uses an embedding model as a feature-generation step, followed by a Random Forest classifier to detect a jailbreak attempt."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [],
   "source": [
    "def config_rails(parallel: bool) -> Dict[str, Any]:\n",
    "    \"\"\"Create the rails configuration with programmable parallel setup\"\"\"\n",
    "    return {\n",
    "        \"input\": {\n",
    "            \"parallel\": parallel,\n",
    "            \"flows\": [\n",
    "                \"content safety check input $model=content_safety\",\n",
    "                \"topic safety check input $model=topic_control\",\n",
    "                \"jailbreak detection model\",\n",
    "            ],\n",
    "        },\n",
    "        \"output\": {\"flows\": [\"content safety check output $model=content_safety\"]},\n",
    "        \"config\": {\n",
    "            \"jailbreak_detection\": {\n",
    "                \"nim_base_url\": \"https://ai.api.nvidia.com\",\n",
    "                \"nim_server_endpoint\": \"/v1/security/nvidia/nemoguard-jailbreak-detect\",\n",
    "                \"api_key_env_var\": \"NVIDIA_API_KEY\",\n",
    "            }\n",
    "        },\n",
    "    }"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Tracing\n",
    "\n",
     "The tracing configuration selects the adapter and any adapter-specific controls. Here, metrics are sent over OpenTelemetry for visualization by another tool."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [],
   "source": [
    "CONFIG_TRACING = {\"enabled\": True, \"adapters\": [{\"name\": \"OpenTelemetry\"}]}"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Prompts\n",
    "\n",
     "Each NemoGuard model is fine-tuned for a specific task using a customized prompt. The prompts used at inference time must match the fine-tuning prompt for the best model performance. You'll load these prompts from other locations in the Guardrails repo and display them below.\n",
    "\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [],
   "source": [
    "import yaml\n",
    "\n",
    "\n",
    "def load_yaml_file(filename: str) -> Dict[str, Any]:\n",
    "    \"\"\"Load a YAML file\"\"\"\n",
    "\n",
    "    with open(filename, \"r\") as infile:\n",
    "        data = yaml.safe_load(infile)\n",
    "    return data"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {},
   "outputs": [],
   "source": [
    "content_safety_prompts = load_yaml_file(\"../../../examples/configs/content_safety/prompts.yml\")\n",
    "topic_safety_prompts = load_yaml_file(\"../../../examples/configs/topic_safety/prompts.yml\")\n",
    "all_prompts = content_safety_prompts[\"prompts\"] + topic_safety_prompts[\"prompts\"]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Loaded prompt tasks:\n",
      "content_safety_check_input $model=content_safety\n",
      "content_safety_check_output $model=content_safety\n",
      "content_safety_check_input $model=llama_guard\n",
      "content_safety_check_output $model=llama_guard_2\n",
      "content_safety_check_input $model=shieldgemma\n",
      "content_safety_check_output $model=shieldgemma\n",
      "topic_safety_check_input $model=topic_control\n"
     ]
    }
   ],
   "source": [
    "all_prompt_tasks = [prompt[\"task\"] for prompt in all_prompts]\n",
    "print(\"Loaded prompt tasks:\")\n",
    "print(\"\\n\".join(all_prompt_tasks))"
   ]
  },
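  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To see the exact text that will be sent to a NemoGuard model, you can look up one of the loaded prompt entries by task name. This is a small sketch: it assumes each entry is a dictionary with `task` and `content` keys, as in the `prompts.yml` files loaded above, and the `find_prompt` helper is ours.\n",
    "\n",
    "```python\n",
    "def find_prompt(prompts, task):\n",
    "    \"\"\"Return the first prompt entry whose task matches, or None.\"\"\"\n",
    "    return next((p for p in prompts if p.get(\"task\") == task), None)\n",
    "\n",
    "\n",
    "# Example with the prompts loaded above (uncomment to run in this notebook):\n",
    "# cs_prompt = find_prompt(all_prompts, \"content_safety_check_input $model=content_safety\")\n",
    "# print(cs_prompt[\"content\"][:500])\n",
    "```"
   ]
  },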
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Putting All Configurations Together\n",
    "\n",
     "Use the helper functions, model definitions, and prompts from the cells above to create the sequential and parallel configurations."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {},
   "outputs": [],
   "source": [
    "SEQUENTIAL_CONFIG = {\n",
    "    \"models\": CONFIG_MODELS,\n",
    "    \"rails\": config_rails(parallel=False),\n",
    "    \"tracing\": CONFIG_TRACING,\n",
    "    \"prompts\": all_prompts,\n",
    "}"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {},
   "outputs": [],
   "source": [
    "PARALLEL_CONFIG = {\n",
    "    \"models\": CONFIG_MODELS,\n",
    "    \"rails\": config_rails(parallel=True),\n",
    "    \"tracing\": CONFIG_TRACING,\n",
    "    \"prompts\": all_prompts,\n",
    "}"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "-------\n",
    "\n",
    "## Tracing Guardrails Requests\n",
    "\n",
     "In this section of the notebook, you'll set up OTEL tracing to export data to `http://localhost:4317`. The Jaeger server listens on this port for telemetry, stores it in memory, and makes it available for visualization."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-08-18T18:37:40.231716Z",
     "start_time": "2025-08-18T18:37:40.228434Z"
    },
    "collapsed": false,
    "jupyter": {
     "outputs_hidden": false
    }
   },
   "outputs": [],
   "source": [
    "import nest_asyncio\n",
    "\n",
     "# Allow nested event loops when running inside a Jupyter notebook\n",
    "nest_asyncio.apply()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "metadata": {},
   "outputs": [],
   "source": [
    "from opentelemetry import trace\n",
    "from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter\n",
    "from opentelemetry.sdk.resources import Resource\n",
    "from opentelemetry.sdk.trace import TracerProvider\n",
    "from opentelemetry.sdk.trace.export import BatchSpanProcessor\n",
    "\n",
    "# Configure OpenTelemetry before NeMo Guardrails\n",
    "resource = Resource.create({\"service.name\": \"my-guardrails-app\"})\n",
    "tracer_provider = TracerProvider(resource=resource)\n",
    "trace.set_tracer_provider(tracer_provider)\n",
    "\n",
     "# Export traces to the OTLP gRPC port (4317) opened by the Jaeger container\n",
    "otlp_exporter = OTLPSpanExporter(endpoint=\"http://localhost:4317\", insecure=True)\n",
    "tracer_provider.add_span_processor(BatchSpanProcessor(otlp_exporter))"
   ]
  },
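  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Optionally, you can emit a hand-made span before issuing any Guardrails requests to confirm the exporter is wired up. This uses the standard OpenTelemetry tracing API; the tracer and span names here are arbitrary choices, not required by NeMo Guardrails.\n",
    "\n",
    "```python\n",
    "from opentelemetry import trace\n",
    "\n",
    "# Emit a throwaway span so something shows up in Jaeger right away\n",
    "tracer = trace.get_tracer(\"jaeger-connectivity-test\")\n",
    "with tracer.start_as_current_span(\"connectivity-check\") as span:\n",
    "    span.set_attribute(\"purpose\", \"smoke-test\")\n",
    "\n",
    "# Flush immediately instead of waiting for the batch timer\n",
    "provider = trace.get_tracer_provider()\n",
    "if hasattr(provider, \"force_flush\"):\n",
    "    provider.force_flush()\n",
    "print(\"test span emitted\")\n",
    "```\n",
    "\n",
    "After running this, the span appears in Jaeger under the `my-guardrails-app` service once you refresh the search page."
   ]
  },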
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "### Running a Sequential Request\n",
     "\n",
     "To run a sequential request, create a `RailsConfig` object from the `SEQUENTIAL_CONFIG` dictionary defined above. With that in place, you can create an `LLMRails` object and use it to issue guardrail inference requests."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-08-18T18:37:41.172531Z",
     "start_time": "2025-08-18T18:37:40.773719Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[{'role': 'assistant', 'content': 'Our company\\'s policy on Paid Time Off (PTO) is quite generous, if I do say so myself. We believe that taking breaks and vacations is essential for our employees\\' well-being and productivity. \\n\\nAccording to our company handbook, full-time employees are eligible for 15 days of paid vacation per year, in addition to 10 paid holidays and 5 personal days. Part-time employees, on the other hand, accrue PTO at a rate of 1 hour for every 20 hours worked, up to a maximum of 40 hours per year.\\n\\nNow, here\\'s how it works: employees can start accruing PTO from their very first day of work, but they can\\'t take any time off until they\\'ve completed their 90-day probationary period. After that, they can request time off through our online portal, and their manager will review and approve the request.\\n\\nIt\\'s worth noting that we also offer a flexible PTO policy, which allows employees to take time off in increments as small as 30 minutes. We understand that sometimes, you just need to take a few hours off to attend to personal matters or simply recharge.\\n\\nWe also have a \"use it or lose it\" policy, where any unused PTO days will be forfeited at the end of the calendar year. However, employees can carry over up to 5 unused days to the next year, as long as they\\'ve accrued a minimum of 10 days in the previous year.\\n\\nOh, and one more thing: we observe all major holidays, including New Year\\'s Day, Memorial Day, Independence Day, Labor Day, Thanksgiving Day, and Christmas Day. On these days, our offices are closed, and employees are not expected to work.\\n\\nI hope that helps clarify our company\\'s PTO policy! Do you have any specific questions or scenarios you\\'d like me to address?'}]\n"
     ]
    }
   ],
   "source": [
    "from nemoguardrails import LLMRails, RailsConfig\n",
    "\n",
    "sequential_rails_config = RailsConfig.model_validate(SEQUENTIAL_CONFIG)\n",
    "sequential_rails = LLMRails(sequential_rails_config)\n",
    "\n",
    "safe_request = \"What is the company policy on PTO?\"\n",
    "\n",
    "response = await sequential_rails.generate_async(\n",
    "    messages=[\n",
    "        {\n",
    "            \"role\": \"user\",\n",
    "            \"content\": safe_request,\n",
    "        }\n",
    "    ]\n",
    ")\n",
    "\n",
    "print(response.response)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "### Running a Parallel Request\n",
     "\n",
     "Repeat the same request with the three input rails running in parallel rather than sequentially."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[{'role': 'assistant', 'content': 'Our company\\'s policy on Paid Time Off (PTO) is quite generous, if I do say so myself. We believe that taking breaks and vacations is essential for our employees\\' well-being and productivity. \\n\\nAccording to our company handbook, full-time employees are eligible for 15 days of paid vacation per year, in addition to 10 paid holidays and 5 personal days. Part-time employees, on the other hand, accrue PTO at a rate of 1 hour for every 20 hours worked, up to a maximum of 40 hours per year.\\n\\nNow, here\\'s how it works: employees can start accruing PTO from their very first day of work, but they can\\'t take any time off until they\\'ve completed their 90-day probationary period. After that, they can start requesting time off, and we encourage them to give us as much notice as possible so we can make sure to cover their responsibilities while they\\'re away.\\n\\nWe also offer a flexible PTO policy, which allows employees to take time off in increments as small as a half-day. And, if an employee needs to take an extended leave of absence for a family or medical emergency, we have a separate policy in place to support them.\\n\\nOne thing to note is that PTO accrues throughout the year, but it doesn\\'t roll over to the next year if it\\'s not used. So, employees should make sure to use their PTO before the end of the year, or they\\'ll lose it. We do, however, offer a \"cash-out\" option for unused PTO at the end of the year, which can be a nice little bonus for employees who haven\\'t taken all their time off.\\n\\nI hope that helps clarify our company\\'s PTO policy! Do you have any specific questions or scenarios you\\'d like me to address?'}]\n"
     ]
    }
   ],
   "source": [
    "from nemoguardrails import LLMRails, RailsConfig\n",
    "\n",
    "parallel_rails_config = RailsConfig.model_validate(PARALLEL_CONFIG)\n",
    "parallel_rails = LLMRails(parallel_rails_config)\n",
    "\n",
    "response = await parallel_rails.generate_async(\n",
    "    messages=[\n",
    "        {\n",
    "            \"role\": \"user\",\n",
    "            \"content\": safe_request,\n",
    "        }\n",
    "    ]\n",
    ")\n",
    "\n",
    "print(response.response)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Now that you've run both sequential and parallel Guardrails on an identical request, you can visualize the results in Jaeger in the next section."
   ]
  },
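  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To quantify the difference without opening Jaeger, you can also time the two calls directly. This is a rough wall-clock sketch (network latency will dominate the variance); the `timed_request` helper is ours and works with the `sequential_rails` and `parallel_rails` objects created above.\n",
    "\n",
    "```python\n",
    "import time\n",
    "\n",
    "\n",
    "async def timed_request(rails, message: str) -> float:\n",
    "    \"\"\"Return the wall-clock seconds taken by one generate_async call.\"\"\"\n",
    "    start = time.perf_counter()\n",
    "    await rails.generate_async(messages=[{\"role\": \"user\", \"content\": message}])\n",
    "    return time.perf_counter() - start\n",
    "\n",
    "\n",
    "# Example with the rails objects created above (uncomment to run in this notebook):\n",
    "# seq_s = await timed_request(sequential_rails, safe_request)\n",
    "# par_s = await timed_request(parallel_rails, safe_request)\n",
    "# print(f\"sequential: {seq_s:.2f}s, parallel: {par_s:.2f}s\")\n",
    "```"
   ]
  },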
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "-------\n",
    "\n",
    "## Visualize Guardrails Traces in Jaeger\n",
    "\n",
     "You will now visualize the sequential and parallel traces using Jaeger. Refresh the page at http://localhost:16686/search, click the **Service** drop-down, and select \"my-guardrails-app\". Then click the **Find Traces** button at the bottom of the left sidebar. You'll see two \"my-guardrails-app:guardrails.request\" items in the Traces section.\n",
     "\n",
     "These are listed with the most recent at the top and the oldest at the bottom. The top entry is the parallel call and the bottom entry is the sequential call. Clicking each one brings up the corresponding visualization shown below."
   ]
  },
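  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "You can also pull the same traces programmatically through the HTTP query API served on the Jaeger UI port. This is a sketch under two assumptions: the default port mapping from earlier, and the `my-guardrails-app` service name set in the OTEL resource. Note that `/api/traces` is the UI's internal API rather than a stable public contract, and both helper functions below are ours.\n",
    "\n",
    "```python\n",
    "import json\n",
    "import urllib.request\n",
    "\n",
    "\n",
    "def traces_url(service: str, base_url: str = \"http://localhost:16686\", limit: int = 20) -> str:\n",
    "    \"\"\"Build the Jaeger query-API URL for a service's recent traces.\"\"\"\n",
    "    return f\"{base_url}/api/traces?service={service}&limit={limit}\"\n",
    "\n",
    "\n",
    "def fetch_traces(service: str) -> list:\n",
    "    \"\"\"Return the traces Jaeger has recorded for a service.\"\"\"\n",
    "    with urllib.request.urlopen(traces_url(service), timeout=5) as resp:\n",
    "        return json.load(resp)[\"data\"]\n",
    "\n",
    "\n",
    "# Example (uncomment with Jaeger running and traces recorded):\n",
    "# traces = fetch_traces(\"my-guardrails-app\")\n",
    "# print(f\"{len(traces)} traces; spans in first: {len(traces[0]['spans'])}\")\n",
    "```"
   ]
  },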
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "#### Sequential Trace\n",
     "\n",
     "The trace below shows the sequential rail execution. Each step has two components: `guardrails.rail` and `guardrails.action`. Each `guardrails.action` may have an LLM call underneath it, for example `content_safety_check_input`, `topic_safety_check_input`, `general`, or `content_safety_check_output`.\n",
     "\n",
     "The three input rails run sequentially in this example, taking around 500-700 ms each. This is a safe prompt, so it is passed on to the Application LLM, which generates a response in 7.85 s. Finally, the content-safety output rail runs in 560 ms.\n",
    "\n",
    "\n",
    "<img src=\"./images/jaeger_sequential.png\" width=\"800\"/>"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "#### Parallel Trace\n",
     "\n",
     "The parallel trace runs the Content-Safety, Topic-Control, and Jailbreak input rails concurrently rather than one after another. Guardrails waits until all three complete; once the checks pass, the Application LLM starts generating a response. Finally, the content-safety output check runs on the LLM response.\n",
    "\n",
    "<img src=\"./images/jaeger_parallel.png\" width=\"800\"/>"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "-----\n",
    "\n",
     "## Conclusion\n",
     "\n",
     "In this notebook, you learned how to trace Guardrails requests in both **sequential** and **parallel** modes, using Jaeger to visualize the results. While this notebook used a local, in-memory Jaeger Docker container, a production-grade Jaeger deployment provides the same functionality."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Cleanup steps\n",
    "\n",
     "Once you've finished experimenting with Jaeger and Guardrails tracing, clean up the Docker container with the commands below.\n",
     "\n",
     "First, find the Jaeger container ID:\n",
    "\n",
    "```\n",
    "$ docker ps\n",
    "CONTAINER ID   IMAGE                             COMMAND                  CREATED              STATUS              PORTS                                                                                                                                                                                                                    NAMES\n",
    "76215286b61b   jaegertracing/all-in-one:1.62.0   \"/go/bin/all-in-one-…\"   About a minute ago   Up About a minute   0.0.0.0:4317-4318->4317-4318/tcp, 0.0.0.0:5778->5778/tcp, 0.0.0.0:9411->9411/tcp, 0.0.0.0:14250->14250/tcp, 0.0.0.0:14268-14269->14268-14269/tcp, 0.0.0.0:16686->16686/tcp, 5775/udp, 0.0.0.0:6831-6832->6831-6832/udp   jaeger\n",
    "```\n",
    "\n",
     "Now, copy the container ID and run the command below. (Because the container was started with `--name jaeger`, you can also simply run `docker kill jaeger`.)\n",
    "\n",
    "```\n",
    "$ docker kill 76215286b61b\n",
    "76215286b61b\n",
    "```"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.13.2"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
