{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "257c1153",
   "metadata": {},
   "source": [
    "## Model Selection and Parameter Optimization"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "9fbf6a12",
   "metadata": {},
   "source": [
    "In this notebook, we will demonstrate how the NVIDIA NeMo Agent toolkit (NAT) optimizer can be used to create a robust model evaluation, comparison, and selection pipeline for custom datasets.\n",
    "\n",
    "**Goal**:\n",
    "\n",
    "By the end of this notebook, you will be able to:\n",
    "- Build an LLM-as-a-judge evaluation for a simple chat workflow: define evaluators and optimizer settings, create an eval dataset, run the optimizer, and interpret results.\n",
    "- Select optimal backbone models and parameters for a tool-calling agent (Alert Triage Agent): configure, test, evaluate, optimize, and re-evaluate.\n",
    "- Perform concurrent numeric tuning (models, hyperparameters) and prompt tuning using the genetic optimizer, then compare before and after results.\n",
    "- Weigh trade-offs across accuracy, groundedness, relevance, latency, and token efficiency, and export an optimized config for downstream production use."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "328ee544",
   "metadata": {},
   "source": [
    "## Table of Contents\n",
    " \n",
    "- [0.0) Setup](#setup)\n",
    "  - [0.1) Prerequisites](#prereqs)\n",
    "  - [0.2) API Keys](#api-keys)\n",
    "  - [0.3) Installing NeMo Agent Toolkit](#install-nat)\n",
    "  - [0.4) Additional dependencies](#deps)\n",
    "- [1.0) LLM-as-a-judge with NAT](#llm-judge-h1)\n",
    "  - [1.1) Create a new workflow](#new-workflow)\n",
    "  - [1.2) Head-to-head comparison of multiple LLMs using eval](#nat-eval)\n",
    "    - [1.2.1) LLM-as-a-judge workflow config](#config)\n",
    "    - [1.2.2) Add optimizer settings to the configuration](#optimizer-settings)\n",
    "    - [1.2.3) Create an eval dataset](#dataset)\n",
    "    - [1.2.4) Run the optimizer](#optimize-first)\n",
    "    - [1.2.5) Interpret first optimizer run](#interpret-optimizer-first)\n",
    "- [2.0) Optimized model and parameter selection for tool-calling agents](#optimize-tool-calling-agents)\n",
    "  - [2.1) Create a tool-calling agent](#create-triage-agent)\n",
    "  - [2.2) Configure the tool-calling agent](#configure-triage-agent)\n",
    "  - [2.3) Test the tool-calling agent](#test-triage-agent)\n",
    "  - [2.4) Evaluate the tool-calling agent](#eval-triage-agent1)\n",
    "  - [2.5) Optimize the tool-calling agent's LLM](#optimize-triage-agent)\n",
    "  - [2.6) Re-evaluate the optimized tool-calling agent](#eval-triage-agent2)\n",
    "- [3.0) Concurrent model parameter and prompt tuning](#model-and-prompt-tuning)\n",
    "  - [3.1) Optimizer configuration for all parameters (models, hyperparameters, and prompts)](#all-tuning-config)\n",
    "  - [3.2) Evaluate the agent](#all-tuning-initial-eval)\n",
    "  - [3.3) Optimize the agent](#all-tuning-optimize)\n",
    "  - [3.4) Re-evaluate the optimized tool-calling agent](#eval-triage-agent2)\n",
    "- [4.0) Next steps](#next-steps)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d46297fd",
   "metadata": {},
   "source": [
    "<a id=\"setup\"></a>\n",
    "# 0.0) Setup"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "a8421ac5",
   "metadata": {},
   "source": [
    "<a id=\"prereqs\"></a>\n",
    "## 0.1) Prerequisites\n",
    "\n",
    "We strongly recommend beginning this notebook with a working understanding of NAT workflows. Please work through the earlier notebooks in this series before starting this one.\n",
    "\n",
    "- **Platform:** Linux, macOS, or Windows\n",
    "- **Python:** version 3.11, 3.12, or 3.13\n",
    "- **Python Packages:** `pip`"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "248a4ad7",
   "metadata": {},
   "source": [
    "<a id=\"api-keys\"></a>\n",
    "## 0.2) API Keys"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "1b790034",
   "metadata": {},
   "source": [
    "For this notebook, you will need the following API keys to run all examples end-to-end:\n",
    "\n",
    "- **NVIDIA Build:** You can obtain an NVIDIA Build API Key by creating an [NVIDIA Build](https://build.nvidia.com) account and generating a key at https://build.nvidia.com/settings/api-keys\n",
    "\n",
    "Then you can run the cell below:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b4ff151d",
   "metadata": {},
   "outputs": [],
   "source": [
    "import getpass\n",
    "import os\n",
    "\n",
    "if \"NVIDIA_API_KEY\" not in os.environ:\n",
    "    nvidia_api_key = getpass.getpass(\"Enter your NVIDIA API key: \")\n",
    "    os.environ[\"NVIDIA_API_KEY\"] = nvidia_api_key"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "18d98208",
   "metadata": {},
   "source": [
    "<a id=\"install-nat\"></a>\n",
    "## 0.3) Installing NeMo Agent Toolkit"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c8b6b321",
   "metadata": {},
   "source": [
    "The recommended way to install NAT is through `pip` or `uv pip`.\n",
    "\n",
    "First, we will install `uv` which offers parallel downloads and faster dependency resolution."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "8b8855c8",
   "metadata": {},
   "outputs": [],
   "source": [
    "!pip install uv"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d1109b11",
   "metadata": {},
   "source": [
    "NeMo Agent toolkit can be installed through the PyPI `nvidia-nat` package.\n",
    "\n",
    "There are several optional subpackages available for NAT. For this example, we will rely on two subpackages:\n",
    "* The `nvidia-nat[langchain]` subpackage contains components for integrating with [LangChain](https://python.langchain.com/docs/introduction/).\n",
    "* The `nvidia-nat[profiling]` subpackage contains components for profiling and performance analysis."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "dc18e8ef",
   "metadata": {},
   "outputs": [],
   "source": [
    "%%bash\n",
    "uv pip show -q \"nvidia-nat-langchain\"\n",
    "nat_langchain_installed=$?\n",
    "uv pip show -q \"nvidia-nat-profiling\"\n",
    "nat_profiling_installed=$?\n",
    "if [[ ${nat_langchain_installed} -ne 0 || ${nat_profiling_installed} -ne 0 ]]; then\n",
    "    uv pip install \"nvidia-nat[langchain,profiling]\"\n",
    "else\n",
    "    echo \"nvidia-nat[langchain,profiling] is already installed\"\n",
    "fi"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "07a34464",
   "metadata": {},
   "source": [
    "<a id=\"deps\"></a>\n",
    "## 0.4) Additional dependencies"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "d73082df",
   "metadata": {},
   "outputs": [],
   "source": [
    "# needed for the alert triage agent used later\n",
    "!uv pip install ansible-runner"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "2a71e5cc",
   "metadata": {},
   "source": [
    "<div style=\"color: red; font-style: italic;\">\n",
    "<strong>Note:</strong> Uncomment and run this cell to install git-lfs if using Google Colab.\n",
    "</div>"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "d95af680",
   "metadata": {},
   "outputs": [],
   "source": [
    "# !apt-get update\n",
    "# !apt-get install git git-lfs -y\n",
    "# !git lfs install"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "7ba3615a",
   "metadata": {},
   "source": [
    "<a id=\"llm-judge-h1\"></a>\n",
    "# 1.0) LLM-as-a-judge with NAT"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "98db2f3a",
   "metadata": {},
   "source": [
    "The `nat eval` and `nat optimize` utilities enable developers to easily integrate LLM-as-a-judge capabilities with their workflows. `nat eval` allows for simple evaluations of a NAT workflow against an eval dataset. `nat optimize` extends this functionality by integrating with the **Optuna** library to perform grid and stochastic parameter sweeps and evaluations to identify optimal configurations for a task.\n",
    "\n",
    "**Note:** _In this notebook, we will primarily demonstrate how to use `nat optimize` to identify a potentially optimal set of parameters for a NAT workflow. We assume you already have a solid understanding of ML model evaluation, as we will not cover cross-validation or splitting datasets into train, validation, and test sets. Python's [scikit-learn](https://scikit-learn.org/stable/) documentation is a strong reference for these concepts._"
   ]
  },
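  {
   "cell_type": "markdown",
   "id": "f3a9c210",
   "metadata": {},
   "source": [
    "As a minimal, hypothetical sketch of the dataset-splitting idea mentioned in the note above (plain Python, no scikit-learn dependency), a shuffled train/validation/test split might look like:\n",
    "\n",
    "```python\n",
    "import random\n",
    "\n",
    "def split_dataset(items, val_frac=0.2, test_frac=0.2, seed=42):\n",
    "    # Shuffle a copy so the original list is untouched, then slice.\n",
    "    shuffled = list(items)\n",
    "    random.Random(seed).shuffle(shuffled)\n",
    "    n = len(shuffled)\n",
    "    n_test = int(n * test_frac)\n",
    "    n_val = int(n * val_frac)\n",
    "    test = shuffled[:n_test]\n",
    "    val = shuffled[n_test:n_test + n_val]\n",
    "    train = shuffled[n_test + n_val:]\n",
    "    return train, val, test\n",
    "\n",
    "train, val, test = split_dataset(list(range(10)))\n",
    "print(len(train), len(val), len(test))  # 6 2 2\n",
    "```\n",
    "\n",
    "In practice you would tune against the validation split and reserve the test split for a final, untouched evaluation."
   ]
  },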
  {
   "cell_type": "markdown",
   "id": "5114d358",
   "metadata": {},
   "source": [
    "<a id=\"new-workflow\"></a>\n",
    "## 1.1) Create a new workflow\n",
    "\n",
    "Create a basic chat completion workflow (using LangChain chat completions on the backend)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "e94f46ff",
   "metadata": {},
   "outputs": [],
   "source": [
    "!nat workflow create tmp_workflow --description \"A simple chat completion workflow to compare model performance\""
   ]
  },
  {
   "cell_type": "markdown",
   "id": "6c757d63",
   "metadata": {},
   "source": [
    "Next, let's write a basic configuration for this workflow, defining the backbone LLM and the `chat_completion` workflow entry point."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f53d5365",
   "metadata": {},
   "outputs": [],
   "source": [
    "%%writefile ./tmp_workflow/configs/config_a.yml\n",
    "llms:\n",
    "  nim_llm:\n",
    "    _type: nim\n",
    "    model_name: meta/llama-3.1-8b-instruct\n",
    "    temperature: 0.7\n",
    "    max_tokens: 1024\n",
    "\n",
    "workflow:\n",
    "  _type: chat_completion  # Use the type directly\n",
    "  system_prompt: |\n",
    "    You are a helpful AI assistant. Provide clear, accurate, and helpful\n",
    "    responses to user queries. Be concise and informative.\n",
    "  llm_name: nim_llm"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "1f1a0afc",
   "metadata": {},
   "source": [
    "Now let's run this workflow for a simple Q&A example..."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "e6510270",
   "metadata": {},
   "outputs": [],
   "source": [
    "!nat run --config_file tmp_workflow/configs/config_a.yml --input \"Suggest a single name for my new dog\""
   ]
  },
  {
   "cell_type": "markdown",
   "id": "0a740010",
   "metadata": {},
   "source": [
    "<a id=\"nat-eval\"></a>\n",
    "## 1.2) Head-to-head comparison of multiple LLMs using eval"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "aba364eb",
   "metadata": {},
   "source": [
    "Now that we've made a new workflow and shown that it works for a cursory `nat run` example, we will begin to build out an LLM-as-a-judge evaluation with trace profiling enabled for additional observability. In this next section, we are going to update the workflow configuration for evaluation and profiling.\n",
    "\n",
    "Step-by-step instructions can be found in [4_observability_evaluation_and_profiling.ipynb](./4_observability_evaluation_and_profiling.ipynb). An end-to-end example of using the Optimizer can be viewed in the [Email Phishing Analyzer](https://github.com/NVIDIA/NeMo-Agent-Toolkit/blob/develop/examples/evaluation_and_profiling/email_phishing_analyzer/src/nat_email_phishing_analyzer/configs/config_optimizer.yml).\n",
    "\n",
    "The profiler instruments and measures your workflow's performance, while evaluators judge the quality of the outputs. They're separate concepts, so they belong in different sections of the config!\n",
    "\n",
    "In this next step we will combine the eval and profile configuration into a single config for brevity."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f222012b",
   "metadata": {},
   "source": [
    "<a id=\"config\"></a>\n",
    "### 1.2.1) LLM-as-a-judge workflow config\n",
    "\n",
    "In the cell below we extend our initial workflow configuration with `eval` and profiler settings; the `optimizer` section will be appended in the next step.\n",
    "\n",
    "Key components of this configuration:\n",
    "\n",
    "**LLM Configuration:**\n",
    "- `chat_completion_llm`: The backbone LLM that powers the workflow\n",
    "- `optimizable_params`: Specifies which parameters the optimizer can tune (model name, temperature)\n",
    "- `search_space`: Defines the values the optimizer will explore during optimization\n",
    "\n",
    "**Judge LLM:**\n",
    "- `nim_judge_llm`: A separate, more capable LLM (meta/llama-3.1-405b-instruct) used by the evaluator to assess the quality of the workflow's outputs\n",
    "  - This LLM acts as an \"LLM-as-a-judge\" to score responses\n",
    "\n",
    "**Evaluation Components:**\n",
    "- `evaluators`: Define metrics to measure workflow quality (for example, accuracy, relevance)\n",
    "- `profiler`: Instruments the workflow to collect performance metrics (latency, token usage, costs)\n",
    "\n",
    "**Optimizer Components:**\n",
    "- `reps_per_param_set`: Number of times to evaluate each parameter combination for statistical reliability\n",
    "- `grid_search`: Strategy for exploring the search space (tests all combinations)\n",
    "- `eval_metrics`: Metrics used to guide optimization decisions (for example, maximize accuracy while minimizing cost)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "9f354066",
   "metadata": {},
   "outputs": [],
   "source": [
    "%%writefile tmp_workflow/configs/config_b.yml\n",
    "llms:\n",
    "  chat_completion_llm:\n",
    "    _type: nim\n",
    "    model_name: meta/llama-3.1-8b-instruct\n",
    "    temperature: 0.0\n",
    "    max_tokens: 1024\n",
    "    optimizable_params:\n",
    "      - model_name\n",
    "      - temperature\n",
    "    search_space:\n",
    "      model_name:\n",
    "        values:\n",
    "          - meta/llama-3.1-8b-instruct\n",
    "          - meta/llama-3.1-70b-instruct\n",
    "      temperature:\n",
    "        values:\n",
    "          - 0.0\n",
    "          - 0.7\n",
    "\n",
    "  # Judge LLM for accuracy evaluation\n",
    "  nim_judge_llm:\n",
    "    _type: nim\n",
    "    model_name: meta/llama-3.1-405b-instruct\n",
    "    temperature: 0.0\n",
    "    max_tokens: 8  # RAGAS accuracy only needs a score (0-1)\n",
    "\n",
    "workflow:\n",
    "  _type: chat_completion\n",
    "  system_prompt: |\n",
    "    You are a helpful AI assistant. Provide clear, accurate, and helpful\n",
    "    responses to user queries. Be concise and informative.\n",
    "  llm_name: chat_completion_llm\n",
    "\n",
    "general:\n",
    "  telemetry:\n",
    "    logging:\n",
    "      console:\n",
    "        _type: console\n",
    "        level: INFO\n",
    "\n",
    "eval:\n",
    "  general:\n",
    "    output_dir: ./tmp_workflow/eval_output\n",
    "    verbose: true\n",
    "    dataset:\n",
    "        _type: json\n",
    "        file_path: ./tmp_workflow/data/eval_data.json\n",
    "\n",
    "  evaluators:\n",
    "    answer_accuracy:\n",
    "      _type: ragas\n",
    "      metric: AnswerAccuracy\n",
    "      llm_name: nim_judge_llm\n",
    "    llm_latency:\n",
    "      _type: avg_llm_latency\n",
    "    token_efficiency:\n",
    "      _type: avg_tokens_per_llm_end\n",
    "\n",
    "  profiler:\n",
    "      token_uniqueness_forecast: true\n",
    "      workflow_runtime_forecast: true\n",
    "      compute_llm_metrics: true\n",
    "      csv_exclude_io_text: true\n",
    "      prompt_caching_prefixes:\n",
    "        enable: true\n",
    "        min_frequency: 0.1\n",
    "      bottleneck_analysis:\n",
    "        enable_nested_stack: true\n",
    "      concurrency_spike_analysis:\n",
    "        enable: true\n",
    "        spike_threshold: 7\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "11e04758",
   "metadata": {},
   "source": [
    "<a id=\"optimizer-settings\"></a>\n",
    "### 1.2.2) Add optimizer settings to the configuration\n",
    "\n",
    "**For a complete reference of all optimizer configuration parameters, see the [Optimizer documentation](../../docs/source/reference/optimizer.md) or the [develop branch on GitHub](https://github.com/NVIDIA/NeMo-Agent-Toolkit/blob/develop/docs/source/reference/optimizer.md).**\n",
    "\n",
    "\n",
    "\n",
    "Next, we will append the optimizer-specific settings to our configuration file under the \"optimizer\" section. The following describes the purpose and configurability of each.\n",
    "\n",
    "**Top-Level Settings**\n",
    "\n",
    "`output_path` - Specifies where all optimization results will be saved\n",
    "\n",
    "Files created here:\n",
    "- `optimized_config.yml` - The best configuration found\n",
    "- `trials_dataframe_params.csv` - Detailed results from all trials\n",
    "- `config_numeric_trial_{N}.yml` - Individual trial configurations\n",
    "- `plots/` - Pareto front visualizations (if multiple metrics)\n",
    "\n",
    "`reps_per_param_set: 10`\n",
    "\n",
    "> What it does: Number of times to run your workflow with each parameter configuration. This matters because LLMs are non-deterministic (the same input can give different outputs), so we often want to measure performance over a larger sample.\n",
    "> \n",
    "> How it works:\n",
    "> - If testing 5 different configurations × 10 reps = 50 total workflow runs\n",
    "> - Results are averaged across the 10 runs for statistical reliability\n",
    "> \n",
    "> Trade-off:\n",
    "> - Higher reps = more reliable results, but slower optimization and more compute\n",
    "> - Lower reps = faster and cheaper, but less confidence in which config is truly better\n",
    "\n",
    "**Evaluation Metrics (`eval_metrics`)**\n",
    "\n",
    "This section defines what you're optimizing for. You can have multiple objectives.\n",
    "\n",
    "- `accuracy` (custom name, you choose this)\n",
    "- `token_efficiency` (another custom name)\n",
    "- `latency` (another custom name)\n",
    "\n",
    "Key Concepts:\n",
    "- `evaluator_name`: References an evaluator you've defined elsewhere in your config (must match exactly)\n",
    "- `direction`:\n",
    "  - `maximize` - Higher scores are better (accuracy, precision, F1)\n",
    "  - `minimize` - Lower scores are better (latency, cost, error rate)\n",
    "- Multi-objective optimization: With 3 metrics here, the optimizer finds configurations that balance all three goals (Pareto optimization)\n",
    "  - `weight` - coefficient of relative importance for the optimizer (defaults to 1.0)\n",
    "\n",
    "**Numeric Optimization (`numeric`)**\n",
    "\n",
    "Controls how numeric (and categorical) parameters are optimized (uses Optuna library).\n",
    "\n",
    "`enabled: true`\n",
    "\n",
    "> What it does: Turns on optimization of numeric parameters (like `temperature`, `max_tokens`, model selection)\n",
    "> \n",
    "> When to enable: When you have optimizable parameters marked with `OptimizableField()` in your config\n",
    "> \n",
    "> When to disable: If you only want to optimize prompts, or run a single evaluation\n",
    "\n",
    "`sampler: grid`\n",
    "\n",
    "> What it does: Determines the search strategy for finding the best parameters\n",
    "> \n",
    "> Options:\n",
    "> - `grid` - Exhaustive search: Tests every combination of parameter values\n",
    ">   - Use when: Small search space, want guaranteed best result\n",
    ">   - Example: 3 models × 2 temperatures = 6 combinations\n",
    "> - `bayesian` or `null` - Smart search: Uses Bayesian optimization to intelligently sample promising areas\n",
    ">   - Use when: Large search space, limited time/budget\n",
    ">   - Example: Continuous ranges like temperature 0.0-1.0\n",
    "> \n",
    "> Must specify either:\n",
    "> - Explicit values: `[0.5, 0.7, 0.9]`, OR\n",
    "> - Range with step: `low: 0.0, high: 1.0, step: 0.1`\n",
    "\n",
    "**Prompt Optimization (`prompt`)**\n",
    "\n",
    "Controls genetic algorithm-based prompt optimization.\n",
    "\n",
    "`enabled: false`\n",
    "\n",
    "> What it does: Turns on and off LLM-based prompt evolution\n",
    "> \n",
    "> When to enable: When you want to optimize the actual text of prompts (like system prompts)\n",
    "> \n",
    "> When to disable:\n",
    "> - Comparing models and numeric parameters only (like this example)\n",
    "> - Don't have prompt parameters marked for optimization\n",
    "> - Want faster results (prompt optimization is slower)\n",
    "> \n",
    "> Requires:\n",
    "> - Prompt parameters marked with `OptimizableField(space=SearchSpace(is_prompt=True))`\n",
    "> - LLM functions for generating prompt variations\n",
    "\n",
    "**How This Configuration Works Together**\n",
    "\n",
    "With this specific config, here's what happens:\n",
    "\n",
    "Optimizer will:\n",
    "- Test different parameter combinations (models, settings, etc.)\n",
    "- Run each combination 10 times for reliability\n",
    "- Measure 3 things: accuracy (↑), token efficiency (↓), latency (↓)\n",
    "- Use grid search to test every combination systematically\n",
    "- Skip prompt optimization (only testing model/parameter combinations)\n",
    "\n",
    "Example workflow for this notebook's search space (2 models × 2 temperatures):\n",
    "- Total unique configurations: 4\n",
    "- Runs per config: 10\n",
    "- Total workflow runs: 40\n",
    "- Result: Best config balancing accuracy, cost, and speed\n",
    "\n",
    "Output:\n",
    "- One \"best\" configuration file\n",
    "- Detailed comparison of all tested configurations\n",
    "- Visualizations showing trade-offs between metrics"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "050f6c91",
   "metadata": {},
   "outputs": [],
   "source": [
    "%%writefile -a tmp_workflow/configs/config_b.yml\n",
    "optimizer:\n",
    "  output_path: ./tmp_workflow/eval_output/optimizer/\n",
    "  reps_per_param_set: 10 # Number of times to evaluate EACH config (for statistical significance)\n",
    "  eval_metrics: # specifies which evaluation metrics to optimize for\n",
    "    accuracy: # custom name for the metric\n",
    "      evaluator_name: answer_accuracy  # References the evaluator defined under the 'eval' section\n",
    "      direction: maximize\n",
    "      weight: 1.0 # coefficient of relative importance for the optimizer (defaults to 1.0)\n",
    "    token_efficiency: # custom name for the metric\n",
    "      evaluator_name: token_efficiency # References the evaluator defined under the 'eval' section\n",
    "      direction: minimize\n",
    "      weight: 1.0\n",
    "    latency: # custom name for the metric\n",
    "      evaluator_name: llm_latency # References the evaluator defined under the 'eval' section\n",
    "      direction: minimize\n",
    "      weight: 1.0\n",
    "\n",
    "  numeric:\n",
    "    enabled: true # enables numeric and categorical parameters to be optimized\n",
    "    sampler: grid # uses Optuna GridSearch to determine the unique parameter sets to evaluate\n",
    "\n",
    "  prompt:\n",
    "    enabled: false  # Disable for pure model and hyperparameter comparison"
   ]
  },
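  {
   "cell_type": "markdown",
   "id": "b7d41e92",
   "metadata": {},
   "source": [
    "With three metrics and mixed directions, the optimizer performs multi-objective (Pareto) optimization: a configuration survives only if no other configuration beats it on every metric at once. A hypothetical sketch of Pareto dominance (not the toolkit's actual implementation, which delegates to Optuna):\n",
    "\n",
    "```python\n",
    "def dominates(a, b, directions):\n",
    "    # a dominates b if a is at least as good on every metric and\n",
    "    # strictly better on at least one ('max' or 'min' per metric).\n",
    "    at_least_as_good = all(\n",
    "        (x >= y) if d == 'max' else (x <= y)\n",
    "        for x, y, d in zip(a, b, directions)\n",
    "    )\n",
    "    strictly_better = any(\n",
    "        (x > y) if d == 'max' else (x < y)\n",
    "        for x, y, d in zip(a, b, directions)\n",
    "    )\n",
    "    return at_least_as_good and strictly_better\n",
    "\n",
    "def pareto_front(trials, directions):\n",
    "    # Keep every trial that no other trial dominates.\n",
    "    return [t for t in trials\n",
    "            if not any(dominates(o, t, directions) for o in trials if o is not t)]\n",
    "\n",
    "# (accuracy, tokens, latency): maximize accuracy, minimize the rest\n",
    "trials = [(0.9, 300, 2.0), (0.8, 100, 0.5), (0.7, 400, 3.0)]\n",
    "print(pareto_front(trials, ('max', 'min', 'min')))  # first two survive\n",
    "```\n",
    "\n",
    "The `weight` values then express how much each surviving objective matters when a single best configuration is reported."
   ]
  },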
  {
   "cell_type": "markdown",
   "id": "692dfb0b",
   "metadata": {},
   "source": [
    "<a id=\"dataset\"></a>\n",
    "### 1.2.3) Create an eval dataset\n",
    "\n",
    "The dataset below is intended to be difficult for simple LLM chat completions, because:\n",
    "- Math calculations (questions 1, 2, 5, 7, 9) require precise arithmetic that LLMs often struggle with\n",
    "- Real-time data queries (questions 3, 8) need current information beyond the model's training cutoff\n",
    "- Factual knowledge (questions 4, 6) may be outdated or incorrect without access to recent data\n",
    "- Multi-step reasoning (questions 2, 7) requires combining multiple operations accurately"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "c388ff67",
   "metadata": {},
   "outputs": [],
   "source": [
    "%%writefile tmp_workflow/data/eval_data.json\n",
    "[\n",
    "    {\n",
    "        \"id\": \"1\",\n",
    "        \"question\": \"What is 15% of 847?\",\n",
    "        \"answer\": \"The answer is 127.05\"\n",
    "    },\n",
    "    {\n",
    "        \"id\": \"2\",\n",
    "        \"question\": \"If I invest $10,000 at 5% annual interest compounded monthly for 3 years, how much will I have?\",\n",
    "        \"answer\": \"Approximately $11,614.72\"\n",
    "    },\n",
    "    {\n",
    "        \"id\": \"3\",\n",
    "        \"question\": \"What is the current weather in Tokyo?\",\n",
    "        \"answer\": \"This requires real-time weather data for Tokyo, Japan.\"\n",
    "    },\n",
    "    {\n",
    "        \"id\": \"4\",\n",
    "        \"question\": \"Who won the FIFA World Cup in 2022 and where was it held?\",\n",
    "        \"answer\": \"Argentina won the 2022 FIFA World Cup, which was held in Qatar.\"\n",
    "    },\n",
    "    {\n",
    "        \"id\": \"5\",\n",
    "        \"question\": \"Calculate the average of these numbers: 23, 45, 67, 89, 12, 34\",\n",
    "        \"answer\": \"The average is 45\"\n",
    "    },\n",
    "    {\n",
    "        \"id\": \"6\",\n",
    "        \"question\": \"What is the capital of Australia and what is its approximate population?\",\n",
    "        \"answer\": \"Canberra is the capital of Australia with a population of approximately 460,000 people.\"\n",
    "    },\n",
    "    {\n",
    "        \"id\": \"7\",\n",
    "        \"question\": \"If a train travels 120 miles in 2 hours, then 180 miles in 3 hours, what is its average speed over the entire journey?\",\n",
    "        \"answer\": \"The average speed is 60 miles per hour (300 miles / 5 hours).\"\n",
    "    },\n",
    "    {\n",
    "        \"id\": \"8\",\n",
    "        \"question\": \"Search for information about the latest NASA Mars mission and summarize the key findings.\",\n",
    "        \"answer\": \"Requires web search for current NASA Mars mission information and synthesis of findings.\"\n",
    "    },\n",
    "    {\n",
    "        \"id\": \"9\",\n",
    "        \"question\": \"What is 2 to the power of 10?\",\n",
    "        \"answer\": \"1024\"\n",
    "    },\n",
    "    {\n",
    "        \"id\": \"10\",\n",
    "        \"question\": \"Who is the current CEO of Microsoft and when did they take the position?\",\n",
    "        \"answer\": \"Satya Nadella has been CEO of Microsoft since February 2014.\"\n",
    "    }\n",
    "]"
   ]
  },
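  {
   "cell_type": "markdown",
   "id": "c5e8a3f1",
   "metadata": {},
   "source": [
    "Each record must provide an `id`, a `question`, and a reference `answer` for the judge LLM to score against. A tiny, hypothetical sanity check for that shape (the helper name is ours, not part of NAT):\n",
    "\n",
    "```python\n",
    "REQUIRED_KEYS = {'id', 'question', 'answer'}\n",
    "\n",
    "def validate_eval_data(records):\n",
    "    # Return the ids of records missing any required key.\n",
    "    return [r.get('id', '?') for r in records\n",
    "            if not REQUIRED_KEYS <= set(r)]\n",
    "\n",
    "sample = [\n",
    "    {'id': '1', 'question': 'What is 15% of 847?', 'answer': '127.05'},\n",
    "    {'id': '2', 'question': 'missing the answer key'},\n",
    "]\n",
    "print(validate_eval_data(sample))  # ['2']\n",
    "```"
   ]
  },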
  {
   "cell_type": "markdown",
   "id": "a2b71b0b",
   "metadata": {},
   "source": [
    "<a id=\"optimize-first\"></a>\n",
    "### 1.2.4) Run the optimizer\n",
    "\n",
    "<div style=\"color: red; font-style: italic;\">\n",
    "<strong>Developer warning:</strong> Running the optimizer can take significant time (~30 minutes for a search space of n=10 using NeMo endpoints) and consume a substantial number of LLM inference tokens. Double-check your config for unneeded search parameters, or reduce the number of samples in the evaluation dataset to reduce cost.\n",
    "</div>"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "71420933",
   "metadata": {
    "tags": [
     "skip_e2e_test"
    ]
   },
   "outputs": [],
   "source": [
    "!nat optimize --config_file tmp_workflow/configs/config_b.yml"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "3f79ce00",
   "metadata": {},
   "source": [
    "<a id=\"interpret-optimizer-first\"></a>\n",
    "### 1.2.5) Interpret first optimizer run"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "6029eeeb",
   "metadata": {},
   "source": [
    "**Understanding Evaluation Outputs**\n",
    "\n",
    "This evaluation will have generated the following artifacts for analysis at the `output_dir` specified in `config_b.yml`:\n",
    " - **`answer_accuracy_output.json`**\n",
    " - **`workflow_output.json`**\n",
    " - **`llm_latency_output.json`**\n",
    " - **`token_efficiency_output.json`**\n",
    "\n",
    "**Interpreting `answer_accuracy_output.json`**\n",
    "\n",
    "The `answer_accuracy_output.json` file contains the results of the LLM-as-a-judge accuracy evaluation.\n",
    "\n",
    "**Top-level fields:**\n",
    "- **`average_score`** - Mean accuracy score across all evaluated examples (0.0 to 1.0)\n",
    "- **`eval_output_items`** - Array of individual evaluation results for each test case\n",
    "\n",
    "**Per-item fields:**\n",
    "- **`id`** - Unique identifier for the test case\n",
    "- **`score`** - Accuracy score for this specific example (0.0 to 1.0)\n",
    "- **`reasoning`** - Evaluation reasoning: either a string containing an error message if evaluation failed, or the judge LLM's explanation of the score\n",
    "\n",
    "The answer accuracy evaluator uses the judge LLM to assess how closely the workflow's response matches the reference answer from the eval dataset.\n",
    "\n",
    "**Interpreting `workflow_output.json`**\n",
    "\n",
    "The `workflow_output.json` file contains the raw execution results from running the workflow on each test case.\n",
    "\n",
    "**Top-level fields:**\n",
    "- **`output_items`** - Array of workflow execution results for each test case in the dataset\n",
    "\n",
    "**Per-item fields:**\n",
    "- **`id`** - Unique identifier matching the test case ID\n",
    "- **`input_obj`** - The input question or prompt sent to the workflow\n",
    "- **`output_obj`** - The final answer generated by the workflow\n",
    "- **`trajectory`** - Detailed execution trace containing:\n",
    "  - **`event_type`** - Type of event (e.g., `LLM_START`, `LLM_END`, `TOOL_START`, `TOOL_END`, `SPAN_START`, `SPAN_END`)\n",
    "  - **`event_timestamp`** - Unix timestamp of when the event occurred\n",
    "  - **`metadata`** - Event-specific data including:\n",
    "    - Tool names and inputs\n",
    "    - LLM prompts and responses\n",
    "    - Token counts (`prompt_tokens`, `completion_tokens`)\n",
    "    - Model names\n",
    "    - Function names\n",
    "    - Error information\n",
    "\n",
    "The workflow output provides complete observability into each execution, enabling detailed analysis of agent behavior, performance profiling, and debugging."
   ]
  },
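  {
   "cell_type": "markdown",
   "id": "d9b2c4e7",
   "metadata": {},
   "source": [
    "As a hedged example of working with the `trajectory` events described above, the sketch below tallies completion tokens from `LLM_END` events. The field names are taken from the description above; the exact payload may differ between toolkit versions:\n",
    "\n",
    "```python\n",
    "def total_completion_tokens(trajectory):\n",
    "    # Sum completion token counts reported on LLM_END events.\n",
    "    return sum(\n",
    "        event.get('metadata', {}).get('completion_tokens', 0)\n",
    "        for event in trajectory\n",
    "        if event.get('event_type') == 'LLM_END'\n",
    "    )\n",
    "\n",
    "events = [\n",
    "    {'event_type': 'LLM_START', 'metadata': {}},\n",
    "    {'event_type': 'LLM_END', 'metadata': {'completion_tokens': 42}},\n",
    "    {'event_type': 'TOOL_END', 'metadata': {}},\n",
    "    {'event_type': 'LLM_END', 'metadata': {'completion_tokens': 13}},\n",
    "]\n",
    "print(total_completion_tokens(events))  # 55\n",
    "```"
   ]
  },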
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "ab620774",
   "metadata": {},
   "outputs": [],
   "source": [
    "from pathlib import Path\n",
    "\n",
    "import pandas as pd\n",
    "\n",
    "# Load the optimizer results\n",
    "trials_df_path = Path(\"tmp_workflow/eval_output/optimizer/trials_dataframe_params.csv\")\n",
    "\n",
    "if trials_df_path.exists():\n",
    "    trials_df = pd.read_csv(trials_df_path)\n",
    "\n",
    "    print(\"Grid Search Optimization Results\")\n",
    "    print(\"=\" * 80)\n",
    "    print(\"\\nTrials Summary:\")\n",
    "    print(trials_df.to_string(index=False))\n",
    "    print(\"\\n\" + \"=\" * 80)\n",
    "else:\n",
    "    print(f\"No optimizer results found at {trials_df_path}. Run the optimizer cell above first.\")"
   ]
  },
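  {
   "cell_type": "markdown",
   "id": "e2f7a8b3",
   "metadata": {},
   "source": [
    "To build intuition for how `direction` and `weight` combine the three metrics, here is a deliberately simplified, hypothetical scoring function (the toolkit delegates the real aggregation to Optuna, and real metrics should be normalized to comparable scales before summing):\n",
    "\n",
    "```python\n",
    "def weighted_score(metrics, spec):\n",
    "    # spec maps metric name -> (direction, weight); minimized metrics\n",
    "    # are negated so that a higher combined score is always better.\n",
    "    score = 0.0\n",
    "    for name, (direction, weight) in spec.items():\n",
    "        value = metrics[name]\n",
    "        score += weight * (value if direction == 'maximize' else -value)\n",
    "    return score\n",
    "\n",
    "spec = {\n",
    "    'accuracy': ('maximize', 1.0),\n",
    "    'token_efficiency': ('minimize', 1.0),\n",
    "    'latency': ('minimize', 1.0),\n",
    "}\n",
    "trial_a = {'accuracy': 0.95, 'token_efficiency': 0.2, 'latency': 0.1}\n",
    "trial_b = {'accuracy': 0.7, 'token_efficiency': 0.1, 'latency': 0.1}\n",
    "print(weighted_score(trial_a, spec) > weighted_score(trial_b, spec))  # True\n",
    "```"
   ]
  },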
  {
   "cell_type": "markdown",
   "id": "3836ec19",
   "metadata": {},
   "source": [
    "The results above show:\n",
    " \n",
    "**Grid Search Optimization Summary:**\n",
    "- The optimizer evaluated all combinations of models and temperatures defined in the search space\n",
    "- Each configuration was tested multiple times (repetitions) to account for variability\n",
    "- Three key metrics were tracked: accuracy, token efficiency (tokens used), and latency (response time)\n",
    "\n",
    "**Key Insights:**\n",
    "- Different models show different trade-offs between accuracy, efficiency, and speed\n",
    "- Temperature settings affect response variability and quality\n",
    "- The \"Best Configuration\" represents the optimal balance based on the weighted combination of all metrics\n",
    " \n",
    "**Interpreting Your Results:**\n",
    "When you run this optimization, look for:\n",
    "- Which model/temperature combination achieves the highest aggregated accuracy\n",
    "- How token efficiency varies between models (lower is more efficient)\n",
    "- Latency differences (lower is faster)\n",
    "- The confidence intervals to understand result stability\n",
    "\n",
    "The optimizer automatically selects the best configuration and saves it to `optimized_config.yml` for use in production."
   ]
  },
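  {
   "cell_type": "markdown",
   "id": "5b9f3a21",
   "metadata": {},
   "source": [
    "If you prefer to weigh the trade-offs yourself rather than rely solely on the auto-selected best configuration, you can rank the trials table directly. The column names below (`values_0`, `values_1`, `params_*`) follow Optuna's `trials_dataframe()` convention but are illustrative; check the header of your own `trials_dataframe_params.csv` first."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "8e2c6d4f",
   "metadata": {},
   "outputs": [],
   "source": [
    "import pandas as pd\n",
    "\n",
    "# Illustrative trials table -- objective columns 'values_*' and parameter\n",
    "# columns 'params_*' follow Optuna's trials_dataframe() convention, but\n",
    "# verify the names against your actual trials_dataframe_params.csv header.\n",
    "trials = pd.DataFrame({\n",
    "    'params_model_name': ['model-a', 'model-b', 'model-a', 'model-b'],\n",
    "    'params_temperature': [0.0, 0.0, 0.5, 0.5],\n",
    "    'values_0': [0.62, 0.71, 0.58, 0.69],  # e.g. accuracy (maximize)\n",
    "    'values_1': [1.4, 2.9, 1.5, 2.8],      # e.g. latency in seconds (minimize)\n",
    "})\n",
    "\n",
    "# Rank by accuracy (descending), breaking ties with latency (ascending)\n",
    "ranked = trials.sort_values(['values_0', 'values_1'], ascending=[False, True])\n",
    "print(ranked.to_string(index=False))"
   ]
  },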
  {
   "cell_type": "markdown",
   "id": "59876571",
   "metadata": {},
   "source": [
    "<a id=\"optimize-tool-calling-agents\"></a>\n",
    "# 2.0) Optimized model and parameter selection for tool-calling agents"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "7223a3b2",
   "metadata": {},
   "source": [
    "<a id=\"create-triage-agent\"></a>\n",
    "## 2.1) Create a tool-calling agent\n",
    "As explained above, in many real-world applications a straightforward chat completions request may not be adequate without agentic tool-calling integration. For the next exercise, we will therefore build a similar optimization pipeline for an advanced tool-calling agent: the [Alert Triage Agent](https://github.com/NVIDIA/NeMo-Agent-Toolkit/tree/develop/examples/advanced_agents/alert_triage_agent). This agent uses tool calling to automate the triage of server-monitoring alerts and demonstrates how to build an intelligent troubleshooting workflow using NeMo Agent toolkit and LangGraph.\n",
    "\n",
    "The Alert Triage Agent is an advanced example that demonstrates:\n",
    "- **Multi-tool orchestration** - Dynamically selects and uses diagnostic tools\n",
    "- **Structured report generation** - Creates comprehensive analysis reports\n",
    "- **Root cause categorization** - Classifies alerts into predefined categories\n",
    "- **Offline evaluation mode** - Test with synthetic data before live deployment\n",
    "\n",
    "We aim to demonstrate the power of model evaluation and optimization on agentic AI platforms. There are many foundation models to choose from as your agent's backbone, and academic benchmarks are not always representative of potential performance on your institutional data (refer to training data leakage and data domain shift research for more motivation).\n",
    "\n",
    "<div style=\"color: red; font-style: italic;\">\n",
    "<strong>Note:</strong> As the Alert Triage Agent is not shipped with the NAT PyPI package, we will either clone it from GitHub (selecting your branch of choice) or, if the package was installed with the `-e` editable flag, work locally. We will parameterize the path to this agent so the configuration in the next cell is easy to alter.\n",
    "</div>"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "2a101122",
   "metadata": {},
   "outputs": [],
   "source": [
    "from IPython.core.error import StdinNotImplementedError\n",
    "\n",
    "# Simple input prompt for branch selection\n",
    "print(\"=\" * 60)\n",
    "print(\"Alert Triage Agent Installation\")\n",
    "print(\"=\" * 60)\n",
    "print(\"\\nOptions:\")\n",
    "print(\"  - Enter 'local' for editable install from local repository\")\n",
    "print(\"  - Enter a branch name (e.g., 'develop', 'main') for git install\")\n",
    "print(\"=\" * 60)\n",
    "\n",
    "try:\n",
    "    branch_name = input(\"\\nEnter your choice: \").strip()\n",
    "except StdinNotImplementedError:\n",
    "    branch_name = 'local'\n",
    "    print(f\"\\nNo input available. Defaulting to branch: {branch_name}\")\n",
    "\n",
    "if branch_name.lower() == 'local':\n",
    "    # Local editable install\n",
    "    print(\"\\nInstalling alert triage agent in editable mode from local repository...\")\n",
    "\n",
    "    # Try to find the local path relative to current directory\n",
    "    from pathlib import Path\n",
    "    # path-check-skip-next-line\n",
    "    local_path = Path('../../examples/advanced_agents/alert_triage_agent')\n",
    "\n",
    "    if local_path.exists():\n",
    "        get_ipython().system(f'pip install -e {local_path}')\n",
    "        print(f\"✓ Installed from local path: {local_path.absolute()}\")\n",
    "    else:\n",
    "        print(f\"✗ Error: Local path not found: {local_path.absolute()}\")\n",
    "        print(\"Make sure you're running this from the correct directory\")\n",
    "else:\n",
    "    # Git install from specified branch\n",
    "    print(f\"\\nInstalling alert triage agent from branch: {branch_name}\")\n",
    "    get_ipython().system(f'pip install --no-deps \"git+https://github.com/NVIDIA/NeMo-Agent-Toolkit.git@{branch_name}#subdirectory=examples/advanced_agents/alert_triage_agent\"')\n",
    "    print(f\"✓ Installed from git branch: {branch_name}\")\n",
    "\n",
    "print(\"\\n\" + \"=\" * 60)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "a1fc34f7",
   "metadata": {},
   "outputs": [],
   "source": [
    "import importlib.resources\n",
    "\n",
    "# Find the installed package data directory\n",
    "package_data = importlib.resources.files('nat_alert_triage_agent').joinpath('data')\n",
    "\n",
    "maintenance_csv = str(package_data / 'maintenance_static_dataset.csv')\n",
    "offline_csv = str(package_data / 'offline_data.csv')\n",
    "benign_json = str(package_data / 'benign_fallback_offline_data.json')\n",
    "offline_json = str(package_data / 'offline_data.json')\n",
    "\n",
    "print(f\"Package data directory: {package_data}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "dc40fd05",
   "metadata": {},
   "source": [
    "<a id=\"configure-triage-agent\"></a>\n",
    "## 2.2) Configure the tool-calling agent\n",
    "\n",
    "**Configuring the Alert Triage Agent**\n",
    "\n",
    "The Alert Triage Agent requires several components:\n",
    "\n",
    "1. **Diagnostic Tools** - Hardware checks, network connectivity, performance monitoring, telemetry analysis\n",
    "2. **Sub-agents** - Telemetry metrics analysis agent that coordinates multiple telemetry tools\n",
    "3. **Categorizer** - Classifies root causes into predefined categories\n",
    "4. **Maintenance Check** - Filters out alerts during maintenance windows\n",
    "\n",
    "We'll create a **local configuration file** and run in **offline mode** using synthetic data.\n",
    "\n",
    "In the configuration file, you can see the list of LLMs that we have predefined for comparison when the optimizer runs. For brevity and token efficiency, the initial search covers only two models; however, you can uncomment the full list of 11 models (or add [more models](https://catalog.ngc.nvidia.com/)) to run a more robust search. The selected model will serve as the agent's backbone LLM for its reasoning steps. The `tool_reasoning_llm` and `nim_rag_eval_llm` remain fixed at `meta/llama-3.1-70b-instruct`, though a modified evaluation could search over these as well.\n",
    "```\n",
    "- Meta: llama-3.1-8b-instruct\n",
    "- Meta: llama-3.1-70b-instruct\n",
    "- Meta: llama-3.1-405b-instruct\n",
    "- Meta: llama-3.2-3b-instruct\n",
    "- Meta: llama-3.3-70b-instruct\n",
    "- Meta: llama-4-scout-17b-16e-instruct\n",
    "- OpenAI: gpt-oss-20b\n",
    "- OpenAI: gpt-oss-120b\n",
    "- IBM: granite-3.3-8b-instruct\n",
    "- MistralAI: mistral-small-3.1-24b-instruct-2503\n",
    "- MistralAI: mistral-medium-3-instruct\n",
    "```\n",
    "\n",
    "We additionally provide two different values for `temperature` to exemplify concurrent model and parameter searches:\n",
    "```\n",
    "- 0.0\n",
    "- 0.5\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "a97b2e49",
   "metadata": {},
   "source": [
    "<div style=\"color: red; font-style: italic;\">\n",
    "<strong>Developer warning:</strong> Running the optimizer can consume a significant number of LLM inference tokens. To protect users from unexpected costs, only 2 models remain uncommented in the configuration below. Uncomment additional models to increase the search space.\n",
    "</div>\n",
    "\n",
    "We will create a YAML configuration file using Python code rather than a static file. This approach allows us to dynamically reference the package data directory and ensures the configuration is created in the notebook's working directory, making it easier to modify and experiment with different settings for optimization."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "e8f1940e",
   "metadata": {},
   "outputs": [],
   "source": [
    "%%writefile ./tmp_workflow/configs/alert_triage_config_model_selection.yml\n",
    "# path-check-skip-begin\n",
    "functions:\n",
    "  hardware_check:\n",
    "    _type: hardware_check\n",
    "    llm_name: tool_reasoning_llm\n",
    "    offline_mode: true\n",
    "  host_performance_check:\n",
    "    _type: host_performance_check\n",
    "    llm_name: tool_reasoning_llm\n",
    "    offline_mode: true\n",
    "  monitoring_process_check:\n",
    "    _type: monitoring_process_check\n",
    "    llm_name: tool_reasoning_llm\n",
    "    offline_mode: true\n",
    "  network_connectivity_check:\n",
    "    _type: network_connectivity_check\n",
    "    llm_name: tool_reasoning_llm\n",
    "    offline_mode: true\n",
    "  telemetry_metrics_host_heartbeat_check:\n",
    "    _type: telemetry_metrics_host_heartbeat_check\n",
    "    llm_name: tool_reasoning_llm\n",
    "    offline_mode: true\n",
    "  telemetry_metrics_host_performance_check:\n",
    "    _type: telemetry_metrics_host_performance_check\n",
    "    llm_name: tool_reasoning_llm\n",
    "    offline_mode: true\n",
    "  telemetry_metrics_analysis_agent:\n",
    "    _type: telemetry_metrics_analysis_agent\n",
    "    tool_names:\n",
    "      - telemetry_metrics_host_heartbeat_check\n",
    "      - telemetry_metrics_host_performance_check\n",
    "    llm_name: agent_llm\n",
    "  maintenance_check:\n",
    "    _type: maintenance_check\n",
    "    llm_name: agent_llm\n",
    "    static_data_path: PLACEHOLDER_maintenance_static_dataset.csv\n",
    "  categorizer:\n",
    "    _type: categorizer\n",
    "    llm_name: agent_llm\n",
    "\n",
    "workflow:\n",
    "  _type: alert_triage_agent\n",
    "  tool_names:\n",
    "    - hardware_check\n",
    "    - host_performance_check\n",
    "    - monitoring_process_check\n",
    "    - network_connectivity_check\n",
    "    - telemetry_metrics_analysis_agent\n",
    "  llm_name: agent_llm\n",
    "  offline_mode: true\n",
    "  offline_data_path: PLACEHOLDER_offline_data.csv\n",
    "  benign_fallback_data_path: PLACEHOLDER_benign_fallback_offline_data.json\n",
    "\n",
    "llms:\n",
    "  agent_llm:\n",
    "    _type: nim\n",
    "    model_name: meta/llama-3.1-8b-instruct\n",
    "    temperature: 0.0\n",
    "    max_tokens: 2048\n",
    "    optimizable_params:\n",
    "      - model_name\n",
    "      - temperature\n",
    "    search_space:\n",
    "      model_name:\n",
    "        values:\n",
    "          - meta/llama-3.1-8b-instruct\n",
    "          - meta/llama-3.1-70b-instruct\n",
    "          # - meta/llama-3.1-405b-instruct\n",
    "          # - meta/llama-3.2-3b-instruct\n",
    "          # - meta/llama-3.3-70b-instruct\n",
    "          # - meta/llama-4-scout-17b-16e-instruct\n",
    "          # - openai/gpt-oss-20b\n",
    "          # - openai/gpt-oss-120b\n",
    "          # - ibm/granite-3.3-8b-instruct\n",
    "          # - mistralai/mistral-small-3.1-24b-instruct-2503\n",
    "          # - mistralai/mistral-medium-3-instruct\n",
    "      temperature:\n",
    "        values:\n",
    "          - 0.0\n",
    "          - 0.5\n",
    "  tool_reasoning_llm:\n",
    "    _type: nim\n",
    "    model_name: meta/llama-3.1-70b-instruct\n",
    "    temperature: 0.2\n",
    "    max_tokens: 2048\n",
    "  nim_rag_eval_llm:\n",
    "    _type: nim\n",
    "    model_name: meta/llama-3.1-70b-instruct\n",
    "    max_tokens: 8\n",
    "\n",
    "eval:\n",
    "  general:\n",
    "    output_dir: ./tmp_workflow/alert_triage_model_selection_output/\n",
    "    dataset:\n",
    "      _type: json\n",
    "      file_path: PLACEHOLDER_offline_data.json\n",
    "  evaluators:\n",
    "    accuracy:\n",
    "      _type: ragas\n",
    "      metric: AnswerAccuracy\n",
    "      llm_name: nim_rag_eval_llm\n",
    "    groundedness:\n",
    "      _type: ragas\n",
    "      metric: ResponseGroundedness\n",
    "      llm_name: nim_rag_eval_llm\n",
    "    relevance:\n",
    "      _type: ragas\n",
    "      metric: ContextRelevance\n",
    "      llm_name: nim_rag_eval_llm\n",
    "    classification_accuracy:\n",
    "      _type: classification_accuracy\n",
    "    llm_latency:\n",
    "      _type: avg_llm_latency\n",
    "    token_efficiency:\n",
    "      _type: avg_tokens_per_llm_end\n",
    "  profiler:\n",
    "    token_uniqueness_forecast: true\n",
    "    workflow_runtime_forecast: true\n",
    "    compute_llm_metrics: true\n",
    "    csv_exclude_io_text: true\n",
    "    prompt_caching_prefixes:\n",
    "      enable: true\n",
    "      min_frequency: 0.1\n",
    "    bottleneck_analysis:\n",
    "      enable_nested_stack: true\n",
    "    concurrency_spike_analysis:\n",
    "      enable: true\n",
    "      spike_threshold: 7"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "af5e86bf",
   "metadata": {},
   "source": [
    "Above we have defined the `SearchSpace` to include two different LLMs (variants of Meta's Llama 3.1 model) and temperatures of 0.0 and 0.5, yielding 4 unique combinations via grid search.\n",
    "\n",
    "Next, let's append some simple optimizer settings to our configuration. We will use a grid search sampler to maximize the predefined `classification_accuracy` evaluator while minimizing `llm_latency`, and **disable prompt optimization**."
   ]
  },
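  {
   "cell_type": "markdown",
   "id": "9a7b1c3d",
   "metadata": {},
   "source": [
    "To make the size of that grid concrete, the cell below enumerates the same cross-product explicitly (the two lists mirror the `search_space` values in the config above):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "4f6e2a8b",
   "metadata": {},
   "outputs": [],
   "source": [
    "from itertools import product\n",
    "\n",
    "# The two optimizable_params defined in the config above\n",
    "model_names = ['meta/llama-3.1-8b-instruct', 'meta/llama-3.1-70b-instruct']\n",
    "temperatures = [0.0, 0.5]\n",
    "\n",
    "# Grid search evaluates every (model, temperature) pairing\n",
    "combinations = list(product(model_names, temperatures))\n",
    "for model_name, temperature in combinations:\n",
    "    print(f'{model_name} @ temperature={temperature}')\n",
    "print(f'Unique configurations per repetition: {len(combinations)}')"
   ]
  },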
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "0c5d0fea",
   "metadata": {},
   "outputs": [],
   "source": [
    "%%writefile -a ./tmp_workflow/configs/alert_triage_config_model_selection.yml\n",
    "optimizer:\n",
    "  output_path: ./tmp_workflow/alert_triage_model_selection_output/optimizer/\n",
    "  reps_per_param_set: 1\n",
    "  eval_metrics:\n",
    "    classification_accuracy:\n",
    "      evaluator_name: classification_accuracy\n",
    "      direction: maximize\n",
    "    llm_latency:\n",
    "      evaluator_name: llm_latency\n",
    "      direction: minimize\n",
    "  numeric:\n",
    "    enabled: true\n",
    "    sampler: grid\n",
    "  prompt:\n",
    "    enabled: false\n",
    "# path-check-skip-end"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "099c4dc5",
   "metadata": {},
   "source": [
    "Before running, let's replace the placeholder paths in our config with the actual location of the Alert Triage Agent's data, depending on where the agent was installed. This step is only needed so the notebook works across the multiple ways NAT can be installed."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "20ec99f3",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Replace placeholder paths with actual package data paths\n",
    "import importlib.resources\n",
    "from pathlib import Path\n",
    "\n",
    "# Get the package data path\n",
    "package_data = importlib.resources.files('nat_alert_triage_agent').joinpath('data')\n",
    "\n",
    "# Read the YAML file\n",
    "config_path = Path('./tmp_workflow/configs/alert_triage_config_model_selection.yml')\n",
    "with open(config_path) as f:\n",
    "    config_content = f.read()\n",
    "\n",
    "# Replace placeholders with actual paths\n",
    "replacements = {\n",
    "    'PLACEHOLDER_maintenance_static_dataset.csv': str(package_data / 'maintenance_static_dataset.csv'),\n",
    "    'PLACEHOLDER_offline_data.csv': str(package_data / 'offline_data.csv'),\n",
    "    'PLACEHOLDER_benign_fallback_offline_data.json': str(package_data / 'benign_fallback_offline_data.json'),\n",
    "    'PLACEHOLDER_offline_data.json': str(package_data / 'offline_data.json')\n",
    "}\n",
    "\n",
    "for placeholder, actual_path in replacements.items():\n",
    "    config_content = config_content.replace(placeholder, actual_path)\n",
    "\n",
    "# Write back to file\n",
    "with open(config_path, 'w') as f:\n",
    "    f.write(config_content)\n",
    "\n",
    "print(f\"✓ Config written with data paths from: {package_data}\")\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "299df6c9",
   "metadata": {},
   "source": [
    "<a id=\"test-triage-agent\"></a>\n",
    "## 2.3) Test the tool-calling agent\n",
    "\n",
    "Let's test the Alert Triage Agent with a single alert. This alert is an \"InstanceDown\" alert that, according to the offline dataset, is actually a false positive (the system is healthy).\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "34b468a8",
   "metadata": {},
   "outputs": [],
   "source": [
    "import json\n",
    "\n",
    "alert = {\n",
    "    \"alert_id\": 0,\n",
    "    \"alert_name\": \"InstanceDown\",\n",
    "    \"host_id\": \"test-instance-0.example.com\",\n",
    "    \"severity\": \"critical\",\n",
    "    \"description\": (\n",
    "        \"Instance test-instance-0.example.com is not available for scraping for the last 5m. \"\n",
    "        \"Please check: - instance is up and running; - monitoring service is in place and running; \"\n",
    "        \"- network connectivity is ok\"\n",
    "    ),\n",
    "    \"summary\": \"Instance test-instance-0.example.com is down\",\n",
    "    \"timestamp\": \"2025-04-28T05:00:00.000000\"\n",
    "}\n",
    "\n",
    "!nat run --config_file tmp_workflow/configs/alert_triage_config_model_selection.yml --input '{json.dumps(alert)}'"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "64ac49a9",
   "metadata": {},
   "source": [
    "After running the cell above, we have confirmed that the tool-calling agent is properly configured and ready for a naive evaluation, which will serve as our performance baseline."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "730ef191",
   "metadata": {},
   "source": [
    "<a id=\"eval-triage-agent1\"></a>\n",
    "## 2.4) Evaluate the tool-calling agent (naive parameters)\n",
    "\n",
    "*using `nat eval`...*\n",
    "\n",
    "Now let's run a full evaluation on the Alert Triage Agent using the complete offline dataset. This dataset contains seven alerts with different root causes:\n",
    "\n",
    "- **False positives** - System appears healthy despite alert\n",
    "- **Hardware issues** - Hardware failures or degradation  \n",
    "- **Software issues** - Malfunctioning monitoring services\n",
    "- **Maintenance** - Scheduled maintenance windows\n",
    "- **Repetitive behavior** - Benign recurring patterns\n",
    "\n",
    "The evaluation will measure:\n",
    "1. **Classification Accuracy** - How well the agent categorizes root causes\n",
    "2. **Answer Accuracy** - How well the generated reports match expected outcomes (using RAGAS)\n"
   ]
  },
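  {
   "cell_type": "markdown",
   "id": "6c3d9e1f",
   "metadata": {},
   "source": [
    "As a minimal, self-contained illustration of what a classification-accuracy style metric computes, the cell below scores toy predictions against ground-truth root-cause labels (the category strings are illustrative stand-ins, not the evaluator's exact labels):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "2b8a4c6e",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Toy ground-truth and predicted root-cause categories (illustrative labels)\n",
    "ground_truth = ['false_positive', 'hardware', 'software', 'maintenance',\n",
    "                'repetitive', 'hardware', 'false_positive']\n",
    "predicted    = ['false_positive', 'hardware', 'hardware', 'maintenance',\n",
    "                'repetitive', 'hardware', 'software']\n",
    "\n",
    "# Accuracy = fraction of alerts whose predicted category matches the label\n",
    "correct = sum(p == t for p, t in zip(predicted, ground_truth))\n",
    "accuracy = correct / len(ground_truth)\n",
    "print(f'Classification accuracy: {accuracy:.2%}')"
   ]
  },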
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "55c4fcc1",
   "metadata": {
    "tags": [
     "skip_e2e_test"
    ]
   },
   "outputs": [],
   "source": [
    "!nat eval --config_file ./tmp_workflow/configs/alert_triage_config_model_selection.yml\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "cbe817e6",
   "metadata": {},
   "source": [
    "**Understanding Alert Triage Evaluation Results**\n",
    "\n",
    "The evaluation generates several output files in the `./tmp_workflow/alert_triage_model_selection_output/` directory (set by `eval.general.output_dir`):\n",
    "\n",
    "1. **classification_accuracy_output.json** - Root cause classification metrics\n",
    "   - Shows accuracy, precision, recall, and F1 scores for each category\n",
    "   - Contains confusion matrix for detailed analysis\n",
    "   \n",
    "2. **accuracy_output.json** - Answer quality metrics\n",
    "   - Measures how well generated reports match expected outcomes\n",
    "   - Uses LLM-as-a-judge to evaluate report quality\n",
    "\n",
    "3. **workflow_output.json** - Complete execution traces\n",
    "   - Contains full agent trajectories with tool calls\n",
    "   - Includes generated reports for each alert\n",
    "   - Shows token usage and performance metrics\n",
    "\n",
    "Let's examine the classification accuracy results:\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "754384df",
   "metadata": {},
   "source": [
    "We see that classification accuracy lands around 43%, with RAG answer accuracy around 46%.\n",
    "\n",
    "Next we will run the optimizer over a variety of models and some reasonable hyperparameters, then use that optimal configuration and run the evaluation again."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "ddbbe01f",
   "metadata": {
    "tags": [
     "skip_e2e_test"
    ]
   },
   "outputs": [],
   "source": [
    "import json\n",
    "\n",
    "# Load and display classification accuracy results\n",
    "# path-check-skip-next-line\n",
    "with open('./tmp_workflow/alert_triage_model_selection_output/classification_accuracy_output.json') as f:\n",
    "    classification_results = json.load(f)\n",
    "print(f\"Total Alerts Evaluated: {len(classification_results['eval_output_items'])}\")\n",
    "print(f\"Classification Accuracy Average Score: {classification_results['average_score']:.2%}\")\n",
    "\n",
    "# Load and display LLM latency results\n",
    "# path-check-skip-next-line\n",
    "with open('./tmp_workflow/alert_triage_model_selection_output/llm_latency_output.json') as f:\n",
    "    latency_results = json.load(f)\n",
    "\n",
    "print(f\"Average LLM Latency: {latency_results['average_score']} sec\")\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "3b2c2cf1",
   "metadata": {},
   "source": [
    "<a id=\"optimize-triage-agent\"></a>\n",
    "## 2.5) Optimize the tool-calling agent's LLM\n",
    "\n",
    "*using `nat optimize`...*\n",
    "\n",
    "Next we will run `nat optimize` for the Alert Triage Agent using a grid-search sweep over the `OptimizableField`s in `alert_triage_config_model_selection.yml`. In this case we are only comparing backbone LLMs for the core agent, not the `tool_reasoning_llm`. Optimizable fields were explained earlier in this notebook; here we run a similar optimization pass over a complex tool-calling agent to demonstrate the power of `nat optimize` at scale.\n",
    "\n",
    "<div style=\"color: red; font-style: italic;\">\n",
    "<strong>Developer warning:</strong> Running the optimizer can take significant time (~30 minutes for a search space of n=10) and consume substantial LLM inference tokens. Double-check your config for unneeded search parameters before running.\n",
    "</div>"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "3ddd831f",
   "metadata": {
    "tags": [
     "skip_e2e_test"
    ]
   },
   "outputs": [],
   "source": [
    "!nat optimize --config_file tmp_workflow/configs/alert_triage_config_model_selection.yml"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "bd59bb3c",
   "metadata": {},
   "outputs": [],
   "source": [
    "from pathlib import Path\n",
    "\n",
    "import pandas as pd\n",
    "\n",
    "# Load the optimizer results\n",
    "trials_df_path = Path(\"tmp_workflow/alert_triage_model_selection_output/optimizer/trials_dataframe_params.csv\")\n",
    "\n",
    "if trials_df_path.exists():\n",
    "    trials_df = pd.read_csv(trials_df_path)\n",
    "\n",
    "    print(\"Grid Search Optimization Results\")\n",
    "    print(\"=\" * 80)\n",
    "    print(\"\\nTrials Summary:\")\n",
    "    print(trials_df.to_string(index=False))\n",
    "    print(\"\\n\" + \"=\" * 80)\n",
    "else:\n",
    "    print(f\"No optimizer results found at {trials_df_path}. Run the optimizer first.\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "b67550b0",
   "metadata": {},
   "source": [
    "<!-- path-check-skip-begin -->\n",
    "<a id=\"eval-triage-agent2\"></a>\n",
    "## 2.6) Re-evaluate the optimized tool-calling agent\n",
    "\n",
    "After completing the `nat optimize` run above, a new file with the optimal parameters from the search has been serialized and saved to `./tmp_workflow/alert_triage_model_selection_output/optimizer/optimized_config.yml`.\n",
    "\n",
    "<div style=\"color: red; font-style: italic;\">\n",
    "<strong>Note:</strong> Performance of the optimized model may vary with the size of the prior search space and the number of evaluation trials.\n",
    "</div>\n",
    "<!-- path-check-skip-end -->"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "506ef10c",
   "metadata": {
    "tags": [
     "skip_e2e_test"
    ]
   },
   "outputs": [],
   "source": [
    "# path-check-skip-next-line\n",
    "!nat eval --config_file ./tmp_workflow/alert_triage_model_selection_output/optimizer/optimized_config.yml"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b9a10743",
   "metadata": {
    "tags": [
     "skip_e2e_test"
    ]
   },
   "outputs": [],
   "source": [
    "import json\n",
    "\n",
    "# Load and display classification accuracy results\n",
    "# path-check-skip-next-line\n",
    "with open('./tmp_workflow/alert_triage_model_selection_output/classification_accuracy_output.json') as f:\n",
    "    classification_results = json.load(f)\n",
    "print(f\"Total Alerts Evaluated: {len(classification_results['eval_output_items'])}\")\n",
    "print(f\"Classification Accuracy Average Score: {classification_results['average_score']:.2%}\")\n",
    "\n",
    "# Load and display LLM latency results\n",
    "# path-check-skip-next-line\n",
    "with open('./tmp_workflow/alert_triage_model_selection_output/llm_latency_output.json') as f:\n",
    "    latency_results = json.load(f)\n",
    "\n",
    "print(f\"Average LLM Latency: {latency_results['average_score']} sec\")\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "58efda71",
   "metadata": {},
   "source": [
    "Up to this point, we have shown how to add models and tunable LLM parameters to the `SearchSpace`. We demonstrated this with `sampler: grid`, which uses Optuna's grid-search method to deterministically evaluate every unique combination of the `optimizable_params` in the configuration. If the search space is large and a grid search would produce too many combinations, you may instead specify `sampler: bayesian` in the configuration, which uses Optuna's `TPESampler` (single objective) and a genetic-algorithm sampler (multiple objectives) for non-deterministic search."
   ]
  },
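  {
   "cell_type": "markdown",
   "id": "1e5f7a9c",
   "metadata": {},
   "source": [
    "As a standalone sketch (not NAT's internal code), the cell below shows how an Optuna `TPESampler` explores a mixed categorical/continuous space. The objective is a toy stand-in for a real evaluation run, and it assumes `optuna` is importable in your environment."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "0d3b6f8a",
   "metadata": {},
   "outputs": [],
   "source": [
    "import optuna\n",
    "\n",
    "optuna.logging.set_verbosity(optuna.logging.WARNING)\n",
    "\n",
    "def objective(trial):\n",
    "    # Toy stand-in for a real evaluation run over a NAT-style search space\n",
    "    model_name = trial.suggest_categorical('model_name', ['small-model', 'large-model'])\n",
    "    temperature = trial.suggest_float('temperature', 0.0, 1.0)\n",
    "    base = 1.0 if model_name == 'large-model' else 0.5\n",
    "    return base - 0.2 * temperature\n",
    "\n",
    "# TPE samples promising regions instead of enumerating the full grid\n",
    "study = optuna.create_study(direction='maximize', sampler=optuna.samplers.TPESampler(seed=0))\n",
    "study.optimize(objective, n_trials=10)\n",
    "print(f'Best params: {study.best_params}')\n",
    "print(f'Best value: {study.best_value:.3f}')"
   ]
  },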
  {
   "cell_type": "markdown",
   "id": "b8b6eb63",
   "metadata": {},
   "source": [
    "<a id=\"model-and-prompt-tuning\"></a>\n",
    "# 3.0) Concurrent Model Parameter and Prompt Tuning\n",
    "\n",
    "NAT uses a Genetic Algorithm (GA) to automatically optimize prompts through evolutionary search. This is a sophisticated approach that treats prompts as \"individuals\" in a population that evolves over multiple generations to find better-performing variations. The genetic algorithm is inspired by natural evolution and uses LLMs themselves to intelligently mutate and recombine prompts. Instead of random mutations like traditional GAs, NAT leverages the reasoning capabilities of LLMs to make informed changes to prompts.\n",
    "\n",
    "*Note: The genetic algorithm for prompt optimization is configured through several parameters:*\n",
    "- *`prompt.enabled`: Enable GA-based prompt optimization (default: `false`)*\n",
    "- *`prompt.ga_population_size`: Population size - larger populations increase diversity but cost more per generation (default: `10`)*\n",
    "- *`prompt.ga_generations`: Number of generations to evolve prompts (default: `5`)*\n",
    "- *`prompt.ga_offspring_size`: Number of offspring per generation - if `null`, defaults to `ga_population_size - ga_elitism`*\n",
    "- *`prompt.ga_crossover_rate`: Probability of recombination between two parents for each prompt parameter (default: `0.7`)*\n",
    "- *`prompt.ga_mutation_rate`: Probability of mutating a child's prompt parameter using the LLM optimizer (default: `0.1`)*\n",
    "- *`prompt.ga_elitism`: Number of elite individuals copied unchanged to the next generation (default: `1`)*\n",
    "- *`prompt.ga_selection_method`: Parent selection scheme - `tournament` (default) or `roulette`*\n",
    "- *`prompt.ga_tournament_size`: Tournament size when using tournament selection (default: `3`)*\n",
    "- *`prompt.ga_parallel_evaluations`: Maximum number of concurrent evaluations (default: `8`)*\n",
    "- *`prompt.ga_diversity_lambda`: Diversity penalty strength to discourage duplicate prompt sets - `0.0` disables it (default: `0.0`)*\n",
    "- *`prompt.prompt_population_init_function`: Function name used to mutate base prompts to seed the initial population and perform mutations. NAT includes a built-in `prompt_init` Function you can use.*\n",
    "- *`prompt.prompt_recombination_function`: Optional function name used to recombine two parent prompts into a child prompt. NAT includes a built-in `prompt_recombiner` Function you can use.*\n",
    "\n",
    "**For more information, see the [Optimizer documentation](../../docs/source/reference/optimizer.md) or your working branch on [GitHub (develop)](https://github.com/NVIDIA/NeMo-Agent-Toolkit/blob/develop/docs/source/reference/optimizer.md).**\n",
    "\n"
   ]
  },
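  {
   "cell_type": "markdown",
   "id": "a4c8e2f6",
   "metadata": {},
   "source": [
    "The parameters above control a loop whose shape is sketched below with a deliberately toy problem: evolving bit-strings toward all ones. In NAT's real prompt optimization the individuals are prompt sets, and mutation/recombination are performed by LLM functions (`prompt_init`, `prompt_recombiner`); the random bit-flips here are stand-ins to show only the selection, crossover, mutation, and elitism mechanics."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "d7f1b3a5",
   "metadata": {},
   "outputs": [],
   "source": [
    "import random\n",
    "\n",
    "random.seed(0)\n",
    "\n",
    "# Toy stand-ins for the prompt.ga_* settings described above\n",
    "POP_SIZE, GENERATIONS, ELITISM = 10, 5, 1\n",
    "CROSSOVER_RATE, MUTATION_RATE, TOURNAMENT_SIZE = 0.7, 0.1, 3\n",
    "GENOME_LEN = 16\n",
    "\n",
    "def fitness(genome):\n",
    "    # Toy objective: maximize the number of ones (a real run scores eval metrics)\n",
    "    return sum(genome)\n",
    "\n",
    "def tournament_select(population):\n",
    "    # Pick the fittest of a random subset (ga_selection_method: tournament)\n",
    "    return max(random.sample(population, TOURNAMENT_SIZE), key=fitness)\n",
    "\n",
    "def crossover(a, b):\n",
    "    # Recombine two parents with probability CROSSOVER_RATE\n",
    "    if random.random() < CROSSOVER_RATE:\n",
    "        point = random.randrange(1, GENOME_LEN)\n",
    "        return a[:point] + b[point:]\n",
    "    return a[:]\n",
    "\n",
    "def mutate(genome):\n",
    "    # Flip each bit with probability MUTATION_RATE\n",
    "    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]\n",
    "\n",
    "population = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]\n",
    "for gen in range(GENERATIONS):\n",
    "    # Elites are copied unchanged; the rest are bred offspring\n",
    "    elites = sorted(population, key=fitness, reverse=True)[:ELITISM]\n",
    "    offspring = [\n",
    "        mutate(crossover(tournament_select(population), tournament_select(population)))\n",
    "        for _ in range(POP_SIZE - ELITISM)\n",
    "    ]\n",
    "    population = elites + offspring\n",
    "    print(f'Generation {gen}: best fitness = {max(map(fitness, population))}')"
   ]
  },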
  {
   "cell_type": "markdown",
   "id": "a3bbcac0",
   "metadata": {},
   "source": [
    "<a id=\"all-tuning-config\"></a>\n",
    "## 3.1) Optimizer configuration for all parameters (models, hyperparameters, and prompts)\n",
    "\n",
    "For this experiment we will configure an optimizer run that searches over the best model (backbone LLM only), hyperparameters (temperature only), and prompts, reusing our existing Alert Triage Agent with a modified config. Let's create a new configuration at `./tmp_workflow/configs/alert_triage_config_all_params_selection.yml` to manage this workflow for us.\n",
    "\n",
    "First we will copy the same base configuration as the last example - with updated output paths for this experiment."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "cca1ad89",
   "metadata": {},
   "outputs": [],
   "source": [
    "%%writefile ./tmp_workflow/configs/alert_triage_config_all_params_selection.yml\n",
    "# path-check-skip-begin\n",
    "functions:\n",
    "  hardware_check:\n",
    "    _type: hardware_check\n",
    "    llm_name: tool_reasoning_llm\n",
    "    offline_mode: true\n",
    "  host_performance_check:\n",
    "    _type: host_performance_check\n",
    "    llm_name: tool_reasoning_llm\n",
    "    offline_mode: true\n",
    "  monitoring_process_check:\n",
    "    _type: monitoring_process_check\n",
    "    llm_name: tool_reasoning_llm\n",
    "    offline_mode: true\n",
    "  network_connectivity_check:\n",
    "    _type: network_connectivity_check\n",
    "    llm_name: tool_reasoning_llm\n",
    "    offline_mode: true\n",
    "  telemetry_metrics_host_heartbeat_check:\n",
    "    _type: telemetry_metrics_host_heartbeat_check\n",
    "    llm_name: tool_reasoning_llm\n",
    "    offline_mode: true\n",
    "  telemetry_metrics_host_performance_check:\n",
    "    _type: telemetry_metrics_host_performance_check\n",
    "    llm_name: tool_reasoning_llm\n",
    "    offline_mode: true\n",
    "  telemetry_metrics_analysis_agent:\n",
    "    _type: telemetry_metrics_analysis_agent\n",
    "    tool_names:\n",
    "      - telemetry_metrics_host_heartbeat_check\n",
    "      - telemetry_metrics_host_performance_check\n",
    "    llm_name: agent_llm\n",
    "  maintenance_check:\n",
    "    _type: maintenance_check\n",
    "    llm_name: agent_llm\n",
    "    static_data_path: PLACEHOLDER_maintenance_static_dataset.csv\n",
    "  categorizer:\n",
    "    _type: categorizer\n",
    "    llm_name: agent_llm\n",
    "  prompt_init:\n",
    "    _type: prompt_init\n",
    "    optimizer_llm: prompt_optimizer_llm  # Reference to an LLM for optimization\n",
    "    system_objective: \"Alert triage agent that diagnoses system alerts and determines root causes\"\n",
    "  prompt_recombination:\n",
    "    _type: prompt_recombiner\n",
    "    optimizer_llm: prompt_optimizer_llm  # Same or different LLM\n",
    "    system_objective: \"Alert triage agent that diagnoses system alerts and determines root causes\"\n",
    "workflow:\n",
    "  _type: alert_triage_agent\n",
    "  tool_names:\n",
    "    - hardware_check\n",
    "    - host_performance_check\n",
    "    - monitoring_process_check\n",
    "    - network_connectivity_check\n",
    "    - telemetry_metrics_analysis_agent\n",
    "  llm_name: agent_llm\n",
    "  offline_mode: true\n",
    "  offline_data_path: PLACEHOLDER_offline_data.csv\n",
    "  benign_fallback_data_path: PLACEHOLDER_benign_fallback_offline_data.json\n",
    "  optimizable_params:\n",
    "    - agent_prompt\n",
    "  search_space:\n",
    "    agent_prompt:\n",
    "      is_prompt: true\n",
    "      prompt_purpose: \"Guide the agent to effectively diagnose system alerts, gather relevant metrics, and provide clear triage analysis with actionable recommendations.\"\n",
    "      prompt: |\n",
    "        **Role**\n",
    "        You are a Triage Agent who determines if an alert is real,\n",
    "        identifies likely root cause, and recommends actions.\n",
    "        Steps\n",
    "        1) Read the alert and key context.\n",
    "        2) Choose and run only the most relevant diagnostic tools (each at most once).\n",
    "        3) Review outputs and correlate with the alert.\n",
    "        4) Decide root cause and alert validity.\n",
    "        5) Produce a concise Markdown report with:\n",
    "        - Alert Summary\n",
    "        - Collected Metrics\n",
    "        - Analysis\n",
    "        - Recommended Actions\n",
    "        - Alert Status (Valid | Abnormal but benign | False alarm)\n",
    "        Rules\n",
    "        - Be concise and structured.\n",
    "        - Analyze tool outputs before deciding next steps.\n",
    "llms:\n",
    "  agent_llm:\n",
    "    _type: nim\n",
    "    model_name: meta/llama-3.1-8b-instruct\n",
    "    temperature: 0.0\n",
    "    max_tokens: 2048\n",
    "    optimizable_params:\n",
    "      - model_name\n",
    "      - temperature\n",
    "    search_space:\n",
    "      model_name:\n",
    "        values:\n",
    "          - meta/llama-3.1-8b-instruct\n",
    "          - meta/llama-3.1-70b-instruct\n",
    "          # - meta/llama-3.1-405b-instruct\n",
    "          # - meta/llama-3.3-3b-instruct\n",
    "          # - meta/llama-3.3-70b-instruct\n",
    "          # - meta/llama-4-scout-17b-16e-instruct\n",
    "          # - openai/gpt-oss-20b\n",
    "          # - openai/gpt-oss-120b\n",
    "          # - ibm/granite-3.3-8b-instruct\n",
    "          # - mistralai/mistral-small-3.1-24b-instruct-2503\n",
    "          # - mistralai/mistral-medium-3-instruct\n",
    "      temperature:\n",
    "        values:\n",
    "          - 0.0\n",
    "          - 0.5\n",
    "  tool_reasoning_llm:\n",
    "    _type: nim\n",
    "    model_name: meta/llama-3.1-70b-instruct\n",
    "    temperature: 0.2\n",
    "    max_tokens: 2048\n",
    "  nim_rag_eval_llm:\n",
    "    _type: nim\n",
    "    model_name: meta/llama-3.1-70b-instruct\n",
    "    max_tokens: 8\n",
    "  prompt_optimizer_llm:\n",
    "    _type: nim\n",
    "    model_name: meta/llama-3.1-70b-instruct\n",
    "    temperature: 0.5\n",
    "    max_tokens: 2048\n",
    "\n",
    "eval:\n",
    "  general:\n",
    "    output_dir: ./tmp_workflow/alert_triage_all_params_selection_output/\n",
    "    dataset:\n",
    "      _type: json\n",
    "      file_path: PLACEHOLDER_offline_data.json\n",
    "  evaluators:\n",
    "    classification_accuracy:\n",
    "      _type: classification_accuracy\n",
    "    llm_latency:\n",
    "      _type: avg_llm_latency\n",
    "    token_efficiency:\n",
    "      _type: avg_tokens_per_llm_end\n",
    "    rag_accuracy:\n",
    "      _type: ragas\n",
    "      metric: AnswerAccuracy\n",
    "      llm_name: nim_rag_eval_llm\n",
    "  profiler:\n",
    "    token_uniqueness_forecast: true\n",
    "    workflow_runtime_forecast: true\n",
    "    compute_llm_metrics: true\n",
    "    csv_exclude_io_text: true\n",
    "    prompt_caching_prefixes:\n",
    "      enable: true\n",
    "      min_frequency: 0.1\n",
    "    bottleneck_analysis:\n",
    "      enable_nested_stack: true\n",
    "    concurrency_spike_analysis:\n",
    "      enable: true\n",
    "      spike_threshold: 7"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "0c1eb804",
   "metadata": {},
   "source": [
    "Then we will append the optimizer configuration that allows the system prompt to be optimized."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "0f4646a8",
   "metadata": {},
   "outputs": [],
   "source": [
    "%%writefile -a ./tmp_workflow/configs/alert_triage_config_all_params_selection.yml\n",
    "optimizer:\n",
    "  output_path: ./tmp_workflow/alert_triage_all_params_selection_output/optimizer/\n",
    "  reps_per_param_set: 1\n",
    "  eval_metrics:\n",
    "    classification_accuracy:\n",
    "      evaluator_name: classification_accuracy\n",
    "      direction: maximize\n",
    "    llm_latency:\n",
    "      evaluator_name: llm_latency\n",
    "      direction: minimize\n",
    "  numeric:\n",
    "    enabled: true\n",
    "    sampler: grid\n",
    "  prompt:\n",
    "    enabled: true\n",
    "    prompt_population_init_function: prompt_init\n",
    "    prompt_recombination_function: prompt_recombination\n",
    "    ga_generations: 3\n",
    "    ga_population_size: 5\n",
    "# path-check-skip-end"
   ]
  },
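  {
   "cell_type": "markdown",
   "id": "a1f30b77",
   "metadata": {},
   "source": [
    "Before launching the run, it helps to estimate how many trials this configuration implies. The sketch below is a back-of-the-envelope count based on the search space defined above (two model names and two temperatures for the grid sampler, plus the GA settings for prompt tuning); the exact number of evaluations NAT schedules may differ.\n",
    "\n",
    "```python\n",
    "# Rough trial count implied by the config above (an estimate, not NAT's exact schedule).\n",
    "model_names = 2        # meta/llama-3.1-8b-instruct, meta/llama-3.1-70b-instruct\n",
    "temperatures = 2       # 0.0, 0.5\n",
    "numeric_trials = model_names * temperatures  # grid sampler: full Cartesian product\n",
    "\n",
    "ga_generations = 3\n",
    "ga_population_size = 5\n",
    "prompt_evaluations = ga_generations * ga_population_size  # upper bound on prompt evals\n",
    "\n",
    "print(f\"Numeric grid trials: {numeric_trials}\")          # 4\n",
    "print(f\"Prompt evaluations (upper bound): {prompt_evaluations}\")  # 15\n",
    "```"
   ]
  },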
  {
   "cell_type": "markdown",
   "id": "9b1c14f8",
   "metadata": {},
   "source": [
    "Again, we will replace the placeholder data paths with the actual package data paths, following the same pattern as earlier."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "bb3316dc",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Replace placeholder paths with actual package data paths\n",
    "import importlib.resources\n",
    "from pathlib import Path\n",
    "\n",
    "# Get the package data path\n",
    "package_data = importlib.resources.files('nat_alert_triage_agent').joinpath('data')\n",
    "\n",
    "# Read the YAML file\n",
    "config_path = Path('./tmp_workflow/configs/alert_triage_config_all_params_selection.yml')\n",
    "with open(config_path) as f:\n",
    "    config_content = f.read()\n",
    "\n",
    "# Replace placeholders with actual paths\n",
    "replacements = {\n",
    "    'PLACEHOLDER_maintenance_static_dataset.csv': str(package_data / 'maintenance_static_dataset.csv'),\n",
    "    'PLACEHOLDER_offline_data.csv': str(package_data / 'offline_data.csv'),\n",
    "    'PLACEHOLDER_benign_fallback_offline_data.json': str(package_data / 'benign_fallback_offline_data.json'),\n",
    "    'PLACEHOLDER_offline_data.json': str(package_data / 'offline_data.json')\n",
    "}\n",
    "\n",
    "for placeholder, actual_path in replacements.items():\n",
    "    config_content = config_content.replace(placeholder, actual_path)\n",
    "\n",
    "# Write back to file\n",
    "with open(config_path, 'w') as f:\n",
    "    f.write(config_content)\n",
    "\n",
    "print(f\"✓ Config written with data paths from: {package_data}\")\n"
   ]
  },
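  {
   "cell_type": "markdown",
   "id": "c9d4e210",
   "metadata": {},
   "source": [
    "As an optional sanity check (not part of the original walkthrough), you can verify that no `PLACEHOLDER_` tokens survived the substitution before running `nat eval` or `nat optimize`. The helper name below is our own, not a NAT API:\n",
    "\n",
    "```python\n",
    "from pathlib import Path\n",
    "\n",
    "def find_placeholders(text: str) -> list[str]:\n",
    "    \"\"\"Return config lines that still contain an unresolved PLACEHOLDER_ token.\"\"\"\n",
    "    return [line for line in text.splitlines() if 'PLACEHOLDER_' in line]\n",
    "\n",
    "config_path = Path('./tmp_workflow/configs/alert_triage_config_all_params_selection.yml')\n",
    "if config_path.exists():\n",
    "    leftover = find_placeholders(config_path.read_text())\n",
    "    assert not leftover, f'Unresolved placeholders: {leftover}'\n",
    "    print('No unresolved placeholders in the config')\n",
    "```"
   ]
  },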
  {
   "cell_type": "markdown",
   "id": "4ceceea3",
   "metadata": {},
   "source": [
    "<a id=\"all-tuning-initial-eval\"></a>\n",
    "## 3.2) Evaluate the agent\n",
    "\n",
    "Since we already tested this agent in Section 2.3, we will proceed straight to an initial evaluation."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "cffc1569",
   "metadata": {
    "tags": [
     "skip_e2e_test"
    ]
   },
   "outputs": [],
   "source": [
    "!nat eval --config_file ./tmp_workflow/configs/alert_triage_config_all_params_selection.yml"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "baa7489c",
   "metadata": {},
   "source": [
    "Then let's analyze the results of the untuned agent."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "d1b58195",
   "metadata": {
    "tags": [
     "skip_e2e_test"
    ]
   },
   "outputs": [],
   "source": [
    "import json\n",
    "\n",
    "# Load and display classification accuracy results\n",
    "# path-check-skip-next-line\n",
    "with open('./tmp_workflow/alert_triage_all_params_selection_output/classification_accuracy_output.json') as f:\n",
    "    classification_results = json.load(f)\n",
    "print(f\"Total Alerts Evaluated: {len(classification_results['eval_output_items'])}\")\n",
    "print(f\"Classification Accuracy Average Score: {classification_results['average_score']:.2%}\")\n",
    "\n",
    "# Load and display RAG answer accuracy results\n",
    "# path-check-skip-next-line\n",
    "with open('./tmp_workflow/alert_triage_all_params_selection_output/rag_accuracy_output.json') as f:\n",
    "    rag_results = json.load(f)\n",
    "\n",
    "print(f\"RAG Answer Accuracy Average Score: {rag_results['average_score']:.2f}\")\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "5813af73",
   "metadata": {},
   "source": [
    "<a id=\"all-tuning-optimize\"></a>\n",
    "## 3.3) Optimize the agent\n",
    "\n",
    "Now let's re-run the optimizer, but this time we will have model, parameter, and prompt tuning all enabled."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "db193b2b",
   "metadata": {},
   "source": [
    "<div style=\"color: red; font-style: italic;\">\n",
    "<strong>Developer warning:</strong> Running the optimizer can consume a significant number of LLM inference tokens. To protect users from unexpected costs, the search space above has been kept small. Uncomment additional models, add hyperparameter combinations, or increase the rigor of prompt tuning to expand the search space and the potential of your optimization.\n",
    "</div>"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "754bd302",
   "metadata": {
    "tags": [
     "skip_e2e_test"
    ]
   },
   "outputs": [],
   "source": [
    "!nat optimize --config_file ./tmp_workflow/configs/alert_triage_config_all_params_selection.yml"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b3b51cba",
   "metadata": {},
   "outputs": [],
   "source": [
    "from pathlib import Path\n",
    "\n",
    "import pandas as pd\n",
    "\n",
    "# Load the optimizer results\n",
    "trials_df_path = Path(\"tmp_workflow/alert_triage_all_params_selection_output/optimizer/trials_dataframe_params.csv\")\n",
    "\n",
    "if trials_df_path.exists():\n",
    "    trials_df = pd.read_csv(trials_df_path)\n",
    "\n",
    "    print(\"Grid Search Optimization Results\")\n",
    "    print(\"=\" * 80)\n",
    "    print(\"\\nTrials Summary:\")\n",
    "    print(trials_df.to_string(index=False))\n",
    "    print(\"\\n\" + \"=\" * 80)\n",
    "else:\n",
    "    print(f\"No optimizer results found at {trials_df_path} -- run nat optimize first.\")"
   ]
  },
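  {
   "cell_type": "markdown",
   "id": "5e7ab9c3",
   "metadata": {},
   "source": [
    "Because this run optimizes two objectives (maximize classification accuracy, minimize latency), there is generally no single best trial but a Pareto front of non-dominated trade-offs. The sketch below shows one way to filter a trials dataframe down to non-dominated rows; the column names and values here are illustrative placeholders, so check the actual header of `trials_dataframe_params.csv` for the metric columns NAT emits.\n",
    "\n",
    "```python\n",
    "import pandas as pd\n",
    "\n",
    "def pareto_front(df: pd.DataFrame, maximize: str, minimize: str) -> pd.DataFrame:\n",
    "    \"\"\"Keep rows not dominated by another row (higher `maximize`, lower `minimize`).\"\"\"\n",
    "    keep = []\n",
    "    for _, row in df.iterrows():\n",
    "        dominated = (\n",
    "            (df[maximize] >= row[maximize])\n",
    "            & (df[minimize] <= row[minimize])\n",
    "            & ((df[maximize] > row[maximize]) | (df[minimize] < row[minimize]))\n",
    "        ).any()\n",
    "        keep.append(not dominated)\n",
    "    return df[keep]\n",
    "\n",
    "# Illustrative column names and values -- not real optimizer output.\n",
    "demo = pd.DataFrame({\n",
    "    \"classification_accuracy\": [0.43, 0.57, 0.71, 0.71],\n",
    "    \"llm_latency\": [1.2, 0.9, 1.5, 1.1],\n",
    "})\n",
    "front = pareto_front(demo, maximize=\"classification_accuracy\", minimize=\"llm_latency\")\n",
    "print(front)\n",
    "```"
   ]
  },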
  {
   "cell_type": "markdown",
   "id": "b69001d0",
   "metadata": {},
   "source": [
    "<!-- path-check-skip-begin -->\n",
    "<a id=\"eval-triage-agent2\"></a>\n",
    "## 3.4) Re-evaluate the optimized tool-calling agent\n",
    "\n",
    "After completing the `nat optimize` run above, a new file with the optimal parameters from the search has been serialized to `./tmp_workflow/alert_triage_all_params_selection_output/optimizer/optimized_config.yml`. Let's re-run those optimized parameters through `nat eval` and compare the performance.\n",
    "\n",
    "<div style=\"color: red; font-style: italic;\">\n",
    "<strong>Note:</strong> Performance of the optimized agent may vary depending on the size of the search space and the number of evaluation trials.\n",
    "</div>\n",
    "<!-- path-check-skip-end -->"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "21a94fec",
   "metadata": {
    "tags": [
     "skip_e2e_test"
    ]
   },
   "outputs": [],
   "source": [
    "# path-check-skip-next-line\n",
    "!nat eval --config_file ./tmp_workflow/alert_triage_all_params_selection_output/optimizer/optimized_config.yml"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "152cdd3d",
   "metadata": {
    "tags": [
     "skip_e2e_test"
    ]
   },
   "outputs": [],
   "source": [
    "import json\n",
    "\n",
    "# Load and display classification accuracy results\n",
    "# path-check-skip-next-line\n",
    "with open('./tmp_workflow/alert_triage_all_params_selection_output/classification_accuracy_output.json') as f:\n",
    "    classification_results = json.load(f)\n",
    "print(f\"Total Alerts Evaluated: {len(classification_results['eval_output_items'])}\")\n",
    "print(f\"Classification Accuracy Average Score: {classification_results['average_score']:.2%}\")\n",
    "\n",
    "# Load and display RAG answer accuracy results\n",
    "# path-check-skip-next-line\n",
    "with open('./tmp_workflow/alert_triage_all_params_selection_output/rag_accuracy_output.json') as f:\n",
    "    rag_results = json.load(f)\n",
    "\n",
    "print(f\"RAG Answer Accuracy Average Score: {rag_results['average_score']:.2f}\")\n"
   ]
  },
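  {
   "cell_type": "markdown",
   "id": "d2f81a64",
   "metadata": {},
   "source": [
    "Note that the output files above are overwritten in place on each eval run, so record the pre-optimization numbers before re-running `nat eval` if you want a side-by-side comparison. A minimal sketch of that comparison, using example accuracy values from a previous run rather than live results:\n",
    "\n",
    "```python\n",
    "# Illustrative before/after comparison; substitute the numbers you recorded\n",
    "# from your own eval outputs. These values are examples from a previous run.\n",
    "before = {'classification_accuracy': 0.43}\n",
    "after = {'classification_accuracy': 0.71}\n",
    "\n",
    "for metric in before:\n",
    "    delta = after[metric] - before[metric]\n",
    "    print(f\"{metric}: {before[metric]:.2%} -> {after[metric]:.2%} ({delta:+.2%})\")\n",
    "```"
   ]
  },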
  {
   "cell_type": "markdown",
   "id": "c411a80a",
   "metadata": {},
   "source": [
    "<!-- path-check-skip-begin -->\n",
    "The `nat eval` runs above compare the performance of the Alert Triage Agent before and after `nat optimize` has determined an optimal set of parameters from the search space. The search space in this demo is deliberately small: by default we only allow `nat optimize` to run a `sampler: grid` search across the backbone LLM, `temperature`, and the agent's system prompt. In a real-world use case, developers can expand the search space by adding more models, hyperparameter values, or optimizable parameters.\n",
    "\n",
    "While the search space shown above is small, a previous evaluation showed that the classification accuracy of our agent improved from 43% to 71% with only three generations of prompt optimization. We previously showed how to analyze the `trials_dataframe_params.csv` artifact, which reports the Pareto optimality of numeric parameter combinations (i.e., model, temperature). Per the NeMo Agent toolkit two-step optimization process (numeric parameter tuning followed by prompt tuning), we analyze the results of prompt optimization separately from numeric parameter optimization. Note that for each of the `ga_generations` iterations, a new `optimized_prompts_gen<i>.json` artifact was generated, in addition to `ga_history_prompts.csv` and `optimized_prompts.json`. These files trace the lineage of the prompt through the genetic algorithm's optimization process.\n",
    "\n",
    "**Before prompt optimization:**\n",
    "```\n",
    "**Role**\n",
    "You are a Triage Agent who determines if an alert is real,\n",
    "identifies likely root cause, and recommends actions.\n",
    "Steps\n",
    "1) Read the alert and key context.\n",
    "2) Choose and run only the most relevant diagnostic tools (each at most once).\n",
    "3) Review outputs and correlate with the alert.\n",
    "4) Decide root cause and alert validity.\n",
    "5) Produce a concise Markdown report with:\n",
    "- Alert Summary\n",
    "- Collected Metrics\n",
    "- Analysis\n",
    "- Recommended Actions\n",
    "- Alert Status (Valid | Abnormal but benign | False alarm)\n",
    "Rules\n",
    "- Be concise and structured.\n",
    "- Analyze tool outputs before deciding next steps.\n",
    "```\n",
    "\n",
    "**After prompt optimization:**\n",
    "```\n",
    "**Role**\n",
    "You are a Triage Agent responsible for diagnosing system alerts, identifying root causes, and providing actionable recommendations. To achieve this, follow these structured steps:\n",
    "\n",
    "**Objective**\n",
    "Determine the validity of a system alert, identify its likely root cause, and recommend corrective actions.\n",
    "\n",
    "**Constraints**\n",
    "- Analyze each alert independently.\n",
    "- Use diagnostic tools judiciously, running each at most once.\n",
    "- Ensure concise and structured reporting.\n",
    "\n",
    "**Steps**\n",
    "1. **Alert Analysis**: Read the alert and its key context carefully.\n",
    "2. **Diagnostic Tool Selection**: Choose the most relevant diagnostic tools based on the alert context.\n",
    "3. **Tool Execution**: Run the selected tools, ensuring each is executed at most once.\n",
    "4. **Output Analysis**: Review tool outputs and correlate them with the alert context.\n",
    "5. **Root Cause Analysis**: Determine the root cause of the alert and decide on its validity.\n",
    "6. **Reporting**: Produce a concise Markdown report containing:\n",
    "- **Alert Summary**: Brief overview of the alert.\n",
    "- **Collected Metrics**: Relevant metrics gathered from diagnostic tools.\n",
    "- **Analysis**: Correlation of tool outputs with the alert context.\n",
    "- **Recommended Actions**: Clear, actionable steps for resolution.\n",
    "- **Alert Status**: Categorize the alert as Valid, Abnormal but benign, or False alarm.\n",
    "\n",
    "**Rules**\n",
    "- Maintain a structured approach in your analysis and reporting.\n",
    "- Ensure that tool outputs are analyzed before deciding on next steps or drawing conclusions.\n",
    "- Prioritize conciseness and clarity in your report.\n",
    "\n",
    "**Example Report**\n",
    "# Alert Summary\n",
    "Brief description of the alert.\n",
    "\n",
    "# Collected Metrics\n",
    "- Metric 1: Value\n",
    "- Metric 2: Value\n",
    "\n",
    "# Analysis\n",
    "Correlation of tool outputs with the alert context.\n",
    "\n",
    "# Recommended Actions\n",
    "1. Action 1\n",
    "2. Action 2\n",
    "\n",
    "# Alert Status\n",
    "Valid/Abnormal but benign/False alarm\n",
    "\n",
    "**Schema**\n",
    "Reports must adhere to the provided Markdown schema to ensure consistency and clarity.\n",
    "```\n",
    "\n",
    "**Key differences between the prompts:**\n",
    "The genetic algorithm optimization process made several significant improvements to the prompt structure and content:\n",
    "1. **Enhanced Structure**: The optimized prompt adds explicit sections for **Objective** and **Constraints**, providing clearer context and boundaries for the agent's task.\n",
    "2. **More Detailed Steps**: Each step in the optimized version is more descriptive and includes bold labels (e.g., **Alert Analysis**, **Diagnostic Tool Selection**), making the workflow easier to follow.\n",
    "3. **Expanded Reporting Section**: The optimized prompt provides more detailed guidance on what each report section should contain, with explicit descriptions like \"Brief overview of the alert\" and \"Clear, actionable steps for resolution.\"\n",
    "4. **Concrete Example**: The optimized version includes a full **Example Report** section showing the exact Markdown format expected, which helps the agent understand the desired output structure.\n",
    "5. **Explicit Schema Reference**: The addition of a **Schema** section reinforces the importance of adhering to the Markdown format for consistency.\n",
    "6. **Refined Rules**: The rules section is more comprehensive, emphasizing structured approach and thorough analysis of tool outputs before drawing conclusions.\n",
    "\n",
    "These changes demonstrate how the optimization process evolved the prompt from a compact, functional instruction set to a more comprehensive, structured guide that provides clearer expectations and examples for the agent to follow.\n",
    "<!-- path-check-skip-end -->"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "a20f6d61",
   "metadata": {},
   "source": [
    "<a id=\"next-steps\"></a>\n",
    "# 4.0) Next steps\n",
    "\n",
    "Continue learning how to fully utilize the NVIDIA NeMo Agent toolkit by exploring the other documentation and advanced agents in the `examples` directory."
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "unew_312",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.12.11"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
