{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "d1de6f2b",
   "metadata": {},
   "source": [
    "# Task Navigation Efficiency Evaluator"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "a7acb533",
   "metadata": {},
   "source": [
    "### Getting Started\n",
    "\n",
    "This sample demonstrates how to use the Task Navigation Efficiency Evaluator to evaluate whether an agent's sequence of actions follows optimal decision-making patterns.\n",
    "\n",
    "Before running the sample:\n",
    "```bash\n",
    "pip install azure-ai-evaluation\n",
    "```\n",
    "Note: The Task Navigation Efficiency Evaluator is rule-based, so it does not require an Azure OpenAI configuration."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "dbc5612b",
   "metadata": {},
   "source": [
    "The Task Navigation Efficiency Evaluator measures how efficiently an agent navigates through a sequence of actions compared to an optimal task completion path.\n",
    "\n",
    "The evaluator returns a primary binary matching result plus detailed precision/recall/F1 (P/R/F1) metrics:\n",
    "\n",
    "**Primary Result:**\n",
    "- **Binary Match Result**: Pass/Fail based on the selected matching mode\n",
    "\n",
    "**Available Matching Modes:**\n",
    "- **Exact Match**: Agent's tool calls must exactly match the ground truth (default)\n",
    "- **In-Order Match**: All ground truth steps must appear in correct order (allows extra steps)\n",
    "- **Any-Order Match**: All ground truth steps must appear at least as many times as required, in any order (most lenient)\n",
    "\n",
    "**Additional Metrics (properties bag, 0.0 - 1.0):**\n",
    "- **Precision**: The fraction of the agent's steps that appear in the ground truth (i.e., how many of its steps were necessary)\n",
    "- **Recall**: The fraction of ground truth steps that the agent actually executed\n",
    "- **F1 Score**: Harmonic mean of precision and recall\n",
    "\n",
    "The evaluation requires the following inputs:\n",
    "- **Response**: The agent's response containing tool calls, as a list of messages or a string\n",
    "- **Ground Truth**: The expected tool/action steps, as a list of strings or as a tuple of (tool names, parameters) for parameter-level matching"
   ]
  },
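  {
   "cell_type": "markdown",
   "id": "3f9c0a11",
   "metadata": {},
   "source": [
    "As a rough illustration of how the additional metrics relate to the agent's steps, the computation can be sketched in plain Python with `collections.Counter`. This is an approximation for intuition (the `prf1` helper is hypothetical), not the evaluator's internal implementation:\n",
    "\n",
    "```python\n",
    "from collections import Counter\n",
    "\n",
    "def prf1(agent_steps, ground_truth_steps):\n",
    "    # Overlap between the two step lists, respecting multiplicity\n",
    "    overlap = sum((Counter(agent_steps) & Counter(ground_truth_steps)).values())\n",
    "    precision = overlap / len(agent_steps) if agent_steps else 0.0\n",
    "    recall = overlap / len(ground_truth_steps) if ground_truth_steps else 0.0\n",
    "    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0\n",
    "    return precision, recall, f1\n",
    "\n",
    "# Agent took 4 steps, 3 of which appear in the 3-step ground truth:\n",
    "# precision = 0.75, recall = 1.0, f1 ≈ 0.857\n",
    "prf1([\"search\", \"validate\", \"analyze\", \"report\"], [\"search\", \"analyze\", \"report\"])\n",
    "```"
   ]
  },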
  {
   "cell_type": "markdown",
   "id": "1be910ff",
   "metadata": {},
   "source": [
    "### Initialize Task Navigation Efficiency Evaluator"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "49a84a7d",
   "metadata": {},
   "outputs": [],
   "source": [
    "from azure.ai.evaluation._evaluators._task_navigation_efficiency import _TaskNavigationEfficiencyEvaluator, _TaskNavigationEfficiencyMatchingMode\n",
    "from pprint import pprint\n",
    "\n",
    "# Initialize with exact match mode\n",
    "task_navigation_efficiency_evaluator = _TaskNavigationEfficiencyEvaluator(\n",
    "    matching_mode=_TaskNavigationEfficiencyMatchingMode.EXACT_MATCH\n",
    ")\n",
    "\n",
    "# Other examples:\n",
    "# For in-order matching (allows extra steps but requires correct order)\n",
    "# task_navigation_efficiency_evaluator = _TaskNavigationEfficiencyEvaluator(matching_mode=_TaskNavigationEfficiencyMatchingMode.IN_ORDER_MATCH)\n",
    "\n",
    "# For any-order matching (most lenient - allows extra steps and different order)  \n",
    "# task_navigation_efficiency_evaluator = _TaskNavigationEfficiencyEvaluator(matching_mode=_TaskNavigationEfficiencyMatchingMode.ANY_ORDER_MATCH)\n",
    "\n",
    "# Or use defaults (exact match mode)\n",
    "# task_navigation_efficiency_evaluator = _TaskNavigationEfficiencyEvaluator()"
   ]
  },
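  {
   "cell_type": "markdown",
   "id": "5d2e7b40",
   "metadata": {},
   "source": [
    "Conceptually, the three matching modes can be sketched in plain Python over extracted tool-name lists. This is an illustrative approximation of the semantics described above, not the library's implementation:\n",
    "\n",
    "```python\n",
    "from collections import Counter\n",
    "\n",
    "def exact_match(agent_steps, ground_truth):\n",
    "    # Same steps, same order, nothing extra\n",
    "    return agent_steps == ground_truth\n",
    "\n",
    "def in_order_match(agent_steps, ground_truth):\n",
    "    # Ground truth must be a subsequence of the agent's steps (extras allowed)\n",
    "    it = iter(agent_steps)\n",
    "    return all(step in it for step in ground_truth)\n",
    "\n",
    "def any_order_match(agent_steps, ground_truth):\n",
    "    # Every ground truth step must appear often enough; order is ignored\n",
    "    return not (Counter(ground_truth) - Counter(agent_steps))\n",
    "\n",
    "agent = [\"search\", \"validate\", \"report\", \"analyze\"]\n",
    "gt = [\"search\", \"analyze\", \"report\"]\n",
    "exact_match(agent, gt)      # False: extra step and different order\n",
    "in_order_match(agent, gt)   # False: \"analyze\" appears after \"report\"\n",
    "any_order_match(agent, gt)  # True: all required steps are present\n",
    "```"
   ]
  },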
  {
   "cell_type": "markdown",
   "id": "0247c79d",
   "metadata": {},
   "source": [
    "### Task Navigation Efficiency Examples"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "da6060ac",
   "metadata": {},
   "source": [
    "#### Sample 1: Perfect Path (Exact Match)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "67e5d8fc",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Agent follows the exact optimal path\n",
    "response = [\n",
    "    {\n",
    "        \"role\": \"assistant\",\n",
    "        \"content\": [{\"type\": \"tool_call\", \"tool_call_id\": \"call_1\", \"name\": \"search\", \"arguments\": {}}],\n",
    "    },\n",
    "    {\n",
    "        \"role\": \"assistant\", \n",
    "        \"content\": [{\"type\": \"tool_call\", \"tool_call_id\": \"call_2\", \"name\": \"analyze\", \"arguments\": {}}],\n",
    "    },\n",
    "    {\n",
    "        \"role\": \"assistant\",\n",
    "        \"content\": [{\"type\": \"tool_call\", \"tool_call_id\": \"call_3\", \"name\": \"report\", \"arguments\": {}}],\n",
    "    },\n",
    "]\n",
    "\n",
    "ground_truth = [\"search\", \"analyze\", \"report\"]\n",
    "\n",
    "result = task_navigation_efficiency_evaluator(response=response, ground_truth=ground_truth)\n",
    "print(\"Perfect Path Results:\")\n",
    "pprint(result)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "0331b142",
   "metadata": {},
   "source": [
    "#### Sample 2: Complete Path with an Extra Step"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "c74e0597",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Agent performs all required steps, plus one unnecessary extra step\n",
    "response = [\n",
    "    {\n",
    "        \"role\": \"assistant\",\n",
    "        \"content\": [{\"type\": \"tool_call\", \"tool_call_id\": \"call_1\", \"name\": \"search\", \"arguments\": {}}],\n",
    "    },\n",
    "    {\n",
    "        \"role\": \"assistant\",\n",
    "        \"content\": [{\"type\": \"tool_call\", \"tool_call_id\": \"call_2\", \"name\": \"validate\", \"arguments\": {}}],\n",
    "    },\n",
    "    {\n",
    "        \"role\": \"assistant\", \n",
    "        \"content\": [{\"type\": \"tool_call\", \"tool_call_id\": \"call_3\", \"name\": \"analyze\", \"arguments\": {}}],\n",
    "    },\n",
    "    {\n",
    "        \"role\": \"assistant\",\n",
    "        \"content\": [{\"type\": \"tool_call\", \"tool_call_id\": \"call_4\", \"name\": \"report\", \"arguments\": {}}],\n",
    "    },\n",
    "]\n",
    "\n",
    "ground_truth = [\"search\", \"analyze\", \"report\"]\n",
    "\n",
    "result = task_navigation_efficiency_evaluator(response=response, ground_truth=ground_truth)\n",
    "print(\"\\nPath with Extra Steps Results:\")\n",
    "pprint(result)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "05b2736a",
   "metadata": {},
   "source": [
    "#### Sample 3: Inefficient Path (Wrong Order)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "c443db2c",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Agent performs all required steps but in wrong order\n",
    "response = [\n",
    "    {\n",
    "        \"role\": \"assistant\",\n",
    "        \"content\": [{\"type\": \"tool_call\", \"tool_call_id\": \"call_1\", \"name\": \"report\", \"arguments\": {}}],\n",
    "    },\n",
    "    {\n",
    "        \"role\": \"assistant\",\n",
    "        \"content\": [{\"type\": \"tool_call\", \"tool_call_id\": \"call_2\", \"name\": \"search\", \"arguments\": {}}],\n",
    "    },\n",
    "    {\n",
    "        \"role\": \"assistant\", \n",
    "        \"content\": [{\"type\": \"tool_call\", \"tool_call_id\": \"call_3\", \"name\": \"analyze\", \"arguments\": {}}],\n",
    "    },\n",
    "]\n",
    "\n",
    "ground_truth = [\"search\", \"analyze\", \"report\"]\n",
    "\n",
    "# Using in-order matching mode to demonstrate the difference\n",
    "in_order_task_navigation_efficiency_evaluator = _TaskNavigationEfficiencyEvaluator(matching_mode=_TaskNavigationEfficiencyMatchingMode.IN_ORDER_MATCH)\n",
    "\n",
    "result = in_order_task_navigation_efficiency_evaluator(response=response, ground_truth=ground_truth)\n",
    "print(\"\\nWrong Order Results:\")\n",
    "pprint(result)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "a21e2af8",
   "metadata": {},
   "source": [
    "#### Sample 4: Incomplete Path (Missing Steps)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "c25e2e02",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Agent performs only some of the required steps (incomplete)\n",
    "response = [\n",
    "    {\n",
    "        \"role\": \"assistant\",\n",
    "        \"content\": [{\"type\": \"tool_call\", \"tool_call_id\": \"call_1\", \"name\": \"search\", \"arguments\": {}}],\n",
    "    },\n",
    "    {\n",
    "        \"role\": \"assistant\",\n",
    "        \"content\": [{\"type\": \"tool_call\", \"tool_call_id\": \"call_2\", \"name\": \"analyze\", \"arguments\": {}}],\n",
    "    },\n",
    "]\n",
    "\n",
    "ground_truth = [\"search\", \"analyze\", \"report\"]\n",
    "\n",
    "result = task_navigation_efficiency_evaluator(response=response, ground_truth=ground_truth)\n",
    "print(\"\\nMissing Steps Results:\")\n",
    "pprint(result)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "e5bce7d3",
   "metadata": {},
   "source": [
    "#### Sample 5: Real-World Customer Service Scenario"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "ab8a6d6f",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Real-world example: Customer service agent handling a refund request\n",
    "response = [\n",
    "    {\n",
    "        \"role\": \"assistant\",\n",
    "        \"content\": [{\"type\": \"tool_call\", \"tool_call_id\": \"call_1\", \"name\": \"lookup_order\", \"arguments\": {\"order_id\": \"12345\"}}],\n",
    "    },\n",
    "    {\n",
    "        \"role\": \"assistant\",\n",
    "        \"content\": [{\"type\": \"tool_call\", \"tool_call_id\": \"call_2\", \"name\": \"check_inventory\", \"arguments\": {\"product_id\": \"ABC123\"}}],\n",
    "    },\n",
    "    {\n",
    "        \"role\": \"assistant\", \n",
    "        \"content\": [{\"type\": \"tool_call\", \"tool_call_id\": \"call_3\", \"name\": \"calculate_refund\", \"arguments\": {\"order_id\": \"12345\"}}],\n",
    "    },\n",
    "    {\n",
    "        \"role\": \"assistant\",\n",
    "        \"content\": [{\"type\": \"tool_call\", \"tool_call_id\": \"call_4\", \"name\": \"process_refund\", \"arguments\": {\"order_id\": \"12345\", \"amount\": \"29.99\"}}],\n",
    "    },\n",
    "]\n",
    "\n",
    "ground_truth = [\"lookup_order\", \"calculate_refund\", \"process_refund\"]\n",
    "\n",
    "result = task_navigation_efficiency_evaluator(response=response, ground_truth=ground_truth)\n",
    "print(\"\\nCustomer Service Results:\")\n",
    "pprint(result)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "a6626053",
   "metadata": {},
   "source": [
    "#### Sample 6: Complex Path with Duplicates"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b0b2a492",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Agent repeats some steps and includes extra ones\n",
    "response = [\n",
    "    {\n",
    "        \"role\": \"assistant\",\n",
    "        \"content\": [{\"type\": \"tool_call\", \"tool_call_id\": \"call_1\", \"name\": \"search\", \"arguments\": {}}],\n",
    "    },\n",
    "    {\n",
    "        \"role\": \"assistant\",\n",
    "        \"content\": [{\"type\": \"tool_call\", \"tool_call_id\": \"call_2\", \"name\": \"search\", \"arguments\": {}}],  # duplicate\n",
    "    },\n",
    "    {\n",
    "        \"role\": \"assistant\", \n",
    "        \"content\": [{\"type\": \"tool_call\", \"tool_call_id\": \"call_3\", \"name\": \"validate\", \"arguments\": {}}],  # extra step\n",
    "    },\n",
    "    {\n",
    "        \"role\": \"assistant\",\n",
    "        \"content\": [{\"type\": \"tool_call\", \"tool_call_id\": \"call_4\", \"name\": \"analyze\", \"arguments\": {}}],\n",
    "    },\n",
    "    {\n",
    "        \"role\": \"assistant\",\n",
    "        \"content\": [{\"type\": \"tool_call\", \"tool_call_id\": \"call_5\", \"name\": \"report\", \"arguments\": {}}],\n",
    "    },\n",
    "]\n",
    "\n",
    "ground_truth = [\"search\", \"analyze\", \"report\"]\n",
    "\n",
    "result = task_navigation_efficiency_evaluator(response=response, ground_truth=ground_truth)\n",
    "print(\"\\nComplex Path with Duplicates Results:\")\n",
    "pprint(result)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d58e09b5",
   "metadata": {},
   "source": [
    "#### Sample 7: Edge Cases and Error Scenarios"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "64cd71e9",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Test edge cases\n",
    "\n",
    "# Test with empty response\n",
    "try:\n",
    "    response = []\n",
    "    ground_truth = [\"search\", \"analyze\", \"report\"]\n",
    "    \n",
    "    result = task_navigation_efficiency_evaluator(response=response, ground_truth=ground_truth)\n",
    "    print(\"\\nEmpty Response Results:\")\n",
    "    pprint(result)\n",
    "except Exception as e:\n",
    "    print(f\"Error with empty response: {e}\")\n",
    "\n",
    "# Test with empty ground truth (should raise error)\n",
    "try:\n",
    "    response = [\n",
    "        {\n",
    "            \"role\": \"assistant\",\n",
    "            \"content\": [{\"type\": \"tool_call\", \"tool_call_id\": \"call_1\", \"name\": \"search\", \"arguments\": {}}],\n",
    "        }\n",
    "    ]\n",
    "    ground_truth = []\n",
    "    \n",
    "    result = task_navigation_efficiency_evaluator(response=response, ground_truth=ground_truth)\n",
    "    print(\"\\nEmpty Ground Truth Results:\")\n",
    "    pprint(result)\n",
    "except Exception as e:\n",
    "    print(f\"Error with empty ground truth: {e}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "a8f6dc32",
   "metadata": {},
   "source": [
    "#### Sample 8: Tuple Format with Parameters"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "1b1a1a0c",
   "metadata": {},
   "outputs": [],
   "source": [
    "# The evaluator also supports a tuple ground-truth format with parameters for exact parameter matching\n",
    "response_with_params = [\n",
    "    {\n",
    "        \"role\": \"assistant\",\n",
    "        \"content\": [{\"type\": \"tool_call\", \"tool_call_id\": \"call_1\", \"name\": \"search\", \"arguments\": {\"query\": \"test\"}}],\n",
    "    },\n",
    "]\n",
    "\n",
    "# Ground truth using tuple format: (tool_names, parameters_dict)\n",
    "# Parameters must match exactly for tools to be considered matching\n",
    "ground_truth_with_params = ([\"search\"], {\"search\": {\"query\": \"test\"}})\n",
    "\n",
    "result = task_navigation_efficiency_evaluator(response=response_with_params, ground_truth=ground_truth_with_params)\n",
    "print(\"\\nTuple Format with Parameters Results:\")\n",
    "pprint(result)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "cd649c7a",
   "metadata": {},
   "source": [
    "#### Sample 9: String Response Input Type"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "675a1ab7",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Demonstrate the string response input type\n",
    "# Note: a plain natural-language string like this contains no structured tool\n",
    "# call information for the evaluator to parse, so expect no steps to be matched\n",
    "string_response = \"I'll help you with that. Let me search for information, then analyze the results, and finally provide a report.\"\n",
    "ground_truth = [\"search\", \"analyze\", \"report\"]\n",
    "\n",
    "result = task_navigation_efficiency_evaluator(response=string_response, ground_truth=ground_truth)\n",
    "print(\"\\nString Response Results:\")\n",
    "pprint(result)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "6741e8a0",
   "metadata": {},
   "source": [
    "### Evaluation Analysis Helper Function"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "a68181b7",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Helper functions for analysis\n",
    "\n",
    "def analyze_task_navigation_efficiency(response, ground_truth, scenario_name, evaluator=None):\n",
    "    \"\"\"\n",
    "    Helper function to analyze and display task navigation efficiency results\n",
    "    \"\"\"\n",
    "    if evaluator is None:\n",
    "        evaluator = task_navigation_efficiency_evaluator\n",
    "        \n",
    "    result = evaluator(response=response, ground_truth=ground_truth)\n",
    "    \n",
    "    print(f\"\\n{'='*50}\")\n",
    "    print(f\"Analysis for: {scenario_name}\")\n",
    "    print(f\"{'='*50}\")\n",
    "    \n",
    "    print(f\"Ground Truth Steps: {ground_truth}\")\n",
    "    print(f\"Evaluator Matching Mode: {evaluator.matching_mode.value}\")\n",
    "    print(f\"{'='*50}\")\n",
    "    \n",
    "    # Display the returned results\n",
    "    for key, value in result.items():\n",
    "        if key == \"task_navigation_efficiency_details\":\n",
    "            print(f\"  {key}:\")\n",
    "            for prop_key, prop_value in value.items():\n",
    "                print(f\"    {prop_key}: {prop_value:.3f}\")\n",
    "        else:\n",
    "            print(f\"  {key}: {value}\")\n",
    "\n",
    "    return result\n",
    "\n",
    "# Example with different matching modes\n",
    "def compare_matching_modes(response, ground_truth, scenario_name):\n",
    "    \"\"\"\n",
    "    Compare results across different matching modes for the same scenario\n",
    "    \"\"\"\n",
    "    print(f\"\\n{'='*60}\")\n",
    "    print(f\"Matching Mode Comparison for: {scenario_name}\")\n",
    "    print(f\"{'='*60}\")\n",
    "    \n",
    "    matching_modes_to_test = [\n",
    "        _TaskNavigationEfficiencyMatchingMode.EXACT_MATCH,\n",
    "        _TaskNavigationEfficiencyMatchingMode.IN_ORDER_MATCH,\n",
    "        _TaskNavigationEfficiencyMatchingMode.ANY_ORDER_MATCH\n",
    "    ]\n",
    "    \n",
    "    for mode in matching_modes_to_test:\n",
    "        evaluator = _TaskNavigationEfficiencyEvaluator(matching_mode=mode)\n",
    "        result = evaluator(response=response, ground_truth=ground_truth)\n",
    "        \n",
    "        # Get the main result value\n",
    "        result_value = result.get(\"task_navigation_efficiency_result\", \"N/A\")\n",
    "        print(f\"  {mode.value.upper():15}: {result_value}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "86c22044",
   "metadata": {},
   "source": [
    "### Example Usage of Helper Function"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f3ad9842",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Example: Using the helper function to analyze different scenarios\n",
    "\n",
    "# Scenario 1: Perfect efficiency\n",
    "perfect_response = [\n",
    "    {\"role\": \"assistant\", \"content\": [{\"type\": \"tool_call\", \"tool_call_id\": \"call_1\", \"name\": \"authenticate\", \"arguments\": {}}]},\n",
    "    {\"role\": \"assistant\", \"content\": [{\"type\": \"tool_call\", \"tool_call_id\": \"call_2\", \"name\": \"fetch_data\", \"arguments\": {}}]},\n",
    "    {\"role\": \"assistant\", \"content\": [{\"type\": \"tool_call\", \"tool_call_id\": \"call_3\", \"name\": \"process_result\", \"arguments\": {}}]},\n",
    "]\n",
    "perfect_ground_truth = [\"authenticate\", \"fetch_data\", \"process_result\"]\n",
    "\n",
    "analyze_task_navigation_efficiency(perfect_response, perfect_ground_truth, \"Perfect Efficiency Example\")\n",
    "\n",
    "# Scenario 2: Inefficient with extra steps\n",
    "inefficient_response = [\n",
    "    {\"role\": \"assistant\", \"content\": [{\"type\": \"tool_call\", \"tool_call_id\": \"call_1\", \"name\": \"authenticate\", \"arguments\": {}}]},\n",
    "    {\"role\": \"assistant\", \"content\": [{\"type\": \"tool_call\", \"tool_call_id\": \"call_2\", \"name\": \"log_attempt\", \"arguments\": {}}]},  # extra\n",
    "    {\"role\": \"assistant\", \"content\": [{\"type\": \"tool_call\", \"tool_call_id\": \"call_3\", \"name\": \"fetch_data\", \"arguments\": {}}]},\n",
    "    {\"role\": \"assistant\", \"content\": [{\"type\": \"tool_call\", \"tool_call_id\": \"call_4\", \"name\": \"validate_data\", \"arguments\": {}}]},  # extra\n",
    "    {\"role\": \"assistant\", \"content\": [{\"type\": \"tool_call\", \"tool_call_id\": \"call_5\", \"name\": \"process_result\", \"arguments\": {}}]},\n",
    "]\n",
    "inefficient_ground_truth = [\"authenticate\", \"fetch_data\", \"process_result\"]\n",
    "\n",
    "analyze_task_navigation_efficiency(inefficient_response, inefficient_ground_truth, \"Inefficient Path with Extra Steps\")\n",
    "\n",
    "# Demonstrate different matching modes\n",
    "print(\"\\n\" + \"=\"*60)\n",
    "print(\"COMPARING DIFFERENT MATCHING MODES\")\n",
    "print(\"=\"*60)\n",
    "\n",
    "compare_matching_modes(inefficient_response, inefficient_ground_truth, \"Inefficient Path Analysis\")\n",
    "\n",
    "# Example: Creating evaluators with different matching modes\n",
    "print(f\"\\n{'='*60}\")\n",
    "print(\"INDIVIDUAL MATCHING MODE EXAMPLES\")\n",
    "print(\"=\"*60)\n",
    "\n",
    "# Exact match evaluator\n",
    "exact_match_evaluator = _TaskNavigationEfficiencyEvaluator(matching_mode=_TaskNavigationEfficiencyMatchingMode.EXACT_MATCH)\n",
    "exact_result = exact_match_evaluator(response=perfect_response, ground_truth=perfect_ground_truth)\n",
    "print(f\"Exact Match Evaluator: {exact_result}\")\n",
    "\n",
    "# In-order match evaluator\n",
    "in_order_evaluator = _TaskNavigationEfficiencyEvaluator(matching_mode=_TaskNavigationEfficiencyMatchingMode.IN_ORDER_MATCH)\n",
    "in_order_result = in_order_evaluator(response=inefficient_response, ground_truth=inefficient_ground_truth)\n",
    "print(f\"In-Order Match Evaluator: {in_order_result}\")\n",
    "\n",
    "# Any-order match evaluator (most lenient)\n",
    "any_order_evaluator = _TaskNavigationEfficiencyEvaluator(matching_mode=_TaskNavigationEfficiencyMatchingMode.ANY_ORDER_MATCH)\n",
    "any_order_result = any_order_evaluator(response=inefficient_response, ground_truth=inefficient_ground_truth)\n",
    "print(f\"Any-Order Match Evaluator: {any_order_result}\")"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "test_agent_evaluator_prp",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.12.9"
   }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
