{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# SageMaker Custom Scorer Evaluation - Demo\n",
    "\n",
    "This notebook demonstrates how to use the CustomScorerEvaluator to evaluate models with custom evaluator functions."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Setup\n",
    "\n",
    "Import necessary modules."
   ]
  },
  {
   "cell_type": "code",
   "metadata": {},
   "source": [
    "# Configure AWS credentials and region\n",
    "#! ada credentials update --provider=isengard --account=<> --role=Admin --profile=default --once\n",
    "#! aws configure set region us-west-2"
   ],
   "outputs": [],
   "execution_count": null
  },
  {
   "cell_type": "code",
   "metadata": {},
   "source": [
    "from sagemaker.train.evaluate import CustomScorerEvaluator\n",
    "from rich.pretty import pprint\n",
    "\n",
    "# Configure logging to show INFO messages\n",
    "import logging\n",
    "logging.basicConfig(\n",
    "    level=logging.INFO,\n",
    "    format='%(levelname)s - %(name)s - %(message)s'\n",
    ")"
   ],
   "outputs": [],
   "execution_count": null
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Configure Evaluation Parameters\n",
    "\n",
    "Set up the parameters for your custom scorer evaluation."
   ]
  },
  {
   "cell_type": "code",
   "metadata": {},
   "source": [
    "# Evaluator ARN (custom evaluator from AI Registry)\n",
    "# evaluator_arn = \"arn:aws:sagemaker:us-west-2:<>:hub-content/AIRegistry/JsonDoc/00-goga-qa-evaluation/1.0.0\"\n",
    "# evaluator_arn = \"arn:aws:sagemaker:us-west-2:<>:hub-content/AIRegistry/JsonDoc/nikmehta-reward-function/1.0.0\"\n",
    "# evaluator_arn = \"arn:aws:sagemaker:us-west-2:<>:hub-content/AIRegistry/JsonDoc/eval-lambda-test/0.0.1\"\n",
    "evaluator_arn = \"arn:aws:sagemaker:us-west-2:<>:hub-content/F3LMYANDKWPZCROJVCKMJ7TOML6QMZBZRRQOVTUL45VUK7PJ4SXA/JsonDoc/eval-lambda-test/0.0.1\"\n",
    "\n",
    "# Dataset - can be S3 URI or AIRegistry DataSet ARN\n",
    "dataset = \"s3://sagemaker-us-west-2-<>/studio-users/d20251107t195443/datasets/2025-11-07T19-55-37-609Z/zc_test.jsonl\"\n",
    "\n",
    "# Base model - can be:\n",
    "# 1. Model package ARN: \"arn:aws:sagemaker:region:account:model-package/name/version\"\n",
    "# 2. JumpStart model ID: \"llama-3-2-1b-instruct\" (evaluation with a base model only is not yet implemented and does not currently work)\n",
    "base_model = \"arn:aws:sagemaker:us-west-2:<>:model-package/test-finetuned-models-gamma/28\"\n",
    "\n",
    "# S3 location for outputs\n",
    "s3_output_path = \"s3://mufi-test-serverless-smtj/eval/\"\n",
    "\n",
    "# Optional: MLflow tracking server ARN\n",
    "mlflow_resource_arn = \"arn:aws:sagemaker:us-west-2:<>:mlflow-tracking-server/mmlu-eval-experiment\"\n",
    "\n",
    "print(\"Configuration:\")\n",
    "print(f\"  Evaluator: {evaluator_arn}\")\n",
    "print(f\"  Dataset: {dataset}\")\n",
    "print(f\"  Base Model: {base_model}\")\n",
    "print(f\"  Output Location: {s3_output_path}\")"
   ],
   "outputs": [],
   "execution_count": null
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Create CustomScorerEvaluator Instance\n",
    "\n",
    "Instantiate the evaluator with your configuration. The evaluator can accept:\n",
    "- **Custom Evaluator ARN** (string): Points to your custom evaluator in AI Registry\n",
    "- **Built-in Metric** (string or enum): Use preset metrics like \"code_executions\", \"math_answers\", etc.\n",
    "- **Evaluator Object**: A sagemaker.ai_registry.evaluator.Evaluator instance"
   ]
  },
  {
   "cell_type": "code",
   "metadata": {},
   "source": [
    "# Create evaluator with custom evaluator ARN\n",
    "evaluator = CustomScorerEvaluator(\n",
    "    evaluator=evaluator_arn,  # Custom evaluator ARN\n",
    "    dataset=dataset,\n",
    "    model=base_model,\n",
    "    s3_output_path=s3_output_path,\n",
    "    mlflow_resource_arn=mlflow_resource_arn,\n",
    "    # model_package_group=\"arn:aws:sagemaker:us-west-2:<>:model-package-group/Demo-test-deb-2\", \n",
    "    evaluate_base_model=False  # Set to True to also evaluate the base model\n",
    ")\n",
    "\n",
    "print(\"\\n✓ CustomScorerEvaluator created successfully\")\n",
    "pprint(evaluator)"
   ],
   "outputs": [],
   "execution_count": null
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Optionally Update the Hyperparameters"
   ]
  },
  {
   "cell_type": "code",
   "metadata": {},
   "source": [
    "pprint(evaluator.hyperparameters.to_dict())\n",
    "\n",
    "# optionally update hyperparameters\n",
    "# evaluator.hyperparameters.temperature = \"0.1\"\n",
    "\n",
    "# optionally get more info on types, limits, defaults.\n",
    "# evaluator.hyperparameters.get_info()"
   ],
   "outputs": [],
   "execution_count": null
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Alternative: Using Built-in Metrics\n",
    "\n",
    "Instead of a custom evaluator ARN, you can use built-in metrics:"
   ]
  },
  {
   "cell_type": "code",
   "metadata": {},
   "source": [
    "# Example with built-in metrics (commented out)\n",
    "# from sagemaker.train.evaluate import get_builtin_metrics\n",
    "# \n",
    "# BuiltInMetric = get_builtin_metrics()\n",
    "# \n",
    "# evaluator_builtin = CustomScorerEvaluator(\n",
    "#     evaluator=BuiltInMetric.PRIME_MATH,  # Or use string: \"prime_math\"\n",
    "#     dataset=dataset,\n",
    "#     model=base_model,\n",
    "#     s3_output_path=s3_output_path\n",
    "# )"
   ],
   "outputs": [],
   "execution_count": null
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Start Evaluation\n",
    "\n",
    "Call `evaluate()` to start the evaluation job. This will:\n",
    "1. Create or update the evaluation pipeline\n",
    "2. Start a pipeline execution\n",
    "3. Return an `EvaluationPipelineExecution` object for monitoring"
   ]
  },
  {
   "cell_type": "code",
   "metadata": {},
   "source": [
    "# Start evaluation\n",
    "execution = evaluator.evaluate()\n",
    "\n",
    "print(\"\\n✓ Evaluation execution started successfully!\")\n",
    "print(f\"  Execution Name: {execution.name}\")\n",
    "print(f\"  Pipeline Execution ARN: {execution.arn}\")\n",
    "print(f\"  Status: {execution.status.overall_status}\")"
   ],
   "outputs": [],
   "execution_count": null
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Monitor Job Progress\n",
    "\n",
    "Use `refresh()` to update the job status, or `wait()` to block until completion."
   ]
  },
  {
   "cell_type": "code",
   "metadata": {},
   "source": [
    "# Check current status\n",
    "execution.refresh()\n",
    "print(f\"Current Status: {execution.status.overall_status}\")\n",
    "\n",
    "pprint(execution.status)"
   ],
   "outputs": [],
   "execution_count": null
  },
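  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As an alternative to `wait()`, you can poll manually with `refresh()`. The commented-out sketch below assumes terminal status names of `Succeeded`, `Failed`, and `Stopped`; verify these against the status values your executions actually report before relying on them."
   ]
  },
  {
   "cell_type": "code",
   "metadata": {},
   "source": [
    "# Optional: manual polling loop (terminal status names are assumptions)\n",
    "# import time\n",
    "#\n",
    "# TERMINAL_STATES = {\"Succeeded\", \"Failed\", \"Stopped\"}\n",
    "# while execution.status.overall_status not in TERMINAL_STATES:\n",
    "#     time.sleep(30)\n",
    "#     execution.refresh()\n",
    "#     print(f\"Status: {execution.status.overall_status}\")"
   ],
   "outputs": [],
   "execution_count": null
  },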
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Wait for Completion\n",
    "\n",
    "Block execution until the job completes. This provides a rich visual experience in Jupyter notebooks."
   ]
  },
  {
   "cell_type": "code",
   "metadata": {},
   "source": [
    "# Wait for job to complete (with rich visual feedback)\n",
    "execution.wait(poll=30, timeout=3600)\n",
    "\n",
    "print(f\"\\nFinal Status: {execution.status.overall_status}\")"
   ],
   "outputs": [],
   "execution_count": null
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## View Results\n",
    "\n",
    "Display the evaluation results once the execution has completed."
   ]
  },
  {
   "cell_type": "code",
   "metadata": {},
   "source": [
    "# Display the evaluation results\n",
    "execution.show_results()"
   ],
   "outputs": [],
   "execution_count": null
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Retrieve Existing Job\n",
    "\n",
    "You can retrieve a previously started evaluation job using its ARN."
   ]
  },
  {
   "cell_type": "code",
   "metadata": {},
   "source": [
    "from sagemaker.train.evaluate import EvaluationPipelineExecution\n",
    "\n",
    "# Get existing job by ARN\n",
    "existing_arn = execution.arn  # Or use a specific ARN\n",
    "\n",
    "existing_exec = EvaluationPipelineExecution.get(arn=existing_arn)\n",
    "\n",
    "print(f\"Retrieved job: {existing_exec.name}\")\n",
    "print(f\"Status: {existing_exec.status.overall_status}\")"
   ],
   "outputs": [],
   "execution_count": null
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## List All Custom Scorer Evaluations\n",
    "\n",
    "Retrieve all custom scorer evaluation executions."
   ]
  },
  {
   "cell_type": "code",
   "metadata": {},
   "source": [
    "# Get all custom scorer evaluations\n",
    "all_executions = list(CustomScorerEvaluator.get_all())\n",
    "\n",
    "print(f\"Found {len(all_executions)} custom scorer evaluation(s):\\n\")\n",
    "# Use a distinct loop variable so the `execution` object above is not overwritten\n",
    "for exec_item in all_executions:\n",
    "    print(f\"  - {exec_item.name} - {exec_item.arn}: {exec_item.status.overall_status}\")"
   ],
   "outputs": [],
   "execution_count": null
  },
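  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Because `get_all()` yields plain execution objects, you can filter them with ordinary Python. A minimal sketch (the `\"Failed\"` status string is an assumption):"
   ]
  },
  {
   "cell_type": "code",
   "metadata": {},
   "source": [
    "# Filter the executions retrieved above by status (the status value is an assumption)\n",
    "failed_execs = [e for e in all_executions if e.status.overall_status == \"Failed\"]\n",
    "print(f\"Found {len(failed_execs)} failed execution(s)\")"
   ],
   "outputs": [],
   "execution_count": null
  },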
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Stop a Running Job (Optional)\n",
    "\n",
    "You can stop a running evaluation if needed."
   ]
  },
  {
   "cell_type": "code",
   "metadata": {},
   "source": [
    "# Uncomment to stop the job\n",
    "# execution.stop()\n",
    "# print(f\"Execution stopped. Status: {execution.status.overall_status}\")"
   ],
   "outputs": [],
   "execution_count": null
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Summary\n",
    "\n",
    "This notebook demonstrated:\n",
    "1. ✅ Creating a CustomScorerEvaluator with a custom evaluator ARN\n",
    "2. ✅ Starting an evaluation job\n",
    "3. ✅ Monitoring job progress with `refresh()` and `wait()`\n",
    "4. ✅ Retrieving existing jobs\n",
    "5. ✅ Listing all custom scorer evaluations\n",
    "\n",
    "### Key Points:\n",
    "- The `evaluator` parameter accepts:\n",
    "  - Custom evaluator ARN (for AI Registry evaluators)\n",
    "  - Built-in metric names (\"code_executions\", \"math_answers\", \"exact_match\")\n",
    "  - Evaluator objects (`sagemaker.ai_registry.evaluator.Evaluator` instances)\n",
    "- Set `evaluate_base_model=False` to only evaluate the custom model\n",
    "- Use `execution.wait()` for automatic monitoring with rich visual feedback\n",
    "- Use `execution.refresh()` for manual status updates\n",
    "- The SageMaker session is automatically inferred from your environment"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.12.12"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
