{
  "nbformat": 4,
  "nbformat_minor": 0,
  "metadata": {
    "colab": {
      "provenance": []
    },
    "kernelspec": {
      "name": "python3",
      "display_name": "Python 3"
    },
    "language_info": {
      "name": "python"
    }
  },
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "header-section"
      },
      "source": [
        "# Multimodal Model Evaluation for Melanoma Detection\n",
        "\n",
        "<table align=\"left\">\n",
        "  <td style=\"text-align: center\">\n",
        "    <a href=\"https://colab.research.google.com/github/GoogleCloudPlatform/kubernetes-engine-samples/blob/main/ai-ml/axolotl-multimodal-finetuning-gemma/evaluation/Evaluation.ipynb\">\n",
        "      <img src=\"https://cloud.google.com/ml-engine/images/colab-logo-32px.png\" alt=\"Google Colaboratory logo\"><br> Run in Colab\n",
        "    </a>\n",
        "  </td>\n",
        "  <td style=\"text-align: center\">\n",
        "    <a href=\"https://console.cloud.google.com/vertex-ai/colab/import/https:%2F%2Fraw.githubusercontent.com%2FGoogleCloudPlatform%2Fkubernetes-engine-samples%2Fmain%2Fai-ml%2Faxolotl-multimodal-finetuning-gemma%2Fevaluation%2FEvaluation.ipynb\">\n",
        "      <img width=\"32px\" src=\"https://lh3.googleusercontent.com/JmcxdQi-qOpctIvWKgPtrzZdJJK-J3sWE1RsfjZNwshCFgE_9fULcNpuXYTilIR2hjwN\" alt=\"Google Cloud Colab Enterprise logo\"><br> Run in Colab Enterprise\n",
        "    </a>\n",
        "  </td>\n",
        "  <td style=\"text-align: center\">\n",
        "    <a href=\"https://github.com/GoogleCloudPlatform/kubernetes-engine-samples/blob/main/ai-ml/axolotl-multimodal-finetuning-gemma/evaluation/Evaluation.ipynb\">\n",
        "      <img src=\"https://cloud.google.com/ml-engine/images/github-logo-32px.png\" alt=\"GitHub logo\"><br> View on GitHub\n",
        "    </a>\n",
        "  </td>\n",
        "  <td style=\"text-align: center\">\n",
        "    <a href=\"https://console.cloud.google.com/vertex-ai/workbench/deploy-notebook?download_url=https://raw.githubusercontent.com/GoogleCloudPlatform/kubernetes-engine-samples/main/ai-ml/axolotl-multimodal-finetuning-gemma/evaluation/Evaluation.ipynb\">\n",
        "      <img src=\"https://lh3.googleusercontent.com/UiNooY4LUgW_oTvpsNhPpQzsstV5W8F7rYgxgGBD85cWJoLmrOzhVs_ksK_vgx40SHs7jCqkTkCk=e14-rj-sc0xffffff-h130-w32\" alt=\"Vertex AI logo\"><br> Open in Vertex AI Workbench\n",
        "    </a>\n",
        "  </td>\n",
        "</table>"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "overview-section"
      },
      "source": [
        "## Overview\n",
        "\n",
        "This notebook demonstrates how to evaluate the performance of fine-tuned multimodal AI models on the [SIIM-ISIC Melanoma Classification](https://challenge2020.isic-archive.com/) dataset. We'll compare three models (base Gemma 3, our fine-tuned Gemma 3, and MedGemma) to assess improvements in melanoma detection.\n",
        "\n",
        "### What you'll learn\n",
        "\n",
        "- How to load and compare multimodal models (base, fine-tuned, and domain-specific)\n",
        "- How to run batch inference on medical imaging datasets\n",
        "- How to calculate and visualize key performance metrics\n",
        "- How to interpret model improvements for clinical applications\n",
        "- How to handle edge cases and model classification challenges\n",
        "\n",
        "### Prerequisites\n",
        "\n",
        "- Completed fine-tuning using the main repository\n",
        "- Access to Google Cloud Storage with model files\n",
        "- Hugging Face account with Gemma 3 access\n",
        "- GPU-enabled environment (recommended: A100 or better)\n",
        "\n",
        "### Time to complete\n",
        "\n",
        "45-60 minutes (depending on number of test images and GPU availability)\n",
        "\n",
        "---"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "c3091965"
      },
      "source": [
        "## Introduction\n",
        "\n",
        "The SIIM-ISIC dataset contains over 33,000 dermoscopic images of skin lesions with corresponding labels indicating whether each lesion is benign or malignant melanoma. After fine-tuning Gemma 3 on this dataset, we need to rigorously evaluate its performance against both the base model and medical-domain-specific models.\n",
        "\n",
        "Our evaluation will:\n",
        "\n",
        "1. **Load multiple models** - Base Gemma 3, fine-tuned Gemma 3, and MedGemma\n",
        "2. **Run inference** - Process test images through each model\n",
        "3. **Calculate metrics** - Accuracy, precision, recall, specificity, and F1 scores\n",
        "4. **Visualize results** - Create comprehensive performance comparisons\n",
        "5. **Analyze improvements** - Quantify the benefits of fine-tuning\n",
        "\n",
        "**⚠️ Note**: This notebook contains medical imagery. The content is intended for educational and research purposes only. Models evaluated here should not be used for actual medical diagnosis without proper validation and regulatory approval."
      ]
    },
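    {
      "cell_type": "markdown",
      "metadata": {
        "id": "metrics-primer"
      },
      "source": [
        "All of the metrics listed above can be derived from a 2x2 confusion matrix, with melanoma as the positive class. The sketch below uses made-up labels purely for illustration (these are not evaluation results); note in particular that specificity has no direct scikit-learn helper and must be derived from the matrix:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "metrics-primer-code"
      },
      "outputs": [],
      "source": [
        "# Illustrative metric computation from a 2x2 confusion matrix.\n",
        "# The labels below are made up for demonstration; melanoma (1) is the positive class.\n",
        "from sklearn.metrics import confusion_matrix\n",
        "\n",
        "y_true = [0, 0, 0, 0, 1, 1, 1, 0]\n",
        "y_pred = [0, 0, 1, 0, 1, 1, 0, 0]\n",
        "tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()\n",
        "\n",
        "accuracy = (tp + tn) / (tp + tn + fp + fn)\n",
        "precision = tp / (tp + fp)\n",
        "recall = tp / (tp + fn)  # sensitivity: melanomas correctly flagged\n",
        "specificity = tn / (tn + fp)  # benign lesions correctly cleared\n",
        "f1 = 2 * precision * recall / (precision + recall)\n",
        "\n",
        "print(f\"accuracy={accuracy:.2f}  precision={precision:.2f}  recall={recall:.2f}\")\n",
        "print(f\"specificity={specificity:.2f}  f1={f1:.2f}\")"
      ]
    },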
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "f9355893-aee2-4d2e-a97f-72f67364b9d0"
      },
      "source": [
        "## Step 1: Install dependencies\n",
        "\n",
        "Let's install all required packages for model loading, inference, and evaluation:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "5cb29a90"
      },
      "outputs": [],
      "source": [
        "# Install required packages with specific versions for compatibility\n",
        "print(\"📦 Installing required packages...\")\n",
        "!pip install transformers==4.51.3 -q\n",
        "!pip install accelerate==1.6.0 -q\n",
        "!pip install pillow==11.2.1 -q\n",
        "!pip install matplotlib==3.9.4 -q\n",
        "!pip install seaborn==0.13.2 -q\n",
        "!pip install sentencepiece==0.2.0 -q\n",
        "!pip install protobuf==3.20.3 -q\n",
        "!pip install peft==0.15.2 -q\n",
        "!pip install bitsandbytes==0.45.5 -q\n",
        "!pip install triton==3.3.0 -q\n",
        "!pip install torch==2.5.1 -q\n",
        "!pip install torchvision==0.20.1 -q\n",
        "!pip install scikit-learn==1.5.1 -q\n",
        "!pip install pandas==2.2.2 -q\n",
        "!pip install numpy==1.26.4 -q\n",
        "\n",
        "print(\"✅ Package installation complete!\")\n",
        "\n",
        "# Verify key package versions\n",
        "import transformers\n",
        "import torch\n",
        "print(f\"\\n📌 Key package versions:\")\n",
        "print(f\"  • Transformers: {transformers.__version__}\")\n",
        "print(f\"  • PyTorch: {torch.__version__}\")\n",
        "print(f\"  • CUDA available: {torch.cuda.is_available()}\")\n",
        "if torch.cuda.is_available():\n",
        "    print(f\"  • CUDA version: {torch.version.cuda}\")\n",
        "    print(f\"  • GPU: {torch.cuda.get_device_name(0)}\")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "e5f9eaee-7e08-4dc9-b584-1f4ed9009dc6"
      },
      "source": [
        "## Step 2: Set up your environment\n",
        "\n",
        "Configure authentication for both Google Cloud and Hugging Face:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "auth-setup-eval"
      },
      "outputs": [],
      "source": [
        "import os\n",
        "import sys\n",
        "\n",
        "# Check if we're running in Colab\n",
        "IN_COLAB = 'google.colab' in sys.modules\n",
        "\n",
        "if IN_COLAB:\n",
        "    from google.colab import auth\n",
        "    auth.authenticate_user()\n",
        "    print(\"✅ Authenticated via Colab\")\n",
        "else:\n",
        "    # For Vertex AI Workbench or local environments\n",
        "    print(\"ℹ️ Using Application Default Credentials\")\n",
        "    print(\"   If not authenticated, run: gcloud auth application-default login\")"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "project-setup-eval"
      },
      "outputs": [],
      "source": [
        "# Set your project ID and GCS bucket\n",
        "PROJECT_ID = \"YOUR_PROJECT_ID\"  # @param {type:\"string\"}\n",
        "GCS_BUCKET_NAME = f\"{PROJECT_ID}-melanoma-dataset\"  # derived from PROJECT_ID; edit if your bucket name differs\n",
        "\n",
        "# Set project\n",
        "!gcloud config set project {PROJECT_ID}\n",
        "\n",
        "# Verify project is set\n",
        "!echo \"Current project: $(gcloud config get-value project)\"\n",
        "\n",
        "print(f\"\\n📁 Using GCS bucket: gs://{GCS_BUCKET_NAME}\")"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "hf-token-setup"
      },
      "outputs": [],
      "source": [
        "# Set up Hugging Face authentication\n",
        "# Get your token from: https://huggingface.co/settings/tokens\n",
        "HF_TOKEN = \"YOUR_HUGGING_FACE_TOKEN\"  # @param {type:\"string\"}\n",
        "\n",
        "import huggingface_hub\n",
        "huggingface_hub.login(token=HF_TOKEN)\n",
        "print(\"✅ Logged in to Hugging Face\")\n",
        "\n",
        "# Note: Make sure you have accepted the Gemma 3 model terms of use on Hugging Face"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "e0fb739b-20f7-47ac-890c-e80875d4a92d"
      },
      "source": [
        "## Step 3: Import libraries and configure settings\n",
        "\n",
        "Import all necessary libraries and set up the evaluation environment:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "2cb05ad1"
      },
      "outputs": [],
      "source": [
        "# Standard library imports\n",
        "import os\n",
        "import json\n",
        "import time\n",
        "import re\n",
        "import subprocess\n",
        "import tempfile\n",
        "import traceback\n",
        "import collections.abc\n",
        "from collections import defaultdict\n",
        "from datetime import datetime\n",
        "\n",
        "# Data science imports\n",
        "import numpy as np\n",
        "import pandas as pd\n",
        "import matplotlib.pyplot as plt\n",
        "import seaborn as sns\n",
        "from PIL import Image\n",
        "\n",
        "# Machine learning imports\n",
        "import torch\n",
        "from transformers import (\n",
        "    AutoModelForCausalLM,\n",
        "    AutoModelForImageTextToText,\n",
        "    AutoTokenizer,\n",
        "    AutoProcessor\n",
        ")\n",
        "from peft import PeftModel\n",
        "from sklearn.metrics import (\n",
        "    confusion_matrix,\n",
        "    accuracy_score,\n",
        "    precision_score,\n",
        "    recall_score,\n",
        "    f1_score,\n",
        "    roc_curve,\n",
        "    roc_auc_score\n",
        ")\n",
        "\n",
        "# Configure CUDA environment for easier debugging\n",
        "# Note: synchronous kernel launches slow execution; unset these for timing-sensitive runs\n",
        "os.environ['CUDA_LAUNCH_BLOCKING'] = \"1\"\n",
        "os.environ['TORCH_USE_CUDA_DSA'] = \"1\"\n",
        "\n",
        "# Set device and check capabilities\n",
        "device = \"cuda\" if torch.cuda.is_available() else \"cpu\"\n",
        "print(f\"🖥️ Using device: {device}\")\n",
        "if torch.cuda.is_available():\n",
        "    print(f\"  • GPU: {torch.cuda.get_device_name(0)}\")\n",
        "    print(f\"  • Memory: {torch.cuda.get_device_properties(0).total_memory / 1024**3:.1f} GB\")\n",
        "    print(f\"  • CUDA version: {torch.version.cuda}\")\n",
        "    print(f\"  • BF16 support: {torch.cuda.is_bf16_supported()}\")\n",
        "\n",
        "# Set random seeds for reproducibility\n",
        "np.random.seed(42)\n",
        "torch.manual_seed(42)\n",
        "if torch.cuda.is_available():\n",
        "    torch.cuda.manual_seed(42)\n",
        "\n",
        "print(\"\\n✅ Environment configured successfully!\")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "bba7cba3"
      },
      "source": [
        "## Step 4: Download evaluation data\n",
        "\n",
        "Download the fine-tuned model files and test images from Google Cloud Storage:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "dde8c18e"
      },
      "outputs": [],
      "source": [
        "# Create temporary directories\n",
        "temp_dir = tempfile.mkdtemp()\n",
        "tuned_model_dir = os.path.join(temp_dir, \"tuned_model\")\n",
        "image_dir = os.path.join(temp_dir, \"images\")\n",
        "os.makedirs(tuned_model_dir, exist_ok=True)\n",
        "os.makedirs(image_dir, exist_ok=True)\n",
        "\n",
        "print(f\"📁 Created temporary directories:\")\n",
        "print(f\"  • Tuned Model: {tuned_model_dir}\")\n",
        "print(f\"  • Images: {image_dir}\")\n",
        "\n",
        "# Download the fine-tuned model files\n",
        "print(\"\\n📥 Downloading fine-tuned model files...\")\n",
        "!gsutil -m cp -r gs://{GCS_BUCKET_NAME}/tuned-models/* {tuned_model_dir} 2>/dev/null || echo \"Model files may not exist yet\"\n",
        "\n",
        "# List downloaded model files\n",
        "model_files = os.listdir(tuned_model_dir) if os.path.exists(tuned_model_dir) else []\n",
        "if model_files:\n",
        "    print(f\"\\n✅ Downloaded {len(model_files)} model files:\")\n",
        "    for file in sorted(model_files):\n",
        "        file_path = os.path.join(tuned_model_dir, file)\n",
        "        size_mb = os.path.getsize(file_path) / (1024 * 1024)\n",
        "        print(f\"  • {file} ({size_mb:.1f} MB)\")\n",
        "else:\n",
        "    print(\"⚠️ No model files found. Make sure fine-tuning has completed.\")\n",
        "\n",
        "# Download test images\n",
        "print(\"\\n📥 Downloading test images...\")\n",
        "!gsutil -m cp \"gs://{GCS_BUCKET_NAME}/processed_images/test/*.jpg\" {image_dir} 2>/dev/null || echo \"Test images may not be available\"\n",
        "\n",
        "# Count downloaded images\n",
        "image_files = [f for f in os.listdir(image_dir) if f.lower().endswith(('.jpg', '.jpeg', '.png'))]\n",
        "print(f\"\\n📸 Downloaded {len(image_files)} test images\")\n",
        "if len(image_files) > 0:\n",
        "    print(f\"  • Sample images: {', '.join(sorted(image_files)[:5])}{'...' if len(image_files) > 5 else ''}\")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "47ff98af"
      },
      "source": [
        "## Step 5: Load ground truth labels\n",
        "\n",
        "Load the ground truth labels for our test images from the ISIC dataset:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "0dd4c593"
      },
      "outputs": [],
      "source": [
        "def load_ground_truth_data():\n",
        "    \"\"\"\n",
        "    Load ground truth data from the ISIC dataset.\n",
        "    Returns a dictionary mapping image filenames to their labels (0=benign, 1=melanoma).\n",
        "    \"\"\"\n",
        "    print(\"📊 Loading ground truth data...\")\n",
        "\n",
        "    # Try to download ground truth file from GCS\n",
        "    gcs_path = f\"gs://{GCS_BUCKET_NAME}/isic-challenge-data.s3.amazonaws.com/2020/ISIC_2020_Training_GroundTruth_v2.csv\"\n",
        "    local_path = os.path.join(temp_dir, \"ISIC_2020_Training_GroundTruth_v2.csv\")\n",
        "\n",
        "    try:\n",
        "        print(f\"  • Downloading from: {gcs_path}\")\n",
        "        subprocess.run([\"gsutil\", \"cp\", gcs_path, local_path], check=True, capture_output=True)\n",
        "\n",
        "        # Load the CSV\n",
        "        gt_data = pd.read_csv(local_path)\n",
        "        print(f\"  ✅ Loaded {len(gt_data):,} ground truth labels\")\n",
        "\n",
        "        # Show data structure\n",
        "        print(\"\\n📋 Ground truth data structure:\")\n",
        "        print(f\"  • Columns: {', '.join(gt_data.columns)}\")\n",
        "        print(f\"\\n  • Sample data:\")\n",
        "        display(gt_data.head())\n",
        "\n",
        "        # Create mapping from image filename to label\n",
        "        image_to_label = dict(zip(\n",
        "            gt_data['image_name'].apply(lambda x: f\"{x}.jpg\"),\n",
        "            gt_data['target']\n",
        "        ))\n",
        "\n",
        "        # Calculate statistics\n",
        "        melanoma_count = gt_data['target'].sum()\n",
        "        benign_count = len(gt_data) - melanoma_count\n",
        "\n",
        "        print(f\"\\n📊 Dataset statistics:\")\n",
        "        print(f\"  • Total images: {len(gt_data):,}\")\n",
        "        print(f\"  • Benign: {benign_count:,} ({benign_count/len(gt_data)*100:.1f}%)\")\n",
        "        print(f\"  • Melanoma: {melanoma_count:,} ({melanoma_count/len(gt_data)*100:.1f}%)\")\n",
        "        print(f\"  • Class imbalance ratio: {benign_count/melanoma_count:.1f}:1\")\n",
        "\n",
        "        return image_to_label\n",
        "\n",
        "    except Exception as e:\n",
        "        print(f\"❌ Error loading ground truth labels: {e}\")\n",
        "        print(\"   Evaluation will proceed without ground truth labels.\")\n",
        "        return {}\n",
        "\n",
        "# Load ground truth labels\n",
        "image_to_label = load_ground_truth_data()"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "a9becf4b"
      },
      "source": [
        "## Step 6: Define model loading functions\n",
        "\n",
        "Define functions to load the base model, fine-tuned model, and MedGemma:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "f42da475"
      },
      "outputs": [],
      "source": [
        "def load_models(base_model_id=\"google/gemma-3-4b-it\", tuned_model_path=None):\n",
        "    \"\"\"\n",
        "    Load base and fine-tuned multimodal models with their tokenizers and processors.\n",
        "    Ensures the base model returned is pristine if a tuned model is also loaded.\n",
        "\n",
        "    Args:\n",
        "        base_model_id: Hugging Face model ID for the base model\n",
        "        tuned_model_path: Local path to the fine-tuned model adapter files\n",
        "\n",
        "    Returns:\n",
        "        Dictionary with base_model, tuned_model, tokenizer, and processor\n",
        "    \"\"\"\n",
        "    print(f\"🤖 Loading models...\")\n",
        "    print(f\"  • Base model ID: {base_model_id}\")\n",
        "\n",
        "    # Load tokenizer and processor\n",
        "    print(\"\\n📚 Loading tokenizer and processor...\")\n",
        "    tokenizer = AutoTokenizer.from_pretrained(base_model_id)\n",
        "    processor = AutoProcessor.from_pretrained(base_model_id)\n",
        "    print(\"  ✅ Tokenizer and processor loaded\")\n",
        "\n",
        "    # Determine device and dtype\n",
        "    device = \"cuda\" if torch.cuda.is_available() else \"cpu\"\n",
        "    model_dtype = torch.bfloat16 if device == \"cuda\" and torch.cuda.is_bf16_supported() else torch.float32\n",
        "\n",
        "    print(f\"\\n🔧 Model configuration:\")\n",
        "    print(f\"  • Device: {device}\")\n",
        "    print(f\"  • Dtype: {model_dtype}\")\n",
        "\n",
        "    # Load the pristine base model\n",
        "    print(f\"\\n📥 Loading pristine base model...\")\n",
        "    pristine_base_model = AutoModelForImageTextToText.from_pretrained(\n",
        "        base_model_id,\n",
        "        torch_dtype=model_dtype,\n",
        "        device_map=\"auto\"\n",
        "    )\n",
        "    pristine_base_model.eval()\n",
        "    print(\"  ✅ Base model loaded successfully\")\n",
        "\n",
        "    # Load fine-tuned model if path provided\n",
        "    loaded_tuned_model = None\n",
        "    if tuned_model_path and os.path.exists(tuned_model_path):\n",
        "        print(f\"\\n📥 Loading fine-tuned model from {tuned_model_path}...\")\n",
        "\n",
        "        # Check for adapter files\n",
        "        adapter_config_path = os.path.join(tuned_model_path, \"adapter_config.json\")\n",
        "        adapter_model_path = os.path.join(tuned_model_path, \"adapter_model.safetensors\")\n",
        "\n",
        "        # Check alternative adapter model file\n",
        "        if not os.path.exists(adapter_model_path):\n",
        "            adapter_model_path_bin = os.path.join(tuned_model_path, \"adapter_model.bin\")\n",
        "            if os.path.exists(adapter_model_path_bin):\n",
        "                adapter_model_path = adapter_model_path_bin\n",
        "\n",
        "        print(f\"  • Adapter config exists: {os.path.exists(adapter_config_path)}\")\n",
        "        print(f\"  • Adapter model exists: {os.path.exists(adapter_model_path)}\")\n",
        "\n",
        "        if os.path.exists(adapter_config_path) and os.path.exists(adapter_model_path):\n",
        "            try:\n",
        "                # Load a fresh base model instance for the adapter\n",
        "                print(\"  • Loading fresh base model instance for adapter...\")\n",
        "                base_model_for_adapter = AutoModelForImageTextToText.from_pretrained(\n",
        "                    base_model_id,\n",
        "                    torch_dtype=model_dtype,\n",
        "                    device_map=\"auto\"\n",
        "                )\n",
        "\n",
        "                # Apply the adapter\n",
        "                print(\"  • Applying fine-tuned adapter...\")\n",
        "                loaded_tuned_model = PeftModel.from_pretrained(\n",
        "                    base_model_for_adapter,\n",
        "                    tuned_model_path\n",
        "                )\n",
        "                loaded_tuned_model.eval()\n",
        "                print(\"  ✅ Fine-tuned model loaded successfully\")\n",
        "\n",
        "            except Exception as e:\n",
        "                print(f\"  ❌ Error loading fine-tuned model: {e}\")\n",
        "                traceback.print_exc()\n",
        "                if 'base_model_for_adapter' in locals():\n",
        "                    del base_model_for_adapter\n",
        "                    if torch.cuda.is_available():\n",
        "                        torch.cuda.empty_cache()\n",
        "        else:\n",
        "            print(\"  ⚠️ Missing adapter files, cannot load fine-tuned model\")\n",
        "\n",
        "    return {\n",
        "        \"base_model\": pristine_base_model,\n",
        "        \"tuned_model\": loaded_tuned_model,\n",
        "        \"tokenizer\": tokenizer,\n",
        "        \"processor\": processor\n",
        "    }"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "inference-functions"
      },
      "source": [
        "## Step 7: Define inference functions\n",
        "\n",
        "Define the patterns for classifying model responses and the inference function:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "fb255ee2"
      },
      "outputs": [],
      "source": [
        "# Define classification patterns\n",
        "POSITIVE_PATTERNS = [\n",
        "    r\"yes, this appears to be malignant melanoma\",\n",
        "    r\"this appears to be malignant melanoma\",\n",
        "    r\"appears to be malignant melanoma\",\n",
        "    r\"it appears to be malignant melanoma\",\n",
        "    r\"Based on.*this appears to be malignant melanoma\",\n",
        "    r\"Based on.*it appears to be malignant melanoma\"\n",
        "]\n",
        "\n",
        "NEGATIVE_PATTERNS = [\n",
        "    r\"does not appear to be malignant melanoma\",\n",
        "    r\"no, this does not appear to be malignant melanoma\",\n",
        "    r\"this does not appear to be malignant melanoma\",\n",
        "    r\"it does not appear to be malignant melanoma\"\n",
        "]\n",
        "\n",
        "def run_inference(model, tokenizer, processor, image_path, prompt, model_family=\"gemma3\"):\n",
        "    \"\"\"\n",
        "    Run inference on a single image with a given model.\n",
        "\n",
        "    Args:\n",
        "        model: The loaded model\n",
        "        tokenizer: The model's tokenizer\n",
        "        processor: The model's processor\n",
        "        image_path: Path to the image file\n",
        "        prompt: Text prompt for the model\n",
        "        model_family: Model family identifier (gemma3, medgemma)\n",
        "\n",
        "    Returns:\n",
        "        Dictionary with success status, response, classification, and timing\n",
        "    \"\"\"\n",
        "    image_filename = os.path.basename(image_path)\n",
        "\n",
        "    try:\n",
        "        # Load and prepare image\n",
        "        image = Image.open(image_path).convert(\"RGB\")\n",
        "\n",
        "        # System prompt\n",
        "        system_prompt = \"You are a dermatology assistant that helps identify potential melanoma from skin lesion images.\"\n",
        "\n",
        "        # Prepare messages in chat template format\n",
        "        messages = [\n",
        "            {\n",
        "                \"role\": \"system\",\n",
        "                \"content\": [{\"type\": \"text\", \"text\": system_prompt}]\n",
        "            },\n",
        "            {\n",
        "                \"role\": \"user\",\n",
        "                \"content\": [\n",
        "                    {\"type\": \"text\", \"text\": prompt},\n",
        "                    {\"type\": \"image\", \"image\": image}\n",
        "                ]\n",
        "            }\n",
        "        ]\n",
        "\n",
        "        # Apply chat template and tokenize\n",
        "        inputs = processor.apply_chat_template(\n",
        "            messages,\n",
        "            add_generation_prompt=True,\n",
        "            tokenize=True,\n",
        "            return_dict=True,\n",
        "            return_tensors=\"pt\"\n",
        "        )\n",
        "\n",
        "        # Verify inputs are properly formatted\n",
        "        if not isinstance(inputs, collections.abc.Mapping):\n",
        "            return {\n",
        "                \"success\": False,\n",
        "                \"error\": f\"Input preparation error: expected dictionary, got {type(inputs)}\",\n",
        "                \"is_melanoma\": None,\n",
        "                \"inference_time\": 0\n",
        "            }\n",
        "\n",
        "        # Move inputs to model device\n",
        "        inputs = inputs.to(model.device)\n",
        "\n",
        "        # Run inference\n",
        "        start_time = time.time()\n",
        "        with torch.no_grad():\n",
        "            outputs = model.generate(\n",
        "                **inputs,\n",
        "                max_new_tokens=500,\n",
        "                do_sample=False  # greedy decoding; temperature is ignored when sampling is off\n",
        "            )\n",
        "        elapsed = time.time() - start_time\n",
        "\n",
        "        # Decode response (only the generated part)\n",
        "        input_len = inputs[\"input_ids\"].shape[-1]\n",
        "        response = tokenizer.decode(outputs[0][input_len:], skip_special_tokens=True).strip()\n",
        "\n",
        "        # Classify response\n",
        "        is_melanoma = None\n",
        "        response_lower = response.lower()\n",
        "\n",
        "        # Check positive patterns\n",
        "        for pattern in POSITIVE_PATTERNS:\n",
        "            if re.search(pattern, response_lower, re.IGNORECASE):\n",
        "                is_melanoma = 1\n",
        "                break\n",
        "\n",
        "        # Check negative patterns if not already classified\n",
        "        if is_melanoma is None:\n",
        "            for pattern in NEGATIVE_PATTERNS:\n",
        "                if re.search(pattern, response_lower, re.IGNORECASE):\n",
        "                    is_melanoma = 0\n",
        "                    break\n",
        "\n",
        "        return {\n",
        "            \"success\": True,\n",
        "            \"response\": response,\n",
        "            \"is_melanoma\": is_melanoma,\n",
        "            \"inference_time\": elapsed\n",
        "        }\n",
        "\n",
        "    except Exception as e:\n",
        "        print(f\"❌ Error during inference on {image_filename}: {str(e)}\")\n",
        "        traceback.print_exc()\n",
        "        return {\n",
        "            \"success\": False,\n",
        "            \"error\": str(e),\n",
        "            \"is_melanoma\": None,\n",
        "            \"inference_time\": 0\n",
        "        }\n",
        "\n",
        "print(\"✅ Inference functions defined\")\n",
        "print(f\"\\n📋 Classification patterns:\")\n",
        "print(f\"  • Positive patterns: {len(POSITIVE_PATTERNS)}\")\n",
        "print(f\"  • Negative patterns: {len(NEGATIVE_PATTERNS)}\")"
      ]
    },
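    {
      "cell_type": "markdown",
      "metadata": {
        "id": "pattern-check-demo"
      },
      "source": [
        "As a quick sanity check, we can run the regex classifier defined above against a few hypothetical model responses (these sample strings are illustrative only, not real model outputs) and confirm they map to the expected labels:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "pattern-check-code"
      },
      "outputs": [],
      "source": [
        "# Sanity-check the response classifier on hypothetical model outputs.\n",
        "def classify_response(response):\n",
        "    \"\"\"Return 1 (melanoma), 0 (benign), or None (unclassifiable).\"\"\"\n",
        "    response_lower = response.lower()\n",
        "    for pattern in POSITIVE_PATTERNS:\n",
        "        if re.search(pattern, response_lower, re.IGNORECASE):\n",
        "            return 1\n",
        "    for pattern in NEGATIVE_PATTERNS:\n",
        "        if re.search(pattern, response_lower, re.IGNORECASE):\n",
        "            return 0\n",
        "    return None\n",
        "\n",
        "samples = {\n",
        "    \"Yes, this appears to be malignant melanoma.\": 1,\n",
        "    \"No, this does not appear to be malignant melanoma.\": 0,\n",
        "    \"I cannot determine the diagnosis from this image.\": None\n",
        "}\n",
        "for text, expected in samples.items():\n",
        "    result = classify_response(text)\n",
        "    print(f\"classified={result} expected={expected}: {text}\")"
      ]
    },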
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "5412ed1f"
      },
      "source": [
        "## Step 8: Define evaluation functions\n",
        "\n",
        "Define the main function to evaluate multiple models on the test dataset:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "XnTXlQkffDf4"
      },
      "outputs": [],
      "source": [
        "def evaluate_models_on_dataset(\n",
        "    gemma_base_model_id=\"google/gemma-3-4b-it\",\n",
        "    gemma_tuned_model_path=None,\n",
        "    medgemma_model_id_param=None,\n",
        "    test_image_dir=None,\n",
        "    num_images=0,\n",
        "    results_dir=\"evaluation_results\",\n",
        "    model_processing_order=[\"base_gemma\", \"tuned_gemma\", \"medgemma\"],\n",
        "    specific_image_files_to_process=None,\n",
        "    custom_prompts_map=None,\n",
        "    reprocess_policy=None\n",
        "):\n",
        "    \"\"\"\n",
        "    Evaluate specified models on test images.\n",
        "\n",
        "    Args:\n",
        "        gemma_base_model_id: HuggingFace ID for base Gemma model\n",
        "        gemma_tuned_model_path: Path to fine-tuned model adapter files\n",
        "        medgemma_model_id_param: HuggingFace ID for MedGemma model\n",
        "        test_image_dir: Directory containing test images\n",
        "        num_images: Maximum number of images to process (0 for all)\n",
        "        results_dir: Directory to save results\n",
        "        model_processing_order: Order in which to process models\n",
        "        specific_image_files_to_process: List of specific images to process\n",
        "        custom_prompts_map: Custom prompts for each model\n",
        "        reprocess_policy: Policy for reprocessing (None, \"skip_if_exists\", \"overwrite\")\n",
        "\n",
        "    Returns:\n",
        "        Dictionary of results for each model\n",
        "    \"\"\"\n",
        "    print(\"🚀 Starting model evaluation...\")\n",
        "    os.makedirs(results_dir, exist_ok=True)\n",
        "\n",
        "    if not test_image_dir or not os.path.exists(test_image_dir):\n",
        "        raise ValueError(f\"Test image directory '{test_image_dir}' not found\")\n",
        "\n",
        "    loaded_models_info = {}\n",
        "\n",
        "    # Load Gemma 3 models\n",
        "    if \"base_gemma\" in model_processing_order or \"tuned_gemma\" in model_processing_order:\n",
        "        if gemma_base_model_id:\n",
        "            print(f\"\\n--- Loading Gemma 3 models ---\")\n",
        "            gemma_models = load_models(gemma_base_model_id, gemma_tuned_model_path)\n",
        "\n",
        "            if gemma_models.get(\"base_model\"):\n",
        "                loaded_models_info[\"base_gemma\"] = {\n",
        "                    \"model\": gemma_models[\"base_model\"],\n",
        "                    \"tokenizer\": gemma_models[\"tokenizer\"],\n",
        "                    \"processor\": gemma_models[\"processor\"],\n",
        "                    \"family\": \"gemma3\",\n",
        "                    \"name_for_log\": \"Base Gemma 3\"\n",
        "                }\n",
        "\n",
        "            if gemma_models.get(\"tuned_model\"):\n",
        "                loaded_models_info[\"tuned_gemma\"] = {\n",
        "                    \"model\": gemma_models[\"tuned_model\"],\n",
        "                    \"tokenizer\": gemma_models[\"tokenizer\"],\n",
        "                    \"processor\": gemma_models[\"processor\"],\n",
        "                    \"family\": \"gemma3\",\n",
        "                    \"name_for_log\": \"Tuned Gemma 3\"\n",
        "                }\n",
        "\n",
        "    # Load MedGemma if requested\n",
        "    if \"medgemma\" in model_processing_order and medgemma_model_id_param:\n",
        "        print(f\"\\n--- Loading MedGemma model ---\")\n",
        "        try:\n",
        "            med_tokenizer = AutoTokenizer.from_pretrained(medgemma_model_id_param)\n",
        "            med_processor = AutoProcessor.from_pretrained(medgemma_model_id_param)\n",
        "\n",
        "            device = \"cuda\" if torch.cuda.is_available() else \"cpu\"\n",
        "            dtype = torch.bfloat16 if device == \"cuda\" and torch.cuda.is_bf16_supported() else torch.float32\n",
        "\n",
        "            med_model = AutoModelForImageTextToText.from_pretrained(\n",
        "                medgemma_model_id_param,\n",
        "                torch_dtype=dtype,\n",
        "                device_map=\"auto\"\n",
        "            )\n",
        "            med_model.eval()\n",
        "\n",
        "            loaded_models_info[\"medgemma\"] = {\n",
        "                \"model\": med_model,\n",
        "                \"tokenizer\": med_tokenizer,\n",
        "                \"processor\": med_processor,\n",
        "                \"family\": \"medgemma\",\n",
        "                \"name_for_log\": \"MedGemma\"\n",
        "            }\n",
        "            print(\"  ✅ MedGemma loaded successfully\")\n",
        "        except Exception as e:\n",
        "            print(f\"  ❌ Error loading MedGemma: {e}\")\n",
        "\n",
        "    # Determine images to process\n",
        "    if specific_image_files_to_process:\n",
        "        actual_image_files = [f for f in specific_image_files_to_process\n",
        "                             if os.path.exists(os.path.join(test_image_dir, f))]\n",
        "        print(f\"\\n📸 Processing {len(actual_image_files)} specific images\")\n",
        "    else:\n",
        "        all_images = sorted([f for f in os.listdir(test_image_dir)\n",
        "                           if f.lower().endswith(('.jpg', '.jpeg', '.png'))])\n",
        "        if num_images > 0:\n",
        "            actual_image_files = all_images[:num_images]\n",
        "        else:\n",
        "            actual_image_files = all_images\n",
        "        print(f\"\\n📸 Processing {len(actual_image_files)} images\")\n",
        "\n",
        "    # Default prompt\n",
        "    default_prompt = (\n",
        "        \"This is a skin lesion image. Does this appear to be malignant melanoma? \"\n",
        "        \"Please explain your reasoning and conclude with either \"\n",
        "        \"'Yes, this appears to be malignant melanoma.' or \"\n",
        "        \"'No, this does not appear to be malignant melanoma.'\"\n",
        "    )\n",
        "\n",
        "    # Process images for each model\n",
        "    current_run_results = defaultdict(list)\n",
        "\n",
        "    for model_key in model_processing_order:\n",
        "        if model_key not in loaded_models_info:\n",
        "            print(f\"\\n⚠️ Skipping {model_key} (not loaded)\")\n",
        "            continue\n",
        "\n",
        "        model_info = loaded_models_info[model_key]\n",
        "        model_name = model_info[\"name_for_log\"]\n",
        "\n",
        "        print(f\"\\n--- Evaluating {model_name} ---\")\n",
        "\n",
        "        # Load existing results if using skip policy\n",
        "        existing_results_map = {}\n",
        "        if specific_image_files_to_process and reprocess_policy == \"skip_if_exists\":\n",
        "            results_file = os.path.join(results_dir, f\"{model_key}_results.json\")\n",
        "            if os.path.exists(results_file):\n",
        "                try:\n",
        "                    with open(results_file, 'r') as f:\n",
        "                        existing_data = json.load(f)\n",
        "                        existing_results_map = {item['image']: item for item in existing_data}\n",
        "                    print(f\"  • Loaded {len(existing_results_map)} existing results\")\n",
        "                except Exception as e:\n",
        "                    print(f\"  ⚠️ Could not load existing results: {e}\")\n",
        "\n",
        "        # Process each image\n",
        "        for i, image_file in enumerate(actual_image_files):\n",
        "            # Skip if exists and policy says so\n",
        "            if (specific_image_files_to_process and\n",
        "                reprocess_policy == \"skip_if_exists\" and\n",
        "                image_file in existing_results_map):\n",
        "                print(f\"  • Skipping {image_file} (already processed)\")\n",
        "                current_run_results[model_key].append(existing_results_map[image_file])\n",
        "                continue\n",
        "\n",
        "            image_path = os.path.join(test_image_dir, image_file)\n",
        "            ground_truth = image_to_label.get(image_file, None)\n",
        "\n",
        "            # Get prompt\n",
        "            prompt = default_prompt\n",
        "            if custom_prompts_map and model_key in custom_prompts_map:\n",
        "                prompt = custom_prompts_map[model_key]\n",
        "\n",
        "            # Run inference\n",
        "            print(f\"  • Processing {image_file} ({i+1}/{len(actual_image_files)})...\", end='', flush=True)\n",
        "            result = run_inference(\n",
        "                model_info[\"model\"],\n",
        "                model_info[\"tokenizer\"],\n",
        "                model_info[\"processor\"],\n",
        "                image_path,\n",
        "                prompt,\n",
        "                model_family=model_info[\"family\"]\n",
        "            )\n",
        "\n",
        "            result[\"image\"] = image_file\n",
        "            if ground_truth is not None:\n",
        "                result[\"ground_truth\"] = ground_truth\n",
        "\n",
        "            current_run_results[model_key].append(result)\n",
        "\n",
        "            # Print result\n",
        "            if result[\"success\"]:\n",
        "                pred = \"Melanoma\" if result['is_melanoma'] == 1 else \"Benign\" if result['is_melanoma'] == 0 else \"Uncertain\"\n",
        "                print(f\" {pred} ({result['inference_time']:.1f}s)\")\n",
        "            else:\n",
        "                print(\" Failed\")\n",
        "\n",
        "    # Save results\n",
        "    print(f\"\\n💾 Saving results to {results_dir}...\")\n",
        "    for model_key, results in current_run_results.items():\n",
        "        if not results:\n",
        "            continue\n",
        "\n",
        "        output_file = os.path.join(results_dir, f\"{model_key}_results.json\")\n",
        "\n",
        "        # Merge with existing results if doing targeted reprocessing\n",
        "        if specific_image_files_to_process:\n",
        "            existing_map = {}\n",
        "            if os.path.exists(output_file):\n",
        "                try:\n",
        "                    with open(output_file, 'r') as f:\n",
        "                        existing_data = json.load(f)\n",
        "                        existing_map = {item['image']: item for item in existing_data}\n",
        "                except (OSError, json.JSONDecodeError) as e:\n",
        "                    print(f\"  ⚠️ Could not load existing results for merge: {e}\")\n",
        "\n",
        "            # Update with new results\n",
        "            for result in results:\n",
        "                existing_map[result['image']] = result\n",
        "\n",
        "            final_results = list(existing_map.values())\n",
        "        else:\n",
        "            final_results = results\n",
        "\n",
        "        # Save\n",
        "        with open(output_file, 'w') as f:\n",
        "            json.dump(final_results, f, indent=2)\n",
        "        print(f\"  • Saved {len(final_results)} results for {model_key}\")\n",
        "\n",
        "    return dict(current_run_results)\n",
        "\n",
        "print(\"✅ Evaluation functions defined\")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "76243436"
      },
      "source": [
        "## Step 9: Run the evaluation\n",
        "\n",
        "Now let's run the evaluation on our test images. You can customize which models to evaluate and how many images to process:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "5af0eee2"
      },
      "outputs": [],
      "source": [
        "# Configuration for evaluation\n",
        "EVALUATE_MEDGEMMA = True  # @param {type:\"boolean\"}\n",
        "# Colab forms require a simple assignment on the @param line\n",
        "MEDGEMMA_MODEL_ID = \"google/medgemma-4b-it\"  # @param {type:\"string\"}\n",
        "if not EVALUATE_MEDGEMMA:\n",
        "    MEDGEMMA_MODEL_ID = None\n",
        "NUM_IMAGES_TO_EVALUATE = 100  # @param {type:\"integer\"}\n",
        "# Set to 0 to evaluate all images\n",
        "\n",
        "# Define model processing order\n",
        "model_order = [\"base_gemma\", \"tuned_gemma\"]\n",
        "if EVALUATE_MEDGEMMA:\n",
        "    model_order.append(\"medgemma\")\n",
        "\n",
        "print(f\"📋 Evaluation configuration:\")\n",
        "print(f\"  • Models to evaluate: {', '.join(model_order)}\")\n",
        "print(f\"  • Images to process: {'All' if NUM_IMAGES_TO_EVALUATE == 0 else NUM_IMAGES_TO_EVALUATE}\")\n",
        "print(f\"  • MedGemma ID: {MEDGEMMA_MODEL_ID if EVALUATE_MEDGEMMA else 'Not evaluating'}\")\n",
        "\n",
        "# Run evaluation\n",
        "print(\"\\n\" + \"=\"*60)\n",
        "print(\"🚀 STARTING EVALUATION\")\n",
        "print(\"=\"*60)\n",
        "\n",
        "evaluation_results = evaluate_models_on_dataset(\n",
        "    gemma_base_model_id=\"google/gemma-3-4b-it\",\n",
        "    gemma_tuned_model_path=tuned_model_dir,\n",
        "    medgemma_model_id_param=MEDGEMMA_MODEL_ID,\n",
        "    test_image_dir=image_dir,\n",
        "    num_images=NUM_IMAGES_TO_EVALUATE,\n",
        "    model_processing_order=model_order\n",
        ")\n",
        "\n",
        "# Summary\n",
        "print(\"\\n\" + \"=\"*60)\n",
        "print(\"✅ EVALUATION COMPLETE\")\n",
        "print(\"=\"*60)\n",
        "print(\"\\n📊 Results summary:\")\n",
        "for model_key, results in evaluation_results.items():\n",
        "    successful = sum(1 for r in results if r.get('success', False))\n",
        "    print(f\"  • {model_key}: {successful}/{len(results)} successful inferences\")"
      ]
    },
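    {
      "cell_type": "markdown",
      "metadata": {
        "id": "reprocess-example-md"
      },
      "source": [
        "The evaluation function also supports targeted re-runs via `specific_image_files_to_process` and `reprocess_policy`, which Step 9 does not exercise. The optional sketch below re-runs only the tuned model on the first two test images, reusing any results already saved on disk (the image selection here is purely illustrative; substitute whichever files you need to redo):"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "reprocess-example-code"
      },
      "outputs": [],
      "source": [
        "# Optional: targeted re-run. Only \"tuned_gemma\" is loaded, and images that\n",
        "# already have saved results are skipped rather than recomputed.\n",
        "images_to_retry = sorted(\n",
        "    f for f in os.listdir(image_dir)\n",
        "    if f.lower().endswith(('.jpg', '.jpeg', '.png'))\n",
        ")[:2]\n",
        "\n",
        "retry_results = evaluate_models_on_dataset(\n",
        "    gemma_base_model_id=\"google/gemma-3-4b-it\",\n",
        "    gemma_tuned_model_path=tuned_model_dir,\n",
        "    test_image_dir=image_dir,\n",
        "    model_processing_order=[\"tuned_gemma\"],\n",
        "    specific_image_files_to_process=images_to_retry,\n",
        "    reprocess_policy=\"skip_if_exists\"\n",
        ")"
      ]
    },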
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "0e52f5d2"
      },
      "source": [
        "## Step 10: Post-process results\n",
        "\n",
        "Process the results to fix any null classifications using our defined patterns:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "c3df452e"
      },
      "outputs": [],
      "source": [
        "def process_results(results_file):\n",
        "    \"\"\"\n",
        "    Process results file to fix any null classifications using pattern matching.\n",
        "\n",
        "    Args:\n",
        "        results_file: Path to the results JSON file\n",
        "\n",
        "    Returns:\n",
        "        List of processed results\n",
        "    \"\"\"\n",
        "    print(f\"\\n📄 Processing results from {os.path.basename(results_file)}\")\n",
        "\n",
        "    if not os.path.exists(results_file):\n",
        "        print(f\"  ❌ File not found\")\n",
        "        return []\n",
        "\n",
        "    try:\n",
        "        with open(results_file, 'r') as f:\n",
        "            results = json.load(f)\n",
        "    except Exception as e:\n",
        "        print(f\"  ❌ Error reading file: {e}\")\n",
        "        return []\n",
        "\n",
        "    print(f\"  • Loaded {len(results)} results\")\n",
        "\n",
        "    # Fix null classifications\n",
        "    fixed_count = 0\n",
        "    for item in results:\n",
        "        if item.get('is_melanoma') is None and 'response' in item:\n",
        "            response_lower = item['response'].lower()\n",
        "\n",
        "            # Check patterns\n",
        "            for pattern in POSITIVE_PATTERNS:\n",
        "                if re.search(pattern, response_lower, re.IGNORECASE):\n",
        "                    item['is_melanoma'] = 1\n",
        "                    fixed_count += 1\n",
        "                    break\n",
        "\n",
        "            if item.get('is_melanoma') is None:\n",
        "                for pattern in NEGATIVE_PATTERNS:\n",
        "                    if re.search(pattern, response_lower, re.IGNORECASE):\n",
        "                        item['is_melanoma'] = 0\n",
        "                        fixed_count += 1\n",
        "                        break\n",
        "\n",
        "    # Count classifications\n",
        "    melanoma_count = sum(1 for item in results if item.get('is_melanoma') == 1)\n",
        "    benign_count = sum(1 for item in results if item.get('is_melanoma') == 0)\n",
        "    uncertain_count = sum(1 for item in results if item.get('is_melanoma') is None)\n",
        "\n",
        "    print(f\"  • Fixed {fixed_count} null classifications\")\n",
        "    print(f\"  • Final predictions: {melanoma_count} melanoma, {benign_count} benign, {uncertain_count} uncertain\")\n",
        "\n",
        "    # Save processed results\n",
        "    processed_file = results_file.replace('.json', '_processed.json')\n",
        "    try:\n",
        "        with open(processed_file, 'w') as f:\n",
        "            json.dump(results, f, indent=2)\n",
        "        print(f\"  ✅ Saved processed results to {os.path.basename(processed_file)}\")\n",
        "    except Exception as e:\n",
        "        print(f\"  ❌ Error saving processed results: {e}\")\n",
        "\n",
        "    return results\n",
        "\n",
        "# Process all results\n",
        "print(\"\\n\" + \"=\"*60)\n",
        "print(\"📊 POST-PROCESSING RESULTS\")\n",
        "print(\"=\"*60)\n",
        "\n",
        "results_dir = \"evaluation_results\"\n",
        "model_keys = [\"base_gemma\", \"tuned_gemma\", \"medgemma\"]\n",
        "all_processed_results = {}\n",
        "\n",
        "for model_key in model_keys:\n",
        "    results_file = os.path.join(results_dir, f\"{model_key}_results.json\")\n",
        "    if os.path.exists(results_file):\n",
        "        processed_data = process_results(results_file)\n",
        "        all_processed_results[model_key] = processed_data\n",
        "    else:\n",
        "        all_processed_results[model_key] = []\n",
        "\n",
        "# Extract processed results for each model\n",
        "base_processed = all_processed_results.get(\"base_gemma\", [])\n",
        "tuned_processed = all_processed_results.get(\"tuned_gemma\", [])\n",
        "medgemma_processed = all_processed_results.get(\"medgemma\", [])\n",
        "\n",
        "print(\"\\n✅ Post-processing complete\")"
      ]
    },
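    {
      "cell_type": "markdown",
      "metadata": {
        "id": "pattern-sanity-check-md"
      },
      "source": [
        "As a quick sanity check before trusting the processed files, you can run the same `POSITIVE_PATTERNS` / `NEGATIVE_PATTERNS` matching against a few hand-written responses (the strings below are synthetic examples, not model output) and confirm they classify as expected:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "pattern-sanity-check-code"
      },
      "outputs": [],
      "source": [
        "# Sanity-check the regex patterns on synthetic responses. This mirrors the\n",
        "# logic in process_results(): positive patterns are checked first.\n",
        "sample_responses = [\n",
        "    \"Yes, this appears to be malignant melanoma.\",\n",
        "    \"No, this does not appear to be malignant melanoma.\",\n",
        "    \"The lesion is ambiguous and I cannot say either way.\",\n",
        "]\n",
        "\n",
        "def classify_response(text):\n",
        "    text_lower = text.lower()\n",
        "    for pattern in POSITIVE_PATTERNS:\n",
        "        if re.search(pattern, text_lower):\n",
        "            return 1\n",
        "    for pattern in NEGATIVE_PATTERNS:\n",
        "        if re.search(pattern, text_lower):\n",
        "            return 0\n",
        "    return None  # uncertain: no pattern matched\n",
        "\n",
        "for resp in sample_responses:\n",
        "    print(f\"{classify_response(resp)} ← {resp}\")"
      ]
    },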
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "vwx_MPWycbPo"
      },
      "source": [
        "## Step 11: Calculate performance metrics\n",
        "\n",
        "Calculate comprehensive performance metrics for all models:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "Oxif-uuBX-HE"
      },
      "outputs": [],
      "source": [
        "def calculate_metrics(results):\n",
        "    \"\"\"\n",
        "    Calculate comprehensive performance metrics for evaluation results.\n",
        "\n",
        "    Args:\n",
        "        results: List of result dictionaries with predictions and ground truth\n",
        "\n",
        "    Returns:\n",
        "        Dictionary containing all calculated metrics\n",
        "    \"\"\"\n",
        "    # Filter for valid results with both prediction and ground truth\n",
        "    valid_items = [\n",
        "        item for item in results\n",
        "        if 'ground_truth' in item and item['ground_truth'] is not None\n",
        "        and 'is_melanoma' in item and item['is_melanoma'] is not None\n",
        "    ]\n",
        "\n",
        "    if not valid_items:\n",
        "        print(\"  ⚠️ No valid items with ground truth and predictions\")\n",
        "        return {}\n",
        "\n",
        "    # Extract true labels and predictions\n",
        "    y_true = [item['ground_truth'] for item in valid_items]\n",
        "    y_pred = [item['is_melanoma'] for item in valid_items]\n",
        "\n",
        "    # Confusion matrix; passing labels=[0, 1] guarantees a full 2x2 matrix\n",
        "    # (and a 4-value ravel) even when only one class appears in the data.\n",
        "    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()\n",
        "\n",
        "    # Calculate metrics\n",
        "    accuracy = accuracy_score(y_true, y_pred)\n",
        "    precision = precision_score(y_true, y_pred, pos_label=1, zero_division=0)\n",
        "    recall = recall_score(y_true, y_pred, pos_label=1, zero_division=0)\n",
        "    f1 = f1_score(y_true, y_pred, pos_label=1, zero_division=0)\n",
        "\n",
        "    # Calculate specificity (true negative rate)\n",
        "    specificity = tn / (tn + fp) if (tn + fp) > 0 else 0\n",
        "\n",
        "    # Calculate balanced accuracy\n",
        "    balanced_accuracy = (recall + specificity) / 2\n",
        "\n",
        "    # Calculate additional metrics\n",
        "    total_positives = tp + fn\n",
        "    total_negatives = tn + fp\n",
        "    prevalence = total_positives / len(valid_items) if len(valid_items) > 0 else 0\n",
        "\n",
        "    metrics = {\n",
        "        \"accuracy\": accuracy,\n",
        "        \"precision\": precision,\n",
        "        \"recall\": recall,\n",
        "        \"specificity\": specificity,\n",
        "        \"f1\": f1,\n",
        "        \"balanced_accuracy\": balanced_accuracy,\n",
        "        \"tn\": int(tn),\n",
        "        \"fp\": int(fp),\n",
        "        \"fn\": int(fn),\n",
        "        \"tp\": int(tp),\n",
        "        \"total_samples\": len(valid_items),\n",
        "        \"total_positives\": int(total_positives),\n",
        "        \"total_negatives\": int(total_negatives),\n",
        "        \"prevalence\": prevalence,\n",
        "        \"y_true_list_for_roc\": y_true,\n",
        "        \"y_pred_list_for_roc\": y_pred\n",
        "    }\n",
        "\n",
        "    return metrics\n",
        "\n",
        "def print_model_metrics(model_name, metrics_dict):\n",
        "    \"\"\"\n",
        "    Print formatted metrics for a model.\n",
        "    \"\"\"\n",
        "    if not metrics_dict:\n",
        "        print(f\"\\n{model_name}: No metrics available\")\n",
        "        return\n",
        "\n",
        "    print(f\"\\n📊 {model_name} Performance\")\n",
        "    print(f\"   Evaluated on {metrics_dict.get('total_samples', 0)} samples\")\n",
        "    print(f\"   Class distribution: {metrics_dict.get('total_positives', 0)} positive, {metrics_dict.get('total_negatives', 0)} negative\")\n",
        "    print(\"\\n   Metrics:\")\n",
        "    print(f\"   • Accuracy:           {metrics_dict.get('accuracy', 0):.4f}\")\n",
        "    print(f\"   • Precision:          {metrics_dict.get('precision', 0):.4f}\")\n",
        "    print(f\"   • Recall (Sensitivity): {metrics_dict.get('recall', 0):.4f}\")\n",
        "    print(f\"   • Specificity:        {metrics_dict.get('specificity', 0):.4f}\")\n",
        "    print(f\"   • F1 Score:           {metrics_dict.get('f1', 0):.4f}\")\n",
        "    print(f\"   • Balanced Accuracy:  {metrics_dict.get('balanced_accuracy', 0):.4f}\")\n",
        "\n",
        "    print(\"\\n   Confusion Matrix:\")\n",
        "    print(f\"   • True Negatives:  {metrics_dict.get('tn', 0)}\")\n",
        "    print(f\"   • False Positives: {metrics_dict.get('fp', 0)}\")\n",
        "    print(f\"   • False Negatives: {metrics_dict.get('fn', 0)}\")\n",
        "    print(f\"   • True Positives:  {metrics_dict.get('tp', 0)}\")\n",
        "\n",
        "# Calculate metrics for all models\n",
        "print(\"\\n\" + \"=\"*60)\n",
        "print(\"📈 CALCULATING PERFORMANCE METRICS\")\n",
        "print(\"=\"*60)\n",
        "\n",
        "base_gemma_metrics = calculate_metrics(base_processed)\n",
        "tuned_gemma_metrics = calculate_metrics(tuned_processed)\n",
        "medgemma_metrics = calculate_metrics(medgemma_processed)\n",
        "\n",
        "# Print metrics for each model\n",
        "print_model_metrics(\"Base Gemma 3\", base_gemma_metrics)\n",
        "print_model_metrics(\"Fine-tuned Gemma 3\", tuned_gemma_metrics)\n",
        "print_model_metrics(\"MedGemma\", medgemma_metrics)\n",
        "\n",
        "print(\"\\n\" + \"=\"*60)"
      ]
    },
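    {
      "cell_type": "markdown",
      "metadata": {
        "id": "metrics-verification-md"
      },
      "source": [
        "To verify `calculate_metrics` independently of the model outputs, you can feed it a small synthetic result list where the confusion matrix is known by construction (image names below are placeholders):"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "metrics-verification-code"
      },
      "outputs": [],
      "source": [
        "# Synthetic results: 2 true positives, 1 false negative,\n",
        "# 1 false positive, 2 true negatives.\n",
        "synthetic = [\n",
        "    {\"image\": \"a.jpg\", \"ground_truth\": 1, \"is_melanoma\": 1},\n",
        "    {\"image\": \"b.jpg\", \"ground_truth\": 1, \"is_melanoma\": 1},\n",
        "    {\"image\": \"c.jpg\", \"ground_truth\": 1, \"is_melanoma\": 0},\n",
        "    {\"image\": \"d.jpg\", \"ground_truth\": 0, \"is_melanoma\": 1},\n",
        "    {\"image\": \"e.jpg\", \"ground_truth\": 0, \"is_melanoma\": 0},\n",
        "    {\"image\": \"f.jpg\", \"ground_truth\": 0, \"is_melanoma\": 0},\n",
        "]\n",
        "\n",
        "m = calculate_metrics(synthetic)\n",
        "assert (m[\"tp\"], m[\"fn\"], m[\"fp\"], m[\"tn\"]) == (2, 1, 1, 2)\n",
        "print(f\"accuracy={m['accuracy']:.3f}, recall={m['recall']:.3f}, \"\n",
        "      f\"specificity={m['specificity']:.3f}\")"
      ]
    },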
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "ohiKgIsPc8GA"
      },
      "source": [
        "## Step 12: Visualize performance comparisons\n",
        "\n",
        "Create comprehensive visualizations to compare model performance:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "EWPSsQZ8cNo7"
      },
      "outputs": [],
      "source": [
        "def visualize_performance_comparison(metrics_map):\n",
        "    \"\"\"\n",
        "    Create comprehensive visualizations comparing model performance.\n",
        "    \"\"\"\n",
        "    if not metrics_map or not any(metrics_map.values()):\n",
        "        print(\"⚠️ No valid metrics to visualize\")\n",
        "        return\n",
        "\n",
        "    # Filter out empty metrics\n",
        "    valid_metrics_map = {name: m for name, m in metrics_map.items() if m}\n",
        "    if not valid_metrics_map:\n",
        "        print(\"⚠️ All metrics are empty\")\n",
        "        return\n",
        "\n",
        "    model_names = list(valid_metrics_map.keys())\n",
        "    n_models = len(model_names)\n",
        "\n",
        "    # Color schemes\n",
        "    bar_colors = ['#3498db', '#e74c3c', '#2ecc71', '#f39c12', '#9b59b6']\n",
        "\n",
        "    # Set style\n",
        "    plt.style.use('seaborn-v0_8-whitegrid')\n",
        "\n",
        "    # 1. Performance Metrics Comparison\n",
        "    print(\"\\n📊 Creating performance comparison chart...\")\n",
        "    metrics_to_plot = ['accuracy', 'precision', 'recall', 'specificity', 'f1', 'balanced_accuracy']\n",
        "    metric_labels = ['Accuracy', 'Precision', 'Recall', 'Specificity', 'F1 Score', 'Balanced\\nAccuracy']\n",
        "\n",
        "    fig_width = max(12, len(metrics_to_plot) * 2.5)\n",
        "    plt.figure(figsize=(fig_width, 8))\n",
        "\n",
        "    x = np.arange(len(metrics_to_plot))\n",
        "    bar_width = 0.8 / n_models\n",
        "\n",
        "    # Plot bars for each model\n",
        "    for i, model_name in enumerate(model_names):\n",
        "        model_metrics = valid_metrics_map[model_name]\n",
        "        values = [model_metrics.get(m, 0) for m in metrics_to_plot]\n",
        "\n",
        "        offset = (i - (n_models - 1) / 2) * bar_width\n",
        "        bars = plt.bar(x + offset, values, bar_width,\n",
        "                       label=model_name,\n",
        "                       color=bar_colors[i % len(bar_colors)],\n",
        "                       edgecolor='black',\n",
        "                       linewidth=0.7)\n",
        "\n",
        "        # Add value labels on bars\n",
        "        for bar in bars:\n",
        "            height = bar.get_height()\n",
        "            plt.text(bar.get_x() + bar.get_width()/2., height + 0.01,\n",
        "                    f'{height:.3f}',\n",
        "                    ha='center', va='bottom', fontsize=10, fontweight='bold')\n",
        "\n",
        "    # Add percentage improvements for 2-model comparison\n",
        "    if n_models == 2 and 'Base Gemma 3' in model_names:\n",
        "        base_idx = model_names.index('Base Gemma 3')\n",
        "        other_idx = 1 - base_idx\n",
        "        base_metrics = valid_metrics_map[model_names[base_idx]]\n",
        "        other_metrics = valid_metrics_map[model_names[other_idx]]\n",
        "\n",
        "        for i, metric in enumerate(metrics_to_plot):\n",
        "            base_val = base_metrics.get(metric, 0)\n",
        "            other_val = other_metrics.get(metric, 0)\n",
        "\n",
        "            if base_val > 0:\n",
        "                pct_change = ((other_val - base_val) / base_val) * 100\n",
        "                color = 'green' if pct_change > 0 else 'red'\n",
        "                symbol = '↑' if pct_change > 0 else '↓'\n",
        "\n",
        "                plt.text(x[i], max(base_val, other_val) + 0.08,\n",
        "                        f'{symbol}{abs(pct_change):.1f}%',\n",
        "                        ha='center', va='bottom',\n",
        "                        color=color, fontsize=11, fontweight='bold')\n",
        "\n",
        "    plt.xlabel('Performance Metric', fontsize=14, fontweight='bold')\n",
        "    plt.ylabel('Score', fontsize=14, fontweight='bold')\n",
        "    plt.title('Model Performance Comparison', fontsize=16, fontweight='bold', pad=20)\n",
        "    plt.xticks(x, metric_labels, fontsize=12)\n",
        "    plt.yticks(fontsize=11)\n",
        "    plt.ylim(0, 1.3)\n",
        "    plt.legend(loc='upper center', bbox_to_anchor=(0.5, -0.15),\n",
        "              ncol=min(n_models, 3), fontsize=12, frameon=True, fancybox=True)\n",
        "    plt.grid(True, linestyle=':', alpha=0.6)\n",
        "    plt.tight_layout()\n",
        "\n",
        "    os.makedirs(results_dir, exist_ok=True)\n",
        "    plt.savefig(os.path.join(results_dir, 'performance_comparison.png'), dpi=300, bbox_inches='tight')\n",
        "    plt.show()\n",
        "\n",
        "    # 2. Confusion Matrices\n",
        "    print(\"\\n📊 Creating confusion matrices...\")\n",
        "    if n_models == 1:\n",
        "        fig, axes = plt.subplots(1, 1, figsize=(6, 5))\n",
        "        axes = [axes]\n",
        "    elif n_models == 2:\n",
        "        fig, axes = plt.subplots(1, 2, figsize=(12, 5))\n",
        "    elif n_models == 3:\n",
        "        fig, axes = plt.subplots(1, 3, figsize=(18, 5))\n",
        "    else:\n",
        "        fig, axes = plt.subplots(2, 2, figsize=(12, 10))\n",
        "        axes = axes.ravel()\n",
        "\n",
        "    fig.suptitle('Confusion Matrices', fontsize=16, fontweight='bold', y=1.02)\n",
        "\n",
        "    cmap_names = ['Blues', 'Oranges', 'Greens', 'Reds']\n",
        "\n",
        "    for idx, model_name in enumerate(model_names):\n",
        "        if idx >= len(axes):\n",
        "            break\n",
        "\n",
        "        ax = axes[idx]\n",
        "        metrics = valid_metrics_map[model_name]\n",
        "\n",
        "        # Create confusion matrix\n",
        "        cm_values = np.array([\n",
        "            [metrics.get('tn', 0), metrics.get('fp', 0)],\n",
        "            [metrics.get('fn', 0), metrics.get('tp', 0)]\n",
        "        ])\n",
        "\n",
        "        # Plot heatmap\n",
        "        sns.heatmap(cm_values, annot=True, fmt='d',\n",
        "                   cmap=cmap_names[idx % len(cmap_names)],\n",
        "                   cbar=True, ax=ax,\n",
        "                   annot_kws={'size': 14, 'weight': 'bold'},\n",
        "                   xticklabels=['Benign', 'Melanoma'],\n",
        "                   yticklabels=['Benign', 'Melanoma'])\n",
        "\n",
        "        ax.set_xlabel('Predicted', fontsize=12, fontweight='bold')\n",
        "        ax.set_ylabel('Actual', fontsize=12, fontweight='bold')\n",
        "        ax.set_title(f'{model_name}\\n({metrics.get(\"total_samples\", 0)} samples)',\n",
        "                    fontsize=14, fontweight='bold')\n",
        "\n",
        "    # Remove extra subplots if any\n",
        "    for i in range(n_models, len(axes)):\n",
        "        fig.delaxes(axes[i])\n",
        "\n",
        "    plt.tight_layout()\n",
        "    plt.savefig(os.path.join(results_dir, 'confusion_matrices.png'), dpi=300, bbox_inches='tight')\n",
        "    plt.show()\n",
        "\n",
        "def visualize_roc_curves(metrics_map):\n",
        "    \"\"\"\n",
        "    Generate ROC curves for models with binary predictions.\n",
        "\n",
        "    Note: with hard 0/1 labels (no probability scores), each \"curve\" reduces\n",
        "    to a single operating point, and its AUC equals the balanced accuracy.\n",
        "    \"\"\"\n",
        "    print(\"\\n📊 Creating ROC curves...\")\n",
        "\n",
        "    valid_roc_data = {}\n",
        "    for model_name, metrics in metrics_map.items():\n",
        "        if (metrics and\n",
        "            'y_true_list_for_roc' in metrics and\n",
        "            'y_pred_list_for_roc' in metrics and\n",
        "            len(metrics['y_true_list_for_roc']) > 0):\n",
        "            valid_roc_data[model_name] = (\n",
        "                metrics['y_true_list_for_roc'],\n",
        "                metrics['y_pred_list_for_roc']\n",
        "            )\n",
        "\n",
        "    if not valid_roc_data:\n",
        "        print(\"  ⚠️ No valid data for ROC curves\")\n",
        "        return\n",
        "\n",
        "    plt.figure(figsize=(10, 8))\n",
        "    colors = ['#3498db', '#e74c3c', '#2ecc71', '#f39c12']\n",
        "\n",
        "    for i, (model_name, (y_true, y_pred)) in enumerate(valid_roc_data.items()):\n",
        "        y_true_np = np.array(y_true)\n",
        "        y_pred_np = np.array(y_pred)\n",
        "\n",
        "        if len(np.unique(y_true_np)) < 2:\n",
        "            print(f\"  ⚠️ Skipping {model_name}: only one class in ground truth\")\n",
        "            continue\n",
        "\n",
        "        # Calculate ROC curve\n",
        "        fpr, tpr, _ = roc_curve(y_true_np, y_pred_np)\n",
        "        auc_score = roc_auc_score(y_true_np, y_pred_np)\n",
        "\n",
        "        # Plot\n",
        "        plt.plot(fpr, tpr,\n",
        "                label=f'{model_name} (AUC = {auc_score:.3f})',\n",
        "                color=colors[i % len(colors)],\n",
        "                linewidth=2.5,\n",
        "                marker='o' if len(fpr) < 10 else None,\n",
        "                markersize=8 if len(fpr) < 10 else 0)\n",
        "\n",
        "    # Add diagonal reference line\n",
        "    plt.plot([0, 1], [0, 1], 'k--', linewidth=2, label='Random (AUC = 0.500)')\n",
        "\n",
        "    plt.xlim([-0.01, 1.01])\n",
        "    plt.ylim([-0.01, 1.01])\n",
        "    plt.xlabel('False Positive Rate', fontsize=14, fontweight='bold')\n",
        "    plt.ylabel('True Positive Rate', fontsize=14, fontweight='bold')\n",
        "    plt.title('Receiver Operating Characteristic (ROC) Curves', fontsize=16, fontweight='bold')\n",
        "    plt.legend(loc='lower right', fontsize=12, frameon=True, fancybox=True)\n",
        "    plt.grid(True, linestyle=':', alpha=0.7)\n",
        "    plt.tight_layout()\n",
        "\n",
        "    plt.savefig(os.path.join(results_dir, 'roc_curves.png'), dpi=300, bbox_inches='tight')\n",
        "    plt.show()\n",
        "\n",
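        "# Sanity check with tiny hypothetical labels (not data from this run):\n",
        "# with hard 0/1 predictions the ROC has a single operating point, so the\n",
        "# AUC equals the balanced accuracy, (TPR + TNR) / 2.\n",
        "_yt = np.array([0, 0, 0, 1, 1, 1])\n",
        "_yp = np.array([0, 1, 0, 1, 1, 0])  # TP=2, FN=1, TN=2, FP=1\n",
        "assert abs(roc_auc_score(_yt, _yp) - ((2/3) + (2/3)) / 2) < 1e-9\n",
        "\n",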
        "print(\"✅ Visualization functions defined\")"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "WYQRVA80c8-r"
      },
      "outputs": [],
      "source": [
        "# Create visualizations\n",
        "print(\"\\n\" + \"=\"*60)\n",
        "print(\"📊 CREATING VISUALIZATIONS\")\n",
        "print(\"=\"*60)\n",
        "\n",
        "# Prepare metrics for visualization\n",
        "metrics_for_viz = {}\n",
        "if base_gemma_metrics:\n",
        "    metrics_for_viz[\"Base Gemma 3\"] = base_gemma_metrics\n",
        "if tuned_gemma_metrics:\n",
        "    metrics_for_viz[\"Fine-tuned Gemma 3\"] = tuned_gemma_metrics\n",
        "if medgemma_metrics:\n",
        "    metrics_for_viz[\"MedGemma\"] = medgemma_metrics\n",
        "\n",
        "if metrics_for_viz:\n",
        "    visualize_performance_comparison(metrics_for_viz)\n",
        "    visualize_roc_curves(metrics_for_viz)\n",
        "    print(\"\\n✅ Visualizations saved to evaluation_results/\")\n",
        "else:\n",
        "    print(\"\\n⚠️ No valid metrics available for visualization\")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "h-yWpxhSdzZl"
      },
      "source": [
        "## Step 13: Calculate and visualize improvements\n",
        "\n",
        "Quantify the improvements achieved through fine-tuning:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "K3d5_ZFydDMc"
      },
      "outputs": [],
      "source": [
        "def calculate_improvements(reference_metrics, current_metrics):\n",
        "    \"\"\"\n",
        "    Calculate improvement percentages between two sets of metrics.\n",
        "    \"\"\"\n",
        "    if not reference_metrics or not current_metrics:\n",
        "        return {}\n",
        "\n",
        "    improvements = {}\n",
        "    metrics_to_compare = ['accuracy', 'precision', 'recall', 'specificity', 'f1', 'balanced_accuracy']\n",
        "\n",
        "    for metric in metrics_to_compare:\n",
        "        if metric in reference_metrics and metric in current_metrics:\n",
        "            ref_val = reference_metrics[metric]\n",
        "            curr_val = current_metrics[metric]\n",
        "\n",
        "            if ref_val != 0:\n",
        "                pct_improvement = ((curr_val - ref_val) / abs(ref_val)) * 100\n",
        "            elif curr_val > 0:\n",
        "                pct_improvement = float('inf')\n",
        "            else:\n",
        "                pct_improvement = 0\n",
        "\n",
        "            improvements[metric] = {\n",
        "                \"reference_value\": ref_val,\n",
        "                \"current_value\": curr_val,\n",
        "                \"absolute_improvement\": curr_val - ref_val,\n",
        "                \"percentage_improvement\": pct_improvement\n",
        "            }\n",
        "\n",
        "    return improvements\n",
        "\n",
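        "# Quick sanity check with hypothetical numbers (not from this run): the\n",
        "# relative formula above, (curr - ref) / |ref| * 100, is inflated when the\n",
        "# reference is near zero, e.g. accuracy 0.05 -> 0.60 is +0.55 absolute but +1100%.\n",
        "_ref, _curr = 0.05, 0.60\n",
        "assert abs((_curr - _ref) / abs(_ref) * 100 - 1100.0) < 1e-6\n",
        "\n",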
        "def print_improvement_summary(improvements, ref_name, curr_name):\n",
        "    \"\"\"\n",
        "    Print a formatted summary of improvements.\n",
        "    \"\"\"\n",
        "    if not improvements:\n",
        "        print(f\"\\n⚠️ No improvement data for {curr_name} vs {ref_name}\")\n",
        "        return\n",
        "\n",
        "    print(f\"\\n📈 Performance Improvements: {curr_name} vs {ref_name}\")\n",
        "    print(\"=\" * 70)\n",
        "\n",
        "    # Find best and worst improvements (inf sorts naturally as the largest value,\n",
        "    # so an infinite improvement is never mislabeled as the least improvement)\n",
        "    best_metric = max(improvements.items(), key=lambda x: x[1]['percentage_improvement'])\n",
        "    worst_metric = min(improvements.items(), key=lambda x: x[1]['percentage_improvement'])\n",
        "\n",
        "    best_pct = best_metric[1]['percentage_improvement']\n",
        "    worst_pct = worst_metric[1]['percentage_improvement']\n",
        "    best_label = '∞' if best_pct == float('inf') else f'{best_pct:.1f}'\n",
        "    worst_label = '∞' if worst_pct == float('inf') else f'{worst_pct:.1f}'\n",
        "\n",
        "    print(f\"\\n📊 Summary:\")\n",
        "    print(f\"   • Best improvement: {best_metric[0]} ({best_label}%)\")\n",
        "    print(f\"   • Least improvement: {worst_metric[0]} ({worst_label}%)\")\n",
        "\n",
        "    print(f\"\\n📋 Detailed improvements:\")\n",
        "    for metric, values in improvements.items():\n",
        "        print(f\"\\n   {metric.replace('_', ' ').title()}:\")\n",
        "        print(f\"     • {ref_name}: {values['reference_value']:.4f}\")\n",
        "        print(f\"     • {curr_name}: {values['current_value']:.4f}\")\n",
        "        print(f\"     • Change: {values['absolute_improvement']:+.4f}\", end=\"\")\n",
        "\n",
        "        if values['percentage_improvement'] == float('inf'):\n",
        "            print(f\" (∞% - from zero to positive)\")\n",
        "        else:\n",
        "            print(f\" ({values['percentage_improvement']:+.1f}%)\")\n",
        "\n",
        "def visualize_improvements(improvements, ref_name, curr_name):\n",
        "    \"\"\"\n",
        "    Create a bar chart showing percentage improvements.\n",
        "    \"\"\"\n",
        "    if not improvements:\n",
        "        return\n",
        "\n",
        "    metrics = list(improvements.keys())\n",
        "    percentages = []\n",
        "\n",
        "    # Handle infinite improvements for visualization\n",
        "    max_finite = 0\n",
        "    for m in metrics:\n",
        "        val = improvements[m]['percentage_improvement']\n",
        "        if val != float('inf') and val != float('-inf'):\n",
        "            max_finite = max(max_finite, abs(val))\n",
        "\n",
        "    cap_value = max(max_finite * 1.2, 100) if max_finite > 0 else 100\n",
        "\n",
        "    for m in metrics:\n",
        "        val = improvements[m]['percentage_improvement']\n",
        "        if val == float('inf'):\n",
        "            percentages.append(cap_value)\n",
        "        elif val == float('-inf'):\n",
        "            percentages.append(-cap_value)\n",
        "        else:\n",
        "            percentages.append(val)\n",
        "\n",
        "    # Create figure\n",
        "    plt.figure(figsize=(10, 8))\n",
        "\n",
        "    # Create horizontal bar chart\n",
        "    colors = ['green' if p > 0 else 'red' if p < 0 else 'gray' for p in percentages]\n",
        "    metric_labels = [m.replace('_', ' ').title() for m in metrics]\n",
        "\n",
        "    bars = plt.barh(metric_labels, percentages, color=colors, edgecolor='black', linewidth=0.7)\n",
        "\n",
        "    # Add value labels\n",
        "    for i, (bar, orig_val) in enumerate(zip(bars, [improvements[m]['percentage_improvement'] for m in metrics])):\n",
        "        width = bar.get_width()\n",
        "\n",
        "        if orig_val == float('inf'):\n",
        "            label = '+∞%'\n",
        "        elif orig_val == float('-inf'):\n",
        "            label = '-∞%'\n",
        "        else:\n",
        "            label = f'{orig_val:.1f}%'\n",
        "\n",
        "        ha = 'left' if width >= 0 else 'right'\n",
        "        offset = 3 if width >= 0 else -3\n",
        "\n",
        "        plt.text(width + offset, bar.get_y() + bar.get_height()/2,\n",
        "                label, va='center', ha=ha, fontweight='bold')\n",
        "\n",
        "    plt.axvline(x=0, color='black', linestyle='-', linewidth=1)\n",
        "    plt.xlabel('Percentage Improvement (%)', fontsize=14, fontweight='bold')\n",
        "    plt.title(f'Performance Improvements: {curr_name} vs {ref_name}',\n",
        "             fontsize=16, fontweight='bold')\n",
        "    plt.grid(True, axis='x', linestyle=':', alpha=0.7)\n",
        "    plt.tight_layout()\n",
        "\n",
        "    # Save figure\n",
        "    filename = f\"improvements_{curr_name.replace(' ', '_').lower()}_vs_{ref_name.replace(' ', '_').lower()}.png\"\n",
        "    plt.savefig(os.path.join(results_dir, filename), dpi=300, bbox_inches='tight')\n",
        "    plt.show()\n",
        "\n",
        "# Calculate and display improvements\n",
        "print(\"\\n\" + \"=\"*60)\n",
        "print(\"📈 IMPROVEMENT ANALYSIS\")\n",
        "print(\"=\"*60)\n",
        "\n",
        "# Tuned vs Base Gemma\n",
        "if tuned_gemma_metrics and base_gemma_metrics:\n",
        "    improvements_tuned = calculate_improvements(base_gemma_metrics, tuned_gemma_metrics)\n",
        "    print_improvement_summary(improvements_tuned, \"Base Gemma 3\", \"Fine-tuned Gemma 3\")\n",
        "    visualize_improvements(improvements_tuned, \"Base Gemma 3\", \"Fine-tuned Gemma 3\")\n",
        "\n",
        "# MedGemma vs Base Gemma\n",
        "if medgemma_metrics and base_gemma_metrics:\n",
        "    improvements_med = calculate_improvements(base_gemma_metrics, medgemma_metrics)\n",
        "    print_improvement_summary(improvements_med, \"Base Gemma 3\", \"MedGemma\")\n",
        "    visualize_improvements(improvements_med, \"Base Gemma 3\", \"MedGemma\")\n",
        "\n",
        "# MedGemma vs Tuned Gemma\n",
        "if medgemma_metrics and tuned_gemma_metrics:\n",
        "    improvements_med_vs_tuned = calculate_improvements(tuned_gemma_metrics, medgemma_metrics)\n",
        "    print_improvement_summary(improvements_med_vs_tuned, \"Fine-tuned Gemma 3\", \"MedGemma\")\n",
        "    visualize_improvements(improvements_med_vs_tuned, \"Fine-tuned Gemma 3\", \"MedGemma\")\n",
        "\n",
        "print(\"\\n\" + \"=\"*60)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "zKIZFkOhfFgN"
      },
      "source": [
        "## Summary and Conclusions\n",
        "\n",
        "This notebook has demonstrated a comprehensive evaluation of multimodal AI models for melanoma detection:\n",
        "\n",
        "### 🎯 Key Findings\n",
        "\n",
        "1. **Base Model Behavior**: The base Gemma 3 model typically shows a tendency to over-diagnose, with high recall but poor specificity\n",
        "2. **Fine-tuning Impact**: Domain-specific fine-tuning dramatically improves specificity and balanced accuracy\n",
        "3. **Medical Domain Models**: MedGemma demonstrates strong baseline performance but can still benefit from task-specific fine-tuning\n",
        "\n",
        "### 📊 Performance Insights\n",
        "\n",
        "- **Relative accuracy improvements** can exceed 1000% when moving from base to fine-tuned models, largely because the base model's accuracy is so low that small denominators inflate the relative figures\n",
        "- **Specificity** (correctly identifying benign lesions) shows the most dramatic improvements\n",
        "- **Balanced accuracy** is the most informative single measure of diagnostic capability here because it weights both classes equally\n",
        "\n",
        "### 🚀 Next Steps\n",
        "\n",
        "1. **Clinical Validation**: Work with medical professionals to validate model predictions\n",
        "2. **Dataset Expansion**: Include more diverse skin types and lesion types\n",
        "3. **Ensemble Methods**: Combine multiple models for improved reliability\n",
        "4. **Explainability**: Implement visualization techniques to understand model decisions\n",
        "5. **Production Deployment**: Create APIs and interfaces for clinical use\n",
        "\n",
        "### ⚠️ Important Considerations\n",
        "\n",
        "- These models are for **research and educational purposes only**\n",
        "- **Do not use** for actual medical diagnosis without proper validation\n",
        "- Always consult qualified healthcare professionals for medical decisions\n",
        "- Consider regulatory requirements (FDA, CE marking) for clinical deployment\n",
        "\n",
        "### 🔗 Resources\n",
        "\n",
        "- [Main Repository](https://github.com/ayoisio/gke-multimodal-fine-tune-gemma-3-axolotl)\n",
        "- [Axolotl Documentation](https://github.com/axolotl-ai-cloud/axolotl)\n",
        "- [SIIM-ISIC Challenge](https://www.kaggle.com/c/siim-isic-melanoma-classification)\n",
        "- [Google Cloud Healthcare AI](https://cloud.google.com/solutions/healthcare-life-sciences)\n",
        "\n",
        "---\n",
        "\n",
        "Thank you for following this evaluation notebook. The combination of Google Cloud's infrastructure and Axolotl's fine-tuning framework enables powerful domain-specific AI applications that can make a real difference in healthcare and beyond."
      ]
    }
  ]
}