{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Patient Stratification using Multiple Instance Learning (MIL) Tutorial - Part 1\n",
    "\n",
    "## Introduction and Background\n",
    "\n",
    "Patient stratification is a critical task in precision medicine that aims to classify patients into different groups based on their clinical characteristics, treatment response, or survival outcomes. This tutorial demonstrates how to perform patient stratification using Multiple Instance Learning (MIL) on multiplex immunofluorescence images.\n",
    "\n",
    "## Workflow Overview\n",
    "\n",
    "This tutorial is organized into multiple parts:\n",
    "\n",
    "- **Part 1: Data Preparation Pipeline**\n",
    "    - **Patch Extraction:** Extract image patches from multiplex immunofluorescence images\n",
    "    - **Feature Extraction:** Extract deep learning features using pre-trained KRONOS models\n",
    "    - **H5AD Object Preparation:** Build AnnData objects for downstream analysis\n",
    "\n",
    "- **Part 2: MIL Analysis Pipeline**\n",
    "    - **Patient-level Data Aggregation:** Aggregate patch-level features to patient-level representations\n",
    "    - **MIL Model Training and Evaluation:** Train MIL models for patient stratification with cross-validation"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Input File Requirements\n",
    "Your dataset folder should be organized as follows:\n",
    "\n",
    "### Dataset Structure\n",
    "- **`dataset/multiplex_image/`**: Contains the multiplex image files (e.g., `.tif`).\n",
    "- **`dataset/multiplex_image/marker_info_with_metadata.csv`**: A CSV file containing marker metadata.\n",
    "- **`dataset/case_metadata.csv`**: A CSV file containing patient-level ground-truth annotations.\n",
    "\n",
    "### Output Directories\n",
    "The following directories will be generated inside the project directory as the workflow runs:\n",
    "- **`patches/`**: Stores the extracted image patches in `.h5` format.\n",
    "- **`features/`**: Stores the extracted features in `.npy` format.\n",
    "- **`h5ad_objects/`**: Stores the AnnData (`.h5ad`) objects built for downstream analysis.\n",
    "- **`folds/`**: Contains cross-validation folds for training, validation, and testing.\n",
    "- **`results/`**: Stores the results for each fold and aggregated metrics.\n"
   ]
  },
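  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "These output directories can be created up front so later steps never fail on a missing path. A minimal sketch, assuming the `project_dir` placeholder defined in Step 1:\n",
    "\n",
    "```python\n",
    "import os\n",
    "\n",
    "project_dir = \"path/to/project/directory/\"  # placeholder, as in Step 1\n",
    "\n",
    "# Create each output directory if it does not already exist\n",
    "for subdir in [\"patches\", \"features\", \"h5ad_objects\", \"results\"]:\n",
    "    os.makedirs(os.path.join(project_dir, subdir), exist_ok=True)\n",
    "```\n"
   ]
  },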
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Step 1: Experiment Configuration\n",
    "In this section, we define the configuration and hyperparameters for the patient stratification pipeline. Ensure your dataset folder is organized according to the structure described above."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import os\n",
    "import pandas as pd\n",
    "import numpy as np\n",
    "from utils.PatchExtraction_FeatureExtraction_h5ad import PatchExtraction, FeatureExtraction, H5ADBuilder\n",
    "\n",
    "# Define the root directory for the project\n",
    "project_dir = \"path/to/project/directory/\"  # Replace with your actual project directory\n",
    "\n",
    "# Configuration dictionary containing all parameters for the pipeline\n",
    "config = {\n",
    "    # Dataset-related parameters - modify these paths according to your dataset structure\n",
    "    \"image_dir\": f\"{project_dir}/multiplex_image/\",  # Path to multiplex image files\n",
    "    \"marker_csv_path\": f\"{project_dir}/multiplex_image/marker_info_with_metadata.csv\",  # Path to marker metadata CSV\n",
    "    \"patient_metadata_csv_path\": f\"{project_dir}/case_metadata.csv\",  # Path to patient metadata CSV\n",
    "    \n",
    "    # Output directories for intermediate and final results\n",
    "    \"patch_output_dir\": f\"{project_dir}/patches/\",  # Directory to save extracted patches\n",
    "    \"feature_output_dir\": f\"{project_dir}/features/\",  # Directory to save extracted features\n",
    "    \"h5ad_output_dir\": f\"{project_dir}/h5ad_objects/\",  # Directory to save AnnData objects\n",
    "    \"results_dir\": f\"{project_dir}/results/\",  # Directory for final results (Part 2)\n",
    "    \n",
    "    # Model-related parameters for KRONOS feature extraction\n",
    "    \"checkpoint_path\": \"hf_hub:MahmoodLab/kronos\",  # Pre-trained KRONOS model checkpoint\n",
    "    \"hf_auth_token\": None,  # Hugging Face authentication token (if required)\n",
    "    \"cache_dir\": f\"{project_dir}/models/cache/\",  # Directory to cache KRONOS model\n",
    "    \"model_type\": \"vits16\",  # Type of pre-trained model (vits16, vitl16)\n",
    "    \"token_overlap\": True,  # Whether to use token overlap during feature extraction\n",
    "    \n",
    "    # Patch extraction parameters\n",
    "    \"patch_size\": 128,  # Size of patches to extract (128x128 pixels)\n",
    "    \"stride\": 128,  # Stride for patch extraction (non-overlapping patches)\n",
    "    \"file_ext\": \".tif\",  # File extension of input images\n",
    "    \n",
    "    # Feature extraction parameters\n",
    "    \"nuclear_stain\": \"DAPI\",  # Name of nuclear stain marker\n",
    "    \"max_value\": 65535.0,  # Maximum possible pixel value (depends on image bit depth)\n",
    "    \"batch_size\": 4,  # Batch size for feature extraction\n",
    "    \"num_workers\": 4,  # Number of workers for data loading\n",
    "    \"extract_token_features\": True,  # Whether to extract patch token features\n",
    "    \n",
    "    # H5AD building parameters\n",
    "    \"model_name\": \"Kronos\",  # Model name for output file naming\n",
     "    \"dataset_name\": \"PatientStratification\",  # Dataset name used for output file naming\n",
     "    \"core_id_column\": \"TMA_core_num\",  # Column with core IDs in case_metadata.csv\n",
    "    \n",
    "    # Control flags for pipeline steps\n",
    "    \"verbose\": True,  # Whether to print detailed progress information\n",
    "}\n",
    "\n",
    "print(\"Configuration loaded successfully!\")\n",
    "print(f\"Project directory: {project_dir}\")\n",
    "print(f\"Patch size: {config['patch_size']}x{config['patch_size']}\")\n",
    "print(f\"Model: {config['model_name']} ({config['model_type']})\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Step 2: Patch Extraction\n",
    "In this step, we extract patches from the multiplex immunofluorescence images. The patches are extracted using a sliding window approach across the entire image and saved as HDF5 files containing individual marker datasets."
   ]
  },
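  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The sliding-window idea can be sketched in NumPy. This is an illustrative stand-in for `PatchExtraction`, not its actual implementation (which also handles multi-marker channels and HDF5 output):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "def sliding_window_patches(image, patch_size=128, stride=128):\n",
    "    \"\"\"Return (top-left coordinate, patch) pairs tiling the image.\"\"\"\n",
    "    h, w = image.shape[:2]\n",
    "    patches = []\n",
    "    for y in range(0, h - patch_size + 1, stride):\n",
    "        for x in range(0, w - patch_size + 1, stride):\n",
    "            patches.append(((y, x), image[y:y + patch_size, x:x + patch_size]))\n",
    "    return patches\n",
    "\n",
    "# A 512x512 single-marker image tiles into a 4x4 grid of 128x128 patches\n",
    "demo = np.zeros((512, 512), dtype=np.uint16)\n",
    "print(len(sliding_window_patches(demo)))  # 16\n",
    "```\n",
    "\n",
    "With `stride == patch_size` the patches are non-overlapping; a smaller stride would produce overlapping patches.\n"
   ]
  },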
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "print(\"=== Step 2: Patch Extraction ===\")\n",
    "\n",
    "# Initialize patch extraction\n",
    "patch_config = {\n",
    "    \"image_dir\": config[\"image_dir\"],\n",
    "    \"output_dir\": config[\"patch_output_dir\"],\n",
    "    \"marker_csv_path\": config[\"marker_csv_path\"],\n",
    "    \"patch_size\": config[\"patch_size\"],\n",
    "    \"stride\": config[\"stride\"],\n",
    "    \"file_ext\": config[\"file_ext\"]\n",
    "}\n",
    "\n",
    "# Create patch extractor\n",
    "patch_extractor = PatchExtraction(patch_config)\n",
    "\n",
    "# Extract patches from all images in the directory\n",
    "print(f\"Starting patch extraction from images in: {config['image_dir']}\")\n",
    "print(f\"Patch size: {config['patch_size']}x{config['patch_size']}\")\n",
    "print(f\"Stride: {config['stride']} (non-overlapping patches)\")\n",
    "\n",
    "# Option 1: Extract from all images in directory\n",
    "patch_results = patch_extractor.extract_all_patches()\n",
    "\n",
    "# Option 2: Extract from specific image files (uncomment if needed)\n",
     "# specific_files = [\"sample_001.tif\", \"sample_002.tif\"]  # Replace with your files\n",
    "# patch_results = patch_extractor.extract_all_patches(file_list=specific_files)\n",
    "\n",
    "# Display results\n",
    "print(\"\\nPatch extraction completed!\")\n",
    "print(\"Summary:\")\n",
    "total_patches = sum(patch_results.values())\n",
    "print(f\"- Total images processed: {len(patch_results)}\")\n",
    "print(f\"- Total patches extracted: {total_patches}\")\n",
     "if patch_results:\n",
     "    print(f\"- Average patches per image: {total_patches/len(patch_results):.1f}\")\n",
    "\n",
    "# Show detailed results for each image\n",
    "print(\"\\nDetailed results:\")\n",
    "for image_name, patch_count in patch_results.items():\n",
    "    print(f\"  {image_name}: {patch_count} patches\")\n",
    "\n",
    "print(f\"\\nPatches saved to: {config['patch_output_dir']}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Step 3: Feature Extraction\n",
    "In this step, we extract deep learning features from the patches using the pre-trained KRONOS model. The features are saved as numpy arrays for downstream analysis."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "print(\"=== Step 3: Feature Extraction ===\")\n",
    "\n",
    "# Configure feature extraction parameters\n",
    "feature_config = {\n",
    "    \"dataset_dir\": os.path.join(config[\"patch_output_dir\"], f\"{config['patch_size']}_{config['stride']}\"),\n",
    "    \"feature_dir\": config[\"feature_output_dir\"],\n",
    "    \"checkpoint_path\": config[\"checkpoint_path\"],\n",
    "    \"hf_auth_token\": config[\"hf_auth_token\"],\n",
    "    \"cache_dir\": config[\"cache_dir\"],\n",
    "    \"model_type\": config[\"model_type\"],\n",
    "    \"token_overlap\": config[\"token_overlap\"],\n",
    "    \"marker_info\": config[\"marker_csv_path\"],\n",
    "    \"nuclear_stain\": config[\"nuclear_stain\"],\n",
    "    \"max_value\": config[\"max_value\"],\n",
    "    \"batch_size\": config[\"batch_size\"],\n",
    "    \"num_workers\": config[\"num_workers\"]\n",
    "}\n",
    "\n",
    "# Create feature extractor\n",
    "print(f\"Initializing KRONOS model: {config['model_type']}\")\n",
    "print(f\"Loading model from: {config['checkpoint_path']}\")\n",
    "\n",
    "feature_extractor = FeatureExtraction(feature_config)\n",
    "\n",
    "# Extract features from all patches\n",
    "print(f\"Starting feature extraction from patches in: {feature_config['dataset_dir']}\")\n",
    "print(f\"Batch size: {config['batch_size']}\")\n",
    "print(f\"Extract token features: {config['extract_token_features']}\")\n",
    "\n",
    "num_processed = feature_extractor.extract_features_from_patches(\n",
    "    token_features=config[\"extract_token_features\"]\n",
    ")\n",
    "\n",
    "print(f\"\\nFeature extraction completed!\")\n",
    "print(f\"- Total patches processed: {num_processed}\")\n",
    "print(f\"- Features saved to: {config['feature_output_dir']}\")\n",
    "\n",
    "# Display feature types extracted\n",
    "feature_types = []\n",
    "if os.path.exists(os.path.join(config[\"feature_output_dir\"], \"norm_clstoken\")):\n",
    "    cls_count = len(os.listdir(os.path.join(config[\"feature_output_dir\"], \"norm_clstoken\")))\n",
    "    feature_types.append(f\"CLS token features: {cls_count} files\")\n",
    "\n",
    "if config[\"extract_token_features\"] and os.path.exists(os.path.join(config[\"feature_output_dir\"], \"norm_patchtokens\")):\n",
    "    token_count = len(os.listdir(os.path.join(config[\"feature_output_dir\"], \"norm_patchtokens\")))\n",
    "    feature_types.append(f\"Patch token features: {token_count} files\")\n",
    "\n",
    "print(\"Feature types extracted:\")\n",
    "for feature_type in feature_types:\n",
    "    print(f\"  - {feature_type}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Step 4: H5AD Object Preparation\n",
    "In this final step of Part 1, we build AnnData (h5ad) objects from the extracted features. These objects combine the features with metadata and can be used for downstream analysis including scanpy workflows and MIL analysis."
   ]
  },
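  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Conceptually, the builder attaches each patch embedding to the patient-level row whose core ID matches. A toy pandas sketch (the column names here are illustrative, not the builder's actual schema):\n",
    "\n",
    "```python\n",
    "import pandas as pd\n",
    "\n",
    "# Toy patch-level table: one row per patch, keyed by core ID\n",
    "patches = pd.DataFrame({\n",
    "    \"TMA_core_num\": [\"core_1\", \"core_1\", \"core_2\"],\n",
    "    \"feature_file\": [\"p0.npy\", \"p1.npy\", \"p2.npy\"],\n",
    "})\n",
    "\n",
    "# Toy patient metadata: one row per core\n",
    "metadata = pd.DataFrame({\n",
    "    \"TMA_core_num\": [\"core_1\", \"core_2\"],\n",
    "    \"response\": [\"R\", \"NR\"],\n",
    "})\n",
    "\n",
    "# Left-merge so every patch inherits its patient's annotations\n",
    "obs = patches.merge(metadata, on=\"TMA_core_num\", how=\"left\")\n",
    "print(obs[\"response\"].tolist())  # ['R', 'R', 'NR']\n",
    "```\n"
   ]
  },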
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "print(\"=== Step 4: H5AD Object Preparation ===\")\n",
    "\n",
    "# Configure H5AD building parameters\n",
    "h5ad_config = {\n",
    "    \"embedding_path\": os.path.join(config[\"feature_output_dir\"], \"norm_clstoken\"),  # Using CLS token features\n",
    "    \"output_dir\": config[\"h5ad_output_dir\"],\n",
    "    \"metadata_path\": config[\"patient_metadata_csv_path\"],\n",
    "    \"model_name\": config[\"model_name\"],\n",
    "    \"patch_size\": f\"{config['patch_size']}_{config['stride']}\",\n",
    "    \"dataset_name\": config[\"dataset_name\"],\n",
    "    \"core_id_column\": config[\"core_id_column\"]\n",
    "}\n",
    "\n",
    "# Create H5AD builder\n",
    "print(f\"Building H5AD object from features in: {h5ad_config['embedding_path']}\")\n",
    "print(f\"Using metadata from: {config['patient_metadata_csv_path']}\")\n",
    "\n",
    "h5ad_builder = H5ADBuilder(h5ad_config)\n",
    "\n",
    "# Build H5AD object\n",
    "h5ad_path = h5ad_builder.build_h5ad()\n",
    "\n",
    "if h5ad_path:\n",
    "    print(f\"\\nH5AD object successfully created!\")\n",
    "    print(f\"Saved to: {h5ad_path}\")\n",
    "    \n",
    "    # Load and display basic information about the H5AD object\n",
    "    import scanpy as sc\n",
    "    adata = sc.read_h5ad(h5ad_path)\n",
    "    \n",
    "    print(f\"\\nH5AD Object Summary:\")\n",
    "    print(f\"- Shape: {adata.shape[0]} observations × {adata.shape[1]} features\")\n",
    "    print(f\"- Unique patients/cores: {adata.obs[config['core_id_column']].nunique()}\")\n",
    "    print(f\"- Available metadata columns: {list(adata.obs.columns)}\")\n",
    "    \n",
    "    # Display sample distribution if response column exists\n",
    "    if 'response' in adata.obs.columns:\n",
    "        response_counts = adata.obs['response'].value_counts()\n",
    "        print(f\"- Response distribution:\")\n",
    "        for response, count in response_counts.items():\n",
    "            print(f\"  {response}: {count} patches\")\n",
    "    \n",
    "    print(f\"\\nThe H5AD object is ready for downstream analysis!\")\n",
    "    print(f\"You can now proceed with:\")\n",
    "    print(f\"  - Scanpy analysis workflows\")\n",
    "    print(f\"  - Patient stratification using MIL (Part 2)\")\n",
    "    print(f\"  - Dimensionality reduction and visualization\")\n",
    "    \n",
    "else:\n",
    "    print(\"Failed to create H5AD object. Please check the configuration and try again.\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Patient Stratification using Multiple Instance Learning (MIL) Tutorial - Part 2\n",
    "\n",
    "## MIL Analysis Pipeline\n",
    "\n",
    "Welcome to Part 2 of the patient stratification tutorial! In this section, we will perform Multiple Instance Learning (MIL) analysis on the prepared data from Part 1. This tutorial focuses on patient-level classification using patch-level features with rigorous cross-validation evaluation.\n",
    "\n",
    "---\n",
    "\n",
    "## Overview of Part 2\n",
    "\n",
    "This part covers the advanced MIL analysis pipeline:\n",
    "\n",
    "- **Step 4: MIL Dataset Preparation**  \n",
    "  Convert H5AD objects to MIL format for training\n",
    "\n",
    "- **Step 5: Cross-Validation Analysis**  \n",
    "  Perform repeated stratified k-fold cross-validation\n",
    "\n",
    "- **Step 6: Model Training and Evaluation**  \n",
    "  Train MIL models with comprehensive evaluation\n",
    "\n",
    "- **Step 7: Results Analysis and Visualization**  \n",
    "  Analyze performance across different feature types\n",
    "\n",
    "---\n",
    "\n",
    "## Multiple Instance Learning (MIL) Background\n",
    "\n",
    "In MIL, we work with _\"bags\"_ (patients) containing multiple _\"instances\"_ (image patches). The goal is to predict patient-level labels using patch-level features without requiring patch-level annotations. This is particularly valuable in medical imaging where:\n",
    "\n",
    "- **Patients are bags** with binary labels (e.g., responder vs. non-responder)\n",
    "- **Image patches are instances** with rich feature representations\n",
    "- **Only patient-level labels** are available for training\n",
    "\n",
    "Our MIL model aggregates patch-level features into patient-level representations using neural networks with learnable aggregation functions.\n",
    "\n",
    "---\n"
   ]
  },
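  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The aggregation step can be illustrated with a tiny forward pass: patch features in a bag are mean-pooled into one patient-level vector before classification. The real model in `utils.patient_stratification` is a learned PyTorch network; this NumPy sketch only shows the pooling idea:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "def mil_bag_score(instances, w, b):\n",
    "    \"\"\"Score one bag: mean-pool instance features, then a linear head + sigmoid.\"\"\"\n",
    "    bag_repr = instances.mean(axis=0)    # (d,) patient-level representation\n",
    "    logit = bag_repr @ w + b             # linear classification head\n",
    "    return 1.0 / (1.0 + np.exp(-logit))  # probability of the positive class\n",
    "\n",
    "rng = np.random.default_rng(0)\n",
    "bag = rng.normal(size=(30, 8))  # one patient: 30 patches, 8-dim features\n",
    "print(mil_bag_score(bag, np.zeros(8), b=0.0))  # 0.5 with an untrained (zero) head\n",
    "```\n",
    "\n",
    "Swapping the mean for a max gives the max-pooling variant selectable via `model_aggregation` in the configuration below.\n"
   ]
  },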
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Step 4: MIL Dataset Preparation and Configuration\n",
     "In this step, we configure the MIL analysis pipeline and prepare datasets from the H5AD objects created in Part 1. We assume you have already built an H5AD object in Part 1; supply its path in the configuration below."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "import numpy as np\n",
    "import pandas as pd\n",
    "from utils.patient_stratification import PatientStratification, PatientStratificationClassifier\n",
    "\n",
    "print(\"=== Step 4: MIL Dataset Preparation and Configuration ===\")\n",
    "\n",
    "# Define the root directory for the project\n",
     "project_dir = \"path/to/project/directory/\"  # Replace with your actual project directory\n",
    "\n",
    "# MIL Analysis Configuration\n",
    "mil_config = {\n",
    "    # Input data (from Part 1)\n",
    "    \"h5ad_path\": f\"{project_dir}/object_h5ad.h5ad\",  # Path to H5AD file from Part 1\n",
    "    \n",
    "    # Output directory for MIL results\n",
    "    \"output_dir\": f\"{project_dir}/mil_results/\",\n",
    "    \n",
    "    # Data parameters - must match your H5AD object structure\n",
    "    \"patient_col\": \"Patients\",  # Column name for patient identifiers in H5AD\n",
    "    \"label_col\": \"Response\",   # Column name for response labels (R/NR or 1/0)\n",
    "    \n",
    "    # MIL model architecture parameters\n",
    "    \"n_neurons\": 256,          # Number of neurons in the first hidden layer\n",
    "    \"hidden_layers\": [],       # Additional hidden layers (e.g., [128, 64] for two extra layers)\n",
    "    \"dropout_rate\": 0.0,       # Dropout rate for regularization\n",
    "    \"model_aggregation\": torch.mean,  # Aggregation function (torch.mean or torch.max)\n",
    "    \n",
    "    # Training parameters\n",
    "    \"lr\": 1e-3,               # Learning rate\n",
    "    \"weight_decay\": 1e-5,     # L2 regularization weight\n",
    "    \"n_epochs\": 50,           # Number of training epochs per fold\n",
    "    \"batch_size\": 4,          # Batch size for training\n",
    "    \"normalize_data\": False,   # Whether to normalize input features\n",
    "    \n",
    "    # Learning rate scheduler\n",
    "    \"lr_scheduler\": True,     # Whether to use learning rate scheduling\n",
    "    \"lr_step_size\": 50,       # Step size for LR decay\n",
    "    \"lr_gamma\": 0.5,          # Gamma for LR decay\n",
    "    \n",
    "    # Cross-validation parameters\n",
    "    \"n_repeats\": 2,          # Number of repeated cross-validation runs\n",
    "    \"n_folds\": 5,             # Number of folds per repetition\n",
    "    \n",
    "    # Feature types to benchmark (different dimensionality reduction methods)\n",
    "    \"feature_types\": ['pca50', 'pca100'],\n",
    "    \n",
    "    # Logging and output control\n",
    "    \"verbose\": False,          # Detailed progress logging\n",
    "    \"loss_log_interval\": 10,  # Log training loss every N epochs\n",
    "}\n",
    "\n",
    "# Validate H5AD file exists\n",
    "import os\n",
    "if not os.path.exists(mil_config[\"h5ad_path\"]):\n",
    "    raise FileNotFoundError(f\"H5AD file not found: {mil_config['h5ad_path']}\")\n",
    "    \n",
    "print(f\"✓ H5AD file found: {mil_config['h5ad_path']}\")\n",
    "print(f\"✓ Output directory: {mil_config['output_dir']}\")\n",
    "print(f\"✓ Cross-validation: {mil_config['n_repeats']} repeats × {mil_config['n_folds']} folds = {mil_config['n_repeats'] * mil_config['n_folds']} total runs\")\n",
    "print(f\"✓ Feature types to test: {mil_config['feature_types']}\")\n",
    "\n",
    "# Initialize the patient stratification system\n",
    "patient_stratification = PatientStratification(mil_config)\n",
    "print(\"✓ MIL analysis system initialized successfully!\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Step 5: Cross-Validation Analysis Setup\n",
    "Before running the full analysis, let's examine our data and set up the cross-validation framework."
   ]
  },
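  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Crucially, splits are made at the patient level, so all patches from one patient land in the same fold and no patient leaks between training and testing. A sketch with scikit-learn, assuming one binary label per patient:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "from sklearn.model_selection import StratifiedKFold\n",
    "\n",
    "patients = np.array([f\"P{i}\" for i in range(10)])\n",
    "labels = np.array([0, 1] * 5)  # one binary label per patient\n",
    "\n",
    "skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)\n",
    "for fold, (train_idx, test_idx) in enumerate(skf.split(patients, labels)):\n",
    "    # In the real pipeline, patches are then selected by patient membership\n",
    "    test_patients = set(patients[test_idx])\n",
    "    print(f\"Fold {fold}: held-out patients = {sorted(test_patients)}\")\n",
    "```\n",
    "\n",
    "Repeating this split with different random seeds gives the repeated cross-validation used in Step 6.\n"
   ]
  },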
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "print(\"=== Step 5: Cross-Validation Analysis Setup ===\")\n",
    "\n",
    "# Load and examine the H5AD data\n",
    "import scanpy as sc\n",
    "adata = sc.read_h5ad(mil_config[\"h5ad_path\"])\n",
    "\n",
    "print(f\"Dataset Overview:\")\n",
    "print(f\"  Total observations (patches): {adata.shape[0]:,}\")\n",
    "print(f\"  Feature dimensions: {adata.shape[1]:,}\")\n",
    "print(f\"  Patients/cores: {adata.obs[mil_config['patient_col']].nunique()}\")\n",
    "\n",
    "# Examine label distribution\n",
    "if mil_config[\"label_col\"] in adata.obs.columns:\n",
    "    label_counts = adata.obs.groupby(mil_config[\"patient_col\"])[mil_config[\"label_col\"]].first().value_counts()\n",
    "    print(f\"\\nPatient-level label distribution:\")\n",
    "    for label, count in label_counts.items():\n",
    "        print(f\"  {label}: {count} patients\")\n",
    "    \n",
    "    # Check for class imbalance\n",
    "    min_class = label_counts.min()\n",
    "    max_class = label_counts.max()\n",
    "    imbalance_ratio = max_class / min_class\n",
    "    print(f\"  Class imbalance ratio: {imbalance_ratio:.2f}\")\n",
    "    if imbalance_ratio > 3:\n",
     "        print(\"  Warning: significant class imbalance detected - consider stratified sampling\")\n",
    "else:\n",
    "    print(f\"Warning: Label column '{mil_config['label_col']}' not found in data\")\n",
    "\n",
    "# Examine patches per patient distribution\n",
    "patches_per_patient = adata.obs[mil_config[\"patient_col\"]].value_counts()\n",
    "print(f\"\\nPatches per patient statistics:\")\n",
    "print(f\"  Mean: {patches_per_patient.mean():.1f}\")\n",
    "print(f\"  Median: {patches_per_patient.median():.1f}\")\n",
    "print(f\"  Min: {patches_per_patient.min()}\")\n",
    "print(f\"  Max: {patches_per_patient.max()}\")\n",
    "print(f\"  Std: {patches_per_patient.std():.1f}\")\n",
    "\n",
    "# Display a few examples\n",
    "print(f\"\\nExample patients and their patch counts:\")\n",
    "for patient, count in patches_per_patient.head(5).items():\n",
    "    label = adata.obs[adata.obs[mil_config[\"patient_col\"]] == patient][mil_config[\"label_col\"]].iloc[0]\n",
    "    print(f\"  {patient}: {count} patches, label: {label}\")\n",
    "\n",
    "print(\"\\n✓ Data exploration completed - ready for MIL analysis!\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Step 6: MIL Model Training and Evaluation\n",
    "Now we'll run the complete MIL benchmarking across different feature types with repeated cross-validation.\n"
   ]
  },
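  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Per-fold AUCs are summarized as a mean with a 95% confidence interval. A normal-approximation sketch (the pipeline's `save_results_and_plot` may use a different CI formula):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "def summarize_aucs(aucs):\n",
    "    \"\"\"Mean and normal-approximation 95% CI of per-fold AUCs.\"\"\"\n",
    "    aucs = np.asarray(aucs, dtype=float)\n",
    "    mean = aucs.mean()\n",
    "    sem = aucs.std(ddof=1) / np.sqrt(len(aucs))  # standard error of the mean\n",
    "    return mean, mean - 1.96 * sem, mean + 1.96 * sem\n",
    "\n",
    "fold_aucs = [0.70, 0.75, 0.68, 0.72, 0.74]  # illustrative values\n",
    "m, lo, hi = summarize_aucs(fold_aucs)\n",
    "print(f\"AUC = {m:.3f} (95% CI {lo:.3f}-{hi:.3f})\")\n",
    "```\n"
   ]
  },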
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "print(\"=== Step 6: MIL Model Training and Evaluation ===\")\n",
    "\n",
    "# Initialize the MIL classifier\n",
    "classifier = PatientStratificationClassifier(mil_config)\n",
    "\n",
    "# Store results for all feature types\n",
    "all_results = {}\n",
    "summary_statistics = []\n",
    "\n",
    "print(f\"Starting comprehensive MIL benchmarking...\")\n",
    "print(f\"This will run {len(mil_config['feature_types'])} feature types × {mil_config['n_repeats']} repeats × {mil_config['n_folds']} folds\")\n",
    "print(f\"Total model training runs: {len(mil_config['feature_types']) * mil_config['n_repeats'] * mil_config['n_folds']}\")\n",
    "\n",
    "# Run benchmarking for each feature type\n",
    "for i, feature_type in enumerate(mil_config['feature_types']):\n",
    "    print(f\"\\n{'='*60}\")\n",
    "    print(f\"BENCHMARKING FEATURE TYPE {i+1}/{len(mil_config['feature_types'])}: {feature_type.upper()}\")\n",
    "    print(f\"{'='*60}\")\n",
    "    \n",
    "    try:\n",
    "        # Run repeated cross-validation for this feature type\n",
    "        results_df = classifier.run_cross_validation(\n",
    "            h5ad_path=mil_config[\"h5ad_path\"],\n",
    "            feature_type=feature_type\n",
    "        )\n",
    "        \n",
    "        # Save detailed results and compute summary statistics\n",
    "        summary_stats = classifier.save_results_and_plot(\n",
    "            results_df=results_df,\n",
    "            feature_type=feature_type,\n",
    "            output_dir=mil_config[\"output_dir\"]\n",
    "        )\n",
    "        \n",
    "        # Store results\n",
    "        all_results[feature_type] = results_df\n",
    "        summary_statistics.append(summary_stats)\n",
    "        \n",
    "        # Display immediate results for this feature type\n",
    "        mean_auc = results_df['test_auc'].mean()\n",
    "        std_auc = results_df['test_auc'].std()\n",
     "        print(f\"\\n{feature_type.upper()} Results Summary:\")\n",
    "        print(f\"   Mean AUC: {mean_auc:.4f} ± {std_auc:.4f}\")\n",
    "        print(f\"   95% CI: [{summary_stats['ci_lower']:.4f}, {summary_stats['ci_upper']:.4f}]\")\n",
    "        print(f\"   Best fold AUC: {results_df['test_auc'].max():.4f}\")\n",
    "        print(f\"   Worst fold AUC: {results_df['test_auc'].min():.4f}\")\n",
    "        \n",
    "    except Exception as e:\n",
     "        print(f\"Error processing {feature_type}: {str(e)}\")\n",
    "        continue\n",
    "\n",
    "print(f\"\\n{'='*60}\")\n",
     "print(\"MIL BENCHMARKING COMPLETED!\")\n",
    "print(f\"{'='*60}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Step 7: Results Analysis and Visualization\n",
    "Finally, let's analyze and visualize the comprehensive results across all feature types."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "print(\"=== Step 7: Results Analysis and Visualization ===\")\n",
    "\n",
    "# Convert to DataFrame for easy analysis\n",
    "comparison_df = pd.DataFrame(summary_statistics)\n",
    "\n",
    "# Create and save comprehensive visualization\n",
     "patient_stratification._create_comparison_plot(comparison_df, mil_config[\"output_dir\"])\n",
     "\n",
     "# Summarize the files written during the analysis\n",
     "print(f\"Results saved to: {mil_config['output_dir']}\")\n",
     "print(\"  • Per-fold results: *_fold_auc.csv\")\n",
     "print(\"  • Summary statistics: *_summary.csv\")\n",
     "print(\"  • Combined comparison: combined_feature_comparison.csv\")\n",
     "print(\"  • Visualization: feature_type_comparison.png\")"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "kronos",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.10.16"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
