{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Train an Image Classifier with TIMM Models\n",
    "\n",
    "[![image](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/opengeos/geoai/blob/main/docs/examples/train_timm_classifier.ipynb)\n",
    "\n",
    "This notebook demonstrates how to train an image classification model using the [PyTorch Image Models (timm)](https://github.com/huggingface/pytorch-image-models) library. The `geoai.timm_train` module provides a high-level API for training state-of-the-art computer vision models on remote sensing imagery.\n",
    "\n",
    "## Key Features\n",
    "\n",
    "- **1000+ Pre-trained Models**: Access to ResNet, EfficientNet, Vision Transformers (ViT), ConvNeXt, and more\n",
    "- **Multi-channel Support**: Train on RGB, RGBN (RGB + NIR), or any number of channels\n",
    "- **PyTorch Lightning Integration**: Automatic training loops, checkpointing, and early stopping\n",
    "- **Transfer Learning**: Fine-tune pretrained models or train from scratch\n",
    "\n",
    "## Install packages\n",
    "\n",
    "Uncomment and run the cell below to install the required packages."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# %pip install geoai-py timm lightning datasets"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Import libraries"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import os\n",
    "import geoai\n",
    "from geoai.timm_train import (\n",
    "    list_timm_models,\n",
    "    get_timm_model,\n",
    "    RemoteSensingDataset,\n",
    "    train_timm_classifier,\n",
    "    predict_with_timm,\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Explore Available Models\n",
    "\n",
    "The timm library provides over 1000 pretrained models. Let's explore some popular architectures:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# List ResNet models\n",
    "resnet_models = list_timm_models(filter=\"resnet\", limit=10)\n",
    "print(\"ResNet models:\", resnet_models)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# List EfficientNet models\n",
    "efficientnet_models = list_timm_models(filter=\"efficientnet\", limit=10)\n",
    "print(\"EfficientNet models:\", efficientnet_models)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# List Vision Transformer models\n",
    "vit_models = list_timm_models(filter=\"vit\", limit=10)\n",
    "print(\"Vision Transformer models:\", vit_models)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Download Sample Data\n",
    "\n",
    "For this example, we'll use the [EuroSAT RGB dataset](https://huggingface.co/datasets/timm/eurosat-rgb) from Hugging Face. The dataset contains 27,000 labeled Sentinel-2 satellite RGB image patches (64×64 pixels) across 10 land use/land cover classes:\n",
    "- AnnualCrop\n",
    "- Forest\n",
    "- HerbaceousVegetation\n",
    "- Highway\n",
    "- Industrial\n",
    "- Pasture\n",
    "- PermanentCrop\n",
    "- Residential\n",
    "- River\n",
    "- SeaLake"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from datasets import load_dataset\n",
    "import tempfile\n",
    "\n",
    "# Load EuroSAT RGB dataset from Hugging Face\n",
    "print(\"Loading EuroSAT dataset from Hugging Face...\")\n",
    "dataset = load_dataset(\"timm/eurosat-rgb\", split=\"train\")\n",
    "\n",
    "# Create a temporary directory to save images\n",
    "temp_dir = tempfile.mkdtemp(prefix=\"eurosat_\")\n",
    "print(f\"Saving images to: {temp_dir}\")\n",
    "\n",
    "# Save images to disk organized by class\n",
    "class_names = dataset.features[\"label\"].names\n",
    "print(f\"Classes: {class_names}\")\n",
    "\n",
    "for idx, sample in enumerate(dataset):\n",
    "    img = sample[\"image\"]\n",
    "    label = sample[\"label\"]\n",
    "    class_name = class_names[label]\n",
    "\n",
    "    # Create class directory\n",
    "    class_dir = os.path.join(temp_dir, class_name)\n",
    "    os.makedirs(class_dir, exist_ok=True)\n",
    "\n",
    "    # Save image as JPEG\n",
    "    img_path = os.path.join(class_dir, f\"{idx:05d}.jpg\")\n",
    "    img.save(img_path)\n",
    "\n",
    "print(f\"Saved {len(dataset)} images to {temp_dir}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Prepare Training Data\n",
    "\n",
    "Now we'll load all image paths and create train/val/test splits."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import glob\n",
    "from sklearn.model_selection import train_test_split\n",
    "\n",
    "# Get all image paths and labels\n",
    "image_paths = []\n",
    "labels = []\n",
    "\n",
    "for class_idx, class_name in enumerate(class_names):\n",
    "    class_dir = os.path.join(temp_dir, class_name)\n",
    "    class_images = sorted(glob.glob(os.path.join(class_dir, \"*.jpg\")))\n",
    "\n",
    "    image_paths.extend(class_images)\n",
    "    labels.extend([class_idx] * len(class_images))\n",
    "\n",
    "print(f\"Total images: {len(image_paths)}\")\n",
    "print(f\"Number of classes: {len(class_names)}\")\n",
    "print(\"Class distribution:\")\n",
    "for class_idx, class_name in enumerate(class_names):\n",
    "    count = labels.count(class_idx)\n",
    "    print(f\"  {class_name}: {count}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Split Data into Train, Validation, and Test Sets"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "train_paths, test_paths, train_labels, test_labels = train_test_split(\n",
    "    image_paths, labels, test_size=0.2, random_state=42, stratify=labels\n",
    ")\n",
    "\n",
    "train_paths, val_paths, train_labels, val_labels = train_test_split(\n",
    "    train_paths, train_labels, test_size=0.2, random_state=42, stratify=train_labels\n",
    ")\n",
    "\n",
    "print(f\"Training samples: {len(train_paths)}\")\n",
    "print(f\"Validation samples: {len(val_paths)}\")\n",
    "print(f\"Test samples: {len(test_paths)}\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import matplotlib.pyplot as plt\n",
    "from PIL import Image\n",
    "\n",
    "# Show one sample from each class\n",
    "fig, axes = plt.subplots(2, 5, figsize=(20, 8))\n",
    "\n",
    "for idx, class_name in enumerate(class_names):\n",
    "    ax = axes[idx // 5, idx % 5]\n",
    "\n",
    "    # Find first image of this class\n",
    "    img_idx = labels.index(idx)\n",
    "    img = Image.open(image_paths[img_idx])\n",
    "\n",
    "    ax.imshow(img)\n",
    "    ax.set_title(class_name, fontsize=12)\n",
    "    ax.axis(\"off\")\n",
    "\n",
    "plt.tight_layout()\n",
    "plt.show()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Create Datasets\n",
    "\n",
    "The `RemoteSensingDataset` class handles loading images with support for multi-channel imagery."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Create datasets\n",
    "train_dataset = RemoteSensingDataset(\n",
    "    image_paths=train_paths,\n",
    "    labels=train_labels,\n",
    "    num_channels=3,  # RGB images\n",
    ")\n",
    "\n",
    "val_dataset = RemoteSensingDataset(\n",
    "    image_paths=val_paths,\n",
    "    labels=val_labels,\n",
    "    num_channels=3,\n",
    ")\n",
    "\n",
    "test_dataset = RemoteSensingDataset(\n",
    "    image_paths=test_paths,\n",
    "    labels=test_labels,\n",
    "    num_channels=3,\n",
    ")\n",
    "\n",
    "print(f\"Train dataset size: {len(train_dataset)}\")\n",
    "print(f\"Validation dataset size: {len(val_dataset)}\")\n",
    "print(f\"Test dataset size: {len(test_dataset)}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Train a ResNet50 Classifier\n",
    "\n",
    "Let's train a ResNet50 model with pretrained ImageNet weights for transfer learning on the 10-class EuroSAT dataset."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Train ResNet50 classifier\n",
    "output_dir = \"timm_output/resnet50\"\n",
    "\n",
    "model = train_timm_classifier(\n",
    "    train_dataset=train_dataset,\n",
    "    val_dataset=val_dataset,\n",
    "    test_dataset=test_dataset,\n",
    "    model_name=\"resnet50\",\n",
    "    num_classes=len(class_names),  # 10 classes\n",
    "    in_channels=3,\n",
    "    pretrained=True,\n",
    "    output_dir=output_dir,\n",
    "    batch_size=32,\n",
    "    num_epochs=20,\n",
    "    learning_rate=1e-3,\n",
    "    weight_decay=1e-4,\n",
    "    num_workers=4,\n",
    "    freeze_backbone=False,\n",
    "    monitor_metric=\"val_acc\",\n",
    "    mode=\"max\",\n",
    "    patience=5,\n",
    "    save_top_k=1,\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Train an EfficientNet-B0 Classifier\n",
    "\n",
    "EfficientNet models provide an excellent balance between accuracy and efficiency."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Train EfficientNet-B0 classifier\n",
    "output_dir = \"timm_output/efficientnet_b0\"\n",
    "\n",
    "model = train_timm_classifier(\n",
    "    train_dataset=train_dataset,\n",
    "    val_dataset=val_dataset,\n",
    "    test_dataset=test_dataset,\n",
    "    model_name=\"efficientnet_b0\",\n",
    "    num_classes=len(class_names),\n",
    "    in_channels=3,\n",
    "    pretrained=True,\n",
    "    output_dir=output_dir,\n",
    "    batch_size=32,\n",
    "    num_epochs=20,\n",
    "    learning_rate=1e-3,\n",
    "    weight_decay=1e-4,\n",
    "    num_workers=4,\n",
    "    freeze_backbone=False,\n",
    "    monitor_metric=\"val_acc\",\n",
    "    mode=\"max\",\n",
    "    patience=5,\n",
    "    save_top_k=1,\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Fine-tuning with Frozen Backbone\n",
    "\n",
    "For faster training, you can freeze the backbone and only train the classification head:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Fine-tune only the classifier head\n",
    "output_dir = \"timm_output/resnet50_frozen\"\n",
    "\n",
    "model_frozen = train_timm_classifier(\n",
    "    train_dataset=train_dataset,\n",
    "    val_dataset=val_dataset,\n",
    "    test_dataset=test_dataset,\n",
    "    model_name=\"resnet50\",\n",
    "    num_classes=len(class_names),\n",
    "    in_channels=3,\n",
    "    pretrained=True,\n",
    "    freeze_backbone=True,  # Freeze backbone weights\n",
    "    output_dir=output_dir,\n",
    "    batch_size=32,\n",
    "    num_epochs=10,  # Fewer epochs needed\n",
    "    learning_rate=1e-3,\n",
    "    monitor_metric=\"val_acc\",\n",
    "    mode=\"max\",\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Make Predictions\n",
    "\n",
    "Use the trained model to make predictions on test images."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Load a trained model checkpoint\n",
    "from geoai.timm_train import TimmClassifier\n",
    "\n",
    "# Path to the last saved checkpoint from training\n",
    "checkpoint_path = \"timm_output/resnet50/models/last.ckpt\"\n",
    "\n",
    "# Load model\n",
    "model = TimmClassifier.load_from_checkpoint(checkpoint_path)\n",
    "\n",
    "# Make predictions\n",
    "predictions, probabilities = predict_with_timm(\n",
    "    model=model,\n",
    "    image_paths=test_paths[:20],  # Predict on first 20 test images\n",
    "    batch_size=8,\n",
    "    return_probabilities=True,\n",
    ")\n",
    "\n",
    "print(f\"Predictions shape: {predictions.shape}\")\n",
    "print(f\"Probabilities shape: {probabilities.shape}\")\n",
    "print(f\"Sample predictions: {[class_names[p] for p in predictions[:5]]}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Visualize Predictions"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import matplotlib.pyplot as plt\n",
    "from PIL import Image\n",
    "\n",
    "# Visualize predictions\n",
    "fig, axes = plt.subplots(4, 5, figsize=(20, 16))\n",
    "\n",
    "for idx, ax in enumerate(axes.flat):\n",
    "    if idx >= len(predictions):\n",
    "        break\n",
    "\n",
    "    # Load and display image\n",
    "    img = Image.open(test_paths[idx])\n",
    "\n",
    "    ax.imshow(img)\n",
    "    pred_class = class_names[predictions[idx]]\n",
    "    true_class = class_names[test_labels[idx]]\n",
    "    confidence = probabilities[idx][predictions[idx]] * 100\n",
    "\n",
    "    color = \"green\" if predictions[idx] == test_labels[idx] else \"red\"\n",
    "    ax.set_title(\n",
    "        f\"Pred: {pred_class}\\nTrue: {true_class}\\n({confidence:.1f}%)\",\n",
    "        color=color,\n",
    "        fontsize=10,\n",
    "    )\n",
    "    ax.axis(\"off\")\n",
    "\n",
    "plt.tight_layout()\n",
    "plt.show()"
   ]
  },
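  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Evaluate Accuracy on the Test Subset\n",
    "\n",
    "As a quick sanity check, we can compare the predictions above against the true labels with scikit-learn. Note this is a minimal sketch that scores only the 20 images predicted above, not the full test set."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from sklearn.metrics import accuracy_score\n",
    "\n",
    "# Compare predicted labels with ground truth for the images predicted above\n",
    "subset_labels = test_labels[: len(predictions)]\n",
    "subset_acc = accuracy_score(subset_labels, predictions)\n",
    "print(f\"Accuracy on {len(predictions)} test images: {subset_acc:.2%}\")"
   ]
  },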
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Using Class Weights for Imbalanced Datasets\n",
    "\n",
    "When dealing with imbalanced datasets, you can provide class weights to the loss function:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from sklearn.utils.class_weight import compute_class_weight\n",
    "import numpy as np\n",
    "\n",
    "# Compute class weights\n",
    "class_weights = compute_class_weight(\n",
    "    class_weight=\"balanced\", classes=np.unique(train_labels), y=train_labels\n",
    ")\n",
    "\n",
    "print(f\"Class weights: {class_weights}\")\n",
    "\n",
    "# Train with class weights\n",
    "output_dir = \"timm_output/resnet50_weighted\"\n",
    "\n",
    "model_weighted = train_timm_classifier(\n",
    "    train_dataset=train_dataset,\n",
    "    val_dataset=val_dataset,\n",
    "    model_name=\"resnet50\",\n",
    "    num_classes=len(class_names),\n",
    "    in_channels=3,\n",
    "    pretrained=True,\n",
    "    output_dir=output_dir,\n",
    "    batch_size=32,\n",
    "    num_epochs=20,\n",
    "    learning_rate=1e-3,\n",
    "    class_weights=class_weights.tolist(),  # Pass class weights\n",
    "    monitor_metric=\"val_acc\",\n",
    "    mode=\"max\",\n",
    ")"
   ]
  },
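  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For intuition, scikit-learn's `balanced` weights are computed as `n_samples / (n_classes * count_per_class)`, so rarer classes receive proportionally larger weights. A toy illustration:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "from sklearn.utils.class_weight import compute_class_weight\n",
    "\n",
    "# Toy labels: class 0 appears 4 times, class 1 appears once\n",
    "toy_labels = np.array([0, 0, 0, 0, 1])\n",
    "toy_weights = compute_class_weight(\n",
    "    class_weight=\"balanced\", classes=np.array([0, 1]), y=toy_labels\n",
    ")\n",
    "# n_samples / (n_classes * count): 5 / (2 * 4) = 0.625 and 5 / (2 * 1) = 2.5\n",
    "print(toy_weights)"
   ]
  },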
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Summary\n",
    "\n",
    "This notebook demonstrated:\n",
    "\n",
    "1. **Model Selection**: Exploring 1000+ available timm models (ResNet, EfficientNet, ViT)\n",
    "2. **Data Loading**: Using the EuroSAT RGB dataset from Hugging Face\n",
    "3. **Training**: Training various architectures on 10-class land cover classification\n",
    "4. **Transfer Learning**: Fine-tuning pretrained models with frozen backbones\n",
    "5. **Inference**: Making predictions and visualizations\n",
    "6. **Class Weighting**: Handling imbalanced datasets\n",
    "\n",
    "## Key Parameters\n",
    "\n",
    "- `model_name`: Choose from 1000+ timm models\n",
    "- `num_classes`: Number of output classes\n",
    "- `in_channels`: Number of input channels (3 for RGB, 4 for RGBN, etc.)\n",
    "- `pretrained`: Use ImageNet pretrained weights for transfer learning\n",
    "- `freeze_backbone`: Freeze backbone for faster fine-tuning\n",
    "- `class_weights`: Handle imbalanced datasets\n",
    "- `monitor_metric`: Track 'val_loss' or 'val_acc' for checkpointing\n",
    "- `patience`: Early stopping patience\n",
    "\n",
    "## Next Steps\n",
    "\n",
    "- Experiment with different model architectures (ConvNeXt, Swin Transformer, etc.)\n",
    "- Try data augmentation for improved performance\n",
    "- Use learning rate schedulers for better convergence\n",
    "- Deploy models for inference on satellite imagery"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "geo",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.12.2"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
