{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Train an Instance Segmentation Model using Mask R-CNN\n",
    "\n",
    "[![image](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/opengeos/geoai/blob/main/docs/examples/train_instance_segmentation_model.ipynb)\n",
    "\n",
    "This notebook demonstrates how to train instance segmentation models for object detection (e.g., building detection) using Mask R-CNN. Unlike semantic segmentation, instance segmentation can distinguish between individual objects of the same class, providing separate masks for each instance.\n",
    "\n",
    "## Install packages\n",
    "\n",
    "To use the new functionality, ensure the required packages are installed."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# %pip install geoai-py"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Import libraries"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import geoai"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Download sample data\n",
    "\n",
    "We'll use the same dataset as the semantic segmentation example for consistency."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "train_raster_url = (\n",
    "    \"https://huggingface.co/datasets/giswqs/geospatial/resolve/main/naip_rgb_train.tif\"\n",
    ")\n",
    "train_vector_url = \"https://huggingface.co/datasets/giswqs/geospatial/resolve/main/naip_train_buildings.geojson\"\n",
    "test_raster_url = (\n",
    "    \"https://huggingface.co/datasets/giswqs/geospatial/resolve/main/naip_test.tif\"\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "train_raster_path = geoai.download_file(train_raster_url)\n",
    "train_vector_path = geoai.download_file(train_vector_url)\n",
    "test_raster_path = geoai.download_file(test_raster_url)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Visualize sample data"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "geoai.get_raster_info(train_raster_path)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "style_dict = {\n",
    "    \"color\": \"#ff0000\",\n",
    "    \"weight\": 2,\n",
    "    \"opacity\": 1,\n",
    "    # \"fill\": True,\n",
    "    # \"fillColor\": \"#ffffff\",\n",
    "    \"fillOpacity\": 0,\n",
    "    # \"dashArray\": \"9\"\n",
    "    # \"clickable\": True,\n",
    "}\n",
    "style_function = lambda x: style_dict\n",
    "\n",
    "geoai.view_vector_interactive(\n",
    "    train_vector_path, tiles=train_raster_path, style_function=style_function\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "geoai.view_raster(test_raster_path)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Create training data\n",
    "\n",
    "We'll create training tiles from the imagery and vector labels."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "out_folder = \"buildings_instance\"\n",
    "tiles = geoai.export_geotiff_tiles(\n",
    "    in_raster=train_raster_path,\n",
    "    out_folder=out_folder,\n",
    "    in_class_data=train_vector_path,\n",
    "    tile_size=512,\n",
    "    stride=256,\n",
    "    buffer_radius=0,\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Train instance segmentation model\n",
    "\n",
    "Now we'll train an instance segmentation model using the `train_instance_segmentation_model` function. This function uses Mask R-CNN, which is specifically designed for instance segmentation tasks.\n",
    "\n",
    "### Key Differences from Semantic Segmentation:\n",
    "\n",
    "- **Instance Segmentation**: Identifies and segments each individual object separately (e.g., distinguishes Building A from Building B)\n",
    "- **Semantic Segmentation**: Only classifies pixels into categories (all buildings are treated as one class)\n",
    "\n",
    "### Model Architecture:\n",
    "\n",
    "Mask R-CNN combines:\n",
    "- **Faster R-CNN** for object detection (bounding boxes)\n",
    "- **FCN** for pixel-level segmentation (masks)\n",
    "- **ResNet-50 + FPN** backbone for feature extraction\n",
    "\n",
    "### Training Parameters:\n",
    "\n",
    "- `num_classes`: Number of classes including background (default: 2 for background + buildings)\n",
    "- `num_channels`: Number of input channels (3 for RGB, 4 for RGBN)\n",
    "- `batch_size`: Typically smaller than semantic segmentation (4-8) due to model complexity\n",
    "- `num_epochs`: Number of training epochs\n",
    "- `learning_rate`: Initial learning rate (default: 0.005)\n",
    "- `val_split`: Fraction of data for validation (default: 0.2)"
   ]
  },
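   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
     "To make the distinction concrete, here is a minimal sketch (independent of `geoai`, assuming `numpy` and `scipy` are available) that turns a toy semantic mask into instance labels via connected-component labeling. Mask R-CNN learns to separate instances directly, but this illustrates what an instance mask conveys that a semantic mask does not:"
    ]
   },
   {
    "cell_type": "code",
    "execution_count": null,
    "metadata": {},
    "outputs": [],
    "source": [
     "import numpy as np\n",
     "from scipy import ndimage\n",
     "\n",
     "# Toy semantic mask: 1 = building, 0 = background (hypothetical data)\n",
     "semantic = np.array(\n",
     "    [\n",
     "        [1, 1, 0, 0, 1],\n",
     "        [1, 1, 0, 0, 1],\n",
     "        [0, 0, 0, 0, 0],\n",
     "        [1, 0, 0, 1, 1],\n",
     "    ]\n",
     ")\n",
     "\n",
     "# Connected-component labeling assigns a distinct id to each object,\n",
     "# so individual buildings become separable (an instance mask)\n",
     "instances, num_instances = ndimage.label(semantic)\n",
     "print(f\"Number of instances: {num_instances}\")\n",
     "print(instances)"
    ]
   },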
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Train Mask R-CNN model\n",
    "geoai.train_instance_segmentation_model(\n",
    "    images_dir=f\"{out_folder}/images\",\n",
    "    labels_dir=f\"{out_folder}/labels\",\n",
    "    output_dir=f\"{out_folder}/instance_models\",\n",
    "    num_classes=2,  # background + building\n",
    "    num_channels=3,\n",
    "    batch_size=4,\n",
    "    num_epochs=10,\n",
    "    learning_rate=0.005,\n",
    "    val_split=0.2,\n",
    "    visualize=True,\n",
    "    verbose=True,\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Run inference\n",
    "\n",
    "Now we'll use the trained model to make predictions on the test image. The `instance_segmentation` function performs sliding window inference to handle large images."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Define paths\n",
    "masks_path = \"naip_test_instance_prediction.tif\"\n",
    "model_path = f\"{out_folder}/instance_models/best_model.pth\""
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Run instance segmentation inference\n",
    "geoai.instance_segmentation(\n",
    "    input_path=test_raster_path,\n",
    "    output_path=masks_path,\n",
    "    model_path=model_path,\n",
    "    num_classes=2,\n",
    "    num_channels=3,\n",
    "    window_size=512,\n",
    "    overlap=256,\n",
    "    confidence_threshold=0.5,\n",
    "    batch_size=4,\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Adjust confidence threshold (optional)\n",
    "\n",
    "You can control which predictions to keep by adjusting the confidence threshold. Higher values (e.g., 0.7) will be more conservative and only keep high-confidence detections, while lower values (e.g., 0.3) will be more permissive."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Run inference with higher confidence threshold\n",
    "masks_path_high_conf = \"naip_test_instance_prediction_high_conf.tif\"\n",
    "\n",
    "geoai.instance_segmentation(\n",
    "    input_path=test_raster_path,\n",
    "    output_path=masks_path_high_conf,\n",
    "    model_path=model_path,\n",
    "    num_classes=2,\n",
    "    num_channels=3,\n",
    "    window_size=512,\n",
    "    overlap=256,\n",
    "    confidence_threshold=0.7,  # Higher threshold for more confident predictions\n",
    "    batch_size=4,\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Vectorize masks\n",
    "\n",
    "Convert the predicted mask to vector format for better visualization and analysis."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "output_vector_path = \"naip_test_instance_prediction.geojson\"\n",
    "gdf = geoai.orthogonalize(masks_path, output_vector_path, epsilon=2)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Add geometric properties\n",
    "\n",
    "Calculate area, perimeter, and other geometric properties for each detected building."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "gdf_props = geoai.add_geometric_properties(gdf, area_unit=\"m2\", length_unit=\"m\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Visualize results"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "geoai.view_raster(\n",
    "    masks_path, nodata=0, cmap=\"tab20\", basemap=test_raster_path, backend=\"ipyleaflet\"\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "geoai.view_vector_interactive(gdf_props, column=\"area_m2\", tiles=test_raster_path)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Filter by area\n",
    "\n",
    "Filter out small detections that might be noise or artifacts."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "gdf_filtered = gdf_props[(gdf_props[\"area_m2\"] > 50)]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "geoai.view_vector_interactive(gdf_filtered, column=\"area_m2\", tiles=test_raster_path)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Compare predictions with imagery"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "geoai.create_split_map(\n",
    "    left_layer=gdf_filtered,\n",
    "    right_layer=test_raster_path,\n",
    "    left_args={\"style\": {\"color\": \"red\", \"fillOpacity\": 0.2}},\n",
    "    basemap=test_raster_path,\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Model Performance Analysis\n",
    "\n",
    "Let's examine the training curves and model performance:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "geoai.plot_performance_metrics(\n",
    "    history_path=f\"{out_folder}/instance_models/training_history.pth\",\n",
    "    figsize=(15, 5),\n",
    "    verbose=True,\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Instance vs Semantic Segmentation Comparison\n",
    "\n",
    "### When to use Instance Segmentation:\n",
    "\n",
    "1. **Individual object analysis**: When you need to count, measure, or analyze individual objects\n",
    "2. **Overlapping objects**: When objects of the same class may overlap or touch\n",
    "3. **Object tracking**: When tracking individual objects across frames or images\n",
    "4. **Spatial relationships**: When analyzing relationships between individual objects\n",
    "\n",
    "### When to use Semantic Segmentation:\n",
    "\n",
    "1. **Area coverage**: When you only need to know what percentage of an image contains a certain class\n",
    "2. **Land cover mapping**: For continuous features like vegetation, water, roads\n",
    "3. **Simpler models**: When you want faster training and inference\n",
    "4. **Pixel-level classification**: When object boundaries are less important\n",
    "\n",
    "### Model Outputs:\n",
    "\n",
    "**Instance Segmentation (Mask R-CNN)**:\n",
    "- Bounding boxes for each object\n",
    "- Confidence scores for each detection\n",
    "- Binary mask for each individual object\n",
    "- Class label for each object\n",
    "\n",
    "**Semantic Segmentation**:\n",
    "- Single multi-class mask covering the entire image\n",
    "- Probability map (optional)\n",
    "- No distinction between individual objects\n",
    "\n",
    "### Performance Considerations:\n",
    "\n",
    "| Aspect | Instance Segmentation | Semantic Segmentation |\n",
    "|--------|----------------------|----------------------|\n",
    "| **Training Time** | Slower (more complex model) | Faster |\n",
    "| **Inference Time** | Slower | Faster |\n",
    "| **Memory Usage** | Higher | Lower |\n",
    "| **Accuracy** | Better for distinct objects | Better for continuous classes |\n",
    "| **Typical Batch Size** | 2-8 | 8-32 |\n",
    "\n",
    "### Metrics:\n",
    "\n",
    "**Instance Segmentation Metrics**:\n",
    "- **AP (Average Precision)**: Precision at different IoU thresholds\n",
    "- **AP@0.5**: Average Precision at IoU threshold of 0.5\n",
    "- **AP@0.75**: Average Precision at IoU threshold of 0.75\n",
    "- **AR (Average Recall)**: Recall averaged across IoU thresholds\n",
    "\n",
    "**Semantic Segmentation Metrics**:\n",
    "- **IoU (Intersection over Union)**: Overlap between prediction and ground truth\n",
    "- **Dice Score**: Similar to IoU but more sensitive to small objects\n",
    "- **Pixel Accuracy**: Percentage of correctly classified pixels"
   ]
  },
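   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
     "As a quick illustration of the semantic segmentation metrics listed above, the following `numpy` sketch computes IoU, Dice, and pixel accuracy on hypothetical binary masks (toy data, not model output):"
    ]
   },
   {
    "cell_type": "code",
    "execution_count": null,
    "metadata": {},
    "outputs": [],
    "source": [
     "import numpy as np\n",
     "\n",
     "# Hypothetical prediction and ground-truth masks (flattened for simplicity)\n",
     "pred = np.array([0, 1, 1, 1, 0, 0, 1, 0], dtype=bool)\n",
     "truth = np.array([0, 1, 1, 0, 0, 1, 1, 0], dtype=bool)\n",
     "\n",
     "intersection = np.logical_and(pred, truth).sum()\n",
     "union = np.logical_or(pred, truth).sum()\n",
     "\n",
     "iou = intersection / union\n",
     "dice = 2 * intersection / (pred.sum() + truth.sum())\n",
     "pixel_acc = (pred == truth).mean()\n",
     "\n",
     "print(f\"IoU: {iou:.3f}, Dice: {dice:.3f}, Pixel accuracy: {pixel_acc:.3f}\")"
    ]
   },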
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Batch Processing (Optional)\n",
    "\n",
    "If you have multiple images to process, you can use the batch inference function:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Uncomment to process multiple images\n",
    "# geoai.instance_segmentation_batch(\n",
    "#     input_dir=\"path/to/input/images\",\n",
    "#     output_dir=\"path/to/output/masks\",\n",
    "#     model_path=model_path,\n",
    "#     num_classes=2,\n",
    "#     num_channels=3,\n",
    "#     window_size=512,\n",
    "#     overlap=256,\n",
    "#     confidence_threshold=0.5,\n",
    "#     batch_size=4,\n",
    "# )"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Advanced: Multi-channel Input (RGBN)\n",
    "\n",
    "If your imagery includes a near-infrared (NIR) band, you can train with 4 channels:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Example for 4-channel (RGBN) imagery\n",
    "# geoai.train_instance_segmentation_model(\n",
    "#     images_dir=f\"{out_folder}/images\",\n",
    "#     labels_dir=f\"{out_folder}/labels\",\n",
    "#     output_dir=f\"{out_folder}/instance_models_rgbn\",\n",
    "#     num_classes=2,\n",
    "#     num_channels=4,  # RGB + NIR\n",
    "#     batch_size=4,\n",
    "#     num_epochs=10,\n",
    "#     learning_rate=0.005,\n",
    "#     val_split=0.2,\n",
    "#     verbose=True,\n",
    "# )"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "geo",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.12.2"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
