{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Creating Training Data for Deep Learning\n",
    "\n",
    "[![image](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/opengeos/geoai/blob/main/docs/examples/create_training_data.ipynb)\n",
    "\n",
    "\n",
    "This notebook demonstrates how to create training data (image and mask tiles) from georeferenced imagery and vector annotations using the improved `export_geotiff_tiles_batch` function.\n",
    "\n",
    "The function now supports three different input modes:\n",
    "1. **Single vector file covering all images** - Most efficient for large annotation files\n",
    "2. **Multiple vector files matched by filename** - Good for paired datasets\n",
    "3. **Multiple vector files matched by sorted order** - Good for sequential datasets\n",
    "\n",
    "## Install package\n",
    "\n",
    "To use the `geoai-py` package, ensure it is installed in your environment. Uncomment the command below if needed."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# %pip install geoai-py"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Setup\n",
    "\n",
    "Import the required functions and check the sample data structure."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import os\n",
    "import geoai"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Download Sample Data"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "url = \"https://huggingface.co/datasets/giswqs/geospatial/resolve/main/naip_rgb_train_tiles.zip\"\n",
    "download_dir = geoai.download_file(url)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Explore Sample Data\n",
    "\n",
    "The sample data contains:\n",
    "- **images/**: Two NAIP RGB image tiles\n",
    "- **masks1/**: Single GeoJSON file with all building annotations\n",
    "- **masks2/**: Separate GeoJSON files for each image tile"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# List available data\n",
    "data_dir = os.path.join(download_dir, \"data\")\n",
    "\n",
    "print(\"Images:\")\n",
    "for f in sorted(os.listdir(f\"{data_dir}/images\")):\n",
    "    print(f\"  - {f}\")\n",
    "\n",
    "print(\"\\nMasks (single file):\")\n",
    "for f in sorted(os.listdir(f\"{data_dir}/masks1\")):\n",
    "    print(f\"  - {f}\")\n",
    "\n",
    "print(\"\\nMasks (multiple files):\")\n",
    "for f in sorted(os.listdir(f\"{data_dir}/masks2\")):\n",
    "    print(f\"  - {f}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Visualize Sample Image and Annotations\n",
    "\n",
    "Let's look at one of the images and its corresponding building annotations."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Load and display first image\n",
    "image_path = f\"{data_dir}/images/naip_rgb_train_tile1.tif\"\n",
    "mask_path = f\"{data_dir}/masks2/naip_rgb_train_tile1.geojson\"\n",
    "\n",
    "fig, axes, info = geoai.display_image_with_vector(image_path, mask_path)\n",
    "print(f\"Number of buildings: {info['num_features']}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "![image](https://github.com/user-attachments/assets/70b51e79-369a-4ff5-960c-f693938c1f99)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Method 1: Single Vector File Covering All Images\n",
    "\n",
    "This is the most efficient method when you have one large annotation file covering multiple image tiles. The function automatically:\n",
    "- Loads the vector file once\n",
    "- Spatially filters features for each image based on bounds\n",
    "- Generates tiles only where features exist"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Use single mask file for all images\n",
    "stats = geoai.export_geotiff_tiles_batch(\n",
    "    images_folder=f\"{data_dir}/images\",\n",
    "    masks_file=f\"{data_dir}/masks1/naip_train_buildings.geojson\",\n",
    "    output_folder=\"output/method1_single_mask\",\n",
    "    tile_size=256,\n",
    "    stride=256,  # No overlap\n",
    "    class_value_field=\"class\",\n",
    "    skip_empty_tiles=True,  # Skip tiles with no buildings\n",
    "    max_tiles=20,  # Limit for demo purposes\n",
    "    quiet=False,\n",
    ")\n",
    "\n",
    "print(f\"\\n{'='*60}\")\n",
    "print(\"Results:\")\n",
    "print(f\"  Images processed: {stats['processed_pairs']}\")\n",
    "print(f\"  Total tiles generated: {stats['total_tiles']}\")\n",
    "print(f\"  Tiles with features: {stats['tiles_with_features']}\")\n",
    "print(\n",
    "    f\"  Feature percentage: {stats['tiles_with_features']/stats['total_tiles']*100:.1f}%\"\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Method 2: Multiple Vector Files Matched by Sorted Order\n",
    "\n",
    "This method pairs images and masks alphabetically by sorted order. The 1st image pairs with the 1st mask, 2nd with 2nd, etc."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Use multiple mask files matched by sorted order\n",
    "stats = geoai.export_geotiff_tiles_batch(\n",
    "    images_folder=f\"{data_dir}/images\",\n",
    "    masks_folder=f\"{data_dir}/masks2\",\n",
    "    output_folder=\"output/method2_sorted_order\",\n",
    "    tile_size=256,\n",
    "    stride=256,\n",
    "    class_value_field=\"class\",\n",
    "    skip_empty_tiles=True,\n",
    "    match_by_name=False,  # Match by sorted order\n",
    "    max_tiles=20,\n",
    ")\n",
    "\n",
    "print(f\"\\n{'='*60}\")\n",
    "print(\"Results:\")\n",
    "print(f\"  Images processed: {stats['processed_pairs']}\")\n",
    "print(f\"  Total tiles generated: {stats['total_tiles']}\")\n",
    "print(f\"  Tiles with features: {stats['tiles_with_features']}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Method 3: Multiple Vector Files Matched by Filename\n",
    "\n",
    "This method pairs images and masks by matching their base filenames (e.g., `image1.tif` → `image1.geojson`).\n",
    "\n",
    "**Note**: This requires images and masks to have matching base names. The sample dataset doesn't have matching names, so this example creates a compatible structure first."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "stats = geoai.export_geotiff_tiles_batch(\n",
    "    images_folder=\"data/images\",\n",
    "    masks_folder=\"data/masks2\",\n",
    "    output_folder=\"output/method3_filename_match\",\n",
    "    tile_size=256,\n",
    "    stride=256,\n",
    "    class_value_field=\"class\",\n",
    "    skip_empty_tiles=True,\n",
    "    match_by_name=True,  # Match by filename\n",
    ")\n",
    "\n",
    "print(\"Method 3 requires matching base filenames between images and masks.\")\n",
    "print(\"Example: 'image001.tif' pairs with 'image001.geojson'\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Visualize Generated Tiles\n",
    "\n",
    "Let's look at some of the generated training tiles."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "output_dir = \"output/method1_single_mask\"\n",
    "fig = geoai.display_training_tiles(output_dir, num_tiles=6)"
   ]
  },
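  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick sanity check, the exported `images/` and `masks/` folders should contain tiles that pair up by base filename. The cell below is a minimal sketch: the `pair_tiles` helper is illustrative, not part of `geoai`, and it assumes the Method 1 export above has run."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import os\n",
    "\n",
    "\n",
    "def pair_tiles(image_names, mask_names):\n",
    "    \"\"\"Pair image and mask tiles that share a base filename (stem).\"\"\"\n",
    "    masks = {os.path.splitext(m)[0]: m for m in mask_names}\n",
    "    return [\n",
    "        (img, masks[os.path.splitext(img)[0]])\n",
    "        for img in sorted(image_names)\n",
    "        if os.path.splitext(img)[0] in masks\n",
    "    ]\n",
    "\n",
    "\n",
    "tile_dir = \"output/method1_single_mask\"\n",
    "if os.path.isdir(tile_dir):\n",
    "    pairs = pair_tiles(\n",
    "        os.listdir(os.path.join(tile_dir, \"images\")),\n",
    "        os.listdir(os.path.join(tile_dir, \"masks\")),\n",
    "    )\n",
    "    print(f\"Matched {len(pairs)} image/mask tile pairs\")"
   ]
  },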
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "![image](https://github.com/user-attachments/assets/8302e157-10b8-4a33-a4b4-882d446e8ebf)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Advanced Usage: Custom Parameters\n",
    "\n",
    "The function supports many parameters for customization:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Advanced example with custom parameters\n",
    "stats = geoai.export_geotiff_tiles_batch(\n",
    "    images_folder=f\"{data_dir}/images\",\n",
    "    masks_file=f\"{data_dir}/masks1/naip_train_buildings.geojson\",\n",
    "    output_folder=\"output/advanced_example\",\n",
    "    tile_size=512,  # Larger tiles\n",
    "    stride=256,  # 50% overlap for better coverage\n",
    "    class_value_field=\"class\",  # Field containing class labels\n",
    "    buffer_radius=0.5,  # Add 0.5m buffer around buildings\n",
    "    skip_empty_tiles=True,  # Skip tiles with no features\n",
    "    all_touched=True,  # Include pixels touching features\n",
    "    max_tiles=10,  # Limit number of tiles per image\n",
    "    quiet=False,  # Show progress\n",
    ")\n",
    "\n",
    "print(f\"\\nGenerated {stats['total_tiles']} tiles with 50% overlap\")\n",
    "print(f\"Output structure:\")\n",
    "print(f\"  - output/advanced_example/images/  (image tiles)\")\n",
    "print(f\"  - output/advanced_example/masks/   (mask tiles)\")"
   ]
  },
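  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To see why `tile_size=512` with `stride=256` gives 50% overlap, consider standard sliding-window arithmetic. The function below is a back-of-the-envelope sketch; `export_geotiff_tiles_batch`'s exact edge handling may differ."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import math\n",
    "\n",
    "\n",
    "def sliding_window_count(length, tile_size, stride):\n",
    "    \"\"\"Number of window positions along one axis of an image.\"\"\"\n",
    "    if length <= tile_size:\n",
    "        return 1\n",
    "    return math.ceil((length - tile_size) / stride) + 1\n",
    "\n",
    "\n",
    "# stride == tile_size -> no overlap; stride == tile_size // 2 -> 50% overlap\n",
    "for stride in (512, 256):\n",
    "    n = sliding_window_count(2048, 512, stride)\n",
    "    print(f\"stride={stride}: {n} x {n} = {n * n} tiles for a 2048-pixel image\")"
   ]
  },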
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Train a Segmentation Model"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "geoai.train_segmentation_model(\n",
    "    images_dir=f\"output/method3_filename_match/images\",\n",
    "    labels_dir=f\"output/method3_filename_match/masks\",\n",
    "    output_dir=f\"output/unet_models\",\n",
    "    architecture=\"unet\",\n",
    "    encoder_name=\"resnet34\",\n",
    "    encoder_weights=\"imagenet\",\n",
    "    num_channels=3,\n",
    "    num_classes=2,  # background and building\n",
    "    batch_size=8,\n",
    "    num_epochs=20,\n",
    "    learning_rate=0.001,\n",
    "    val_split=0.2,\n",
    "    verbose=True,\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "geoai.plot_performance_metrics(\n",
    "    history_path=f\"output/unet_models/training_history.pth\",\n",
    "    figsize=(15, 5),\n",
    "    verbose=True,\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "![image](https://github.com/user-attachments/assets/98db1249-b478-4b3b-90eb-87bf3569fc78)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Summary\n",
    "\n",
    "The improved `export_geotiff_tiles_batch` function provides flexible options for creating training data:\n",
    "\n",
    "| Method | Use Case | Parameter |\n",
    "|--------|----------|------------|\n",
    "| Single vector file | One annotation file covering all images | `masks_file=\"path/to/file.geojson\"` |\n",
    "| Multiple files (by name) | Paired files with matching names | `masks_folder=\"path/to/masks\", match_by_name=True` |\n",
    "| Multiple files (by order) | Paired files in sorted order | `masks_folder=\"path/to/masks\", match_by_name=False` |\n",
    "\n",
    "**Key Features:**\n",
    "- Supports both raster and vector masks\n",
    "- Automatic CRS reprojection\n",
    "- Spatial filtering for single mask files\n",
    "- Configurable tile size, stride, and overlap\n",
    "- Optional empty tile filtering\n",
    "- Buffer support for vector annotations\n",
    "- Detailed statistics reporting"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "geo",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.12.2"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
