{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "0",
   "metadata": {},
   "source": [
    "# Export Training Data in Multiple Formats (PASCAL VOC, COCO, YOLO)\n",
    "\n",
    "[![image](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/opengeos/geoai/blob/main/docs/examples/export_training_data_formats.ipynb)\n",
    "\n",
    "This notebook demonstrates how to export geospatial training data in three popular object detection formats:\n",
    "\n",
    "- **PASCAL VOC**: XML-based format, widely used in computer vision\n",
    "- **COCO**: JSON-based format, standard for object detection benchmarks\n",
    "- **YOLO**: Text-based format with normalized coordinates, optimized for YOLO models\n",
    "\n",
    "## Install packages\n",
    "\n",
    "Ensure the required packages are installed."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "1",
   "metadata": {},
   "outputs": [],
   "source": [
    "# %pip install geoai-py"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "2",
   "metadata": {},
   "source": [
    "## Import libraries"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "3",
   "metadata": {},
   "outputs": [],
   "source": [
    "import geoai\n",
    "import json\n",
    "from pathlib import Path"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "4",
   "metadata": {},
   "source": [
    "## Download sample data\n",
    "\n",
    "We'll use the same building detection dataset from the segmentation example."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "5",
   "metadata": {},
   "outputs": [],
   "source": [
    "train_raster_url = (\n",
    "    \"https://huggingface.co/datasets/giswqs/geospatial/resolve/main/naip_rgb_train.tif\"\n",
    ")\n",
    "train_vector_url = \"https://huggingface.co/datasets/giswqs/geospatial/resolve/main/naip_train_buildings.geojson\""
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "6",
   "metadata": {},
   "outputs": [],
   "source": [
    "train_raster_path = geoai.download_file(train_raster_url)\n",
    "train_vector_path = geoai.download_file(train_vector_url)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "7",
   "metadata": {},
   "source": [
    "## Visualize sample data"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "8",
   "metadata": {},
   "outputs": [],
   "source": [
    "geoai.get_raster_info(train_raster_path)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "9",
   "metadata": {},
   "outputs": [],
   "source": [
    "geoai.view_vector_interactive(train_vector_path, tiles=train_raster_path)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "10",
   "metadata": {},
   "source": [
    "## Format 1: PASCAL VOC (XML)\n",
    "\n",
    "PASCAL VOC format stores annotations in XML files with bounding boxes and class labels. This is the default format and is widely used in traditional object detection frameworks.\n",
    "\n",
    "**Output structure:**\n",
    "```\n",
    "pascal_voc_output/\n",
    "├── images/          # GeoTIFF tiles\n",
    "├── labels/          # Label masks (GeoTIFF)\n",
    "└── annotations/     # XML annotation files\n",
    "```"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "11",
   "metadata": {},
   "outputs": [],
   "source": [
    "pascal_output = \"buildings_pascal_voc\"\n",
    "\n",
    "stats = geoai.export_geotiff_tiles(\n",
    "    in_raster=train_raster_path,\n",
    "    out_folder=pascal_output,\n",
    "    in_class_data=train_vector_path,\n",
    "    tile_size=512,\n",
    "    stride=256,\n",
    "    buffer_radius=0,\n",
    "    metadata_format=\"PASCAL_VOC\",\n",
    "    # max_tiles=10,  # Limit for demo purposes\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "12",
   "metadata": {},
   "source": [
    "### Examine PASCAL VOC output"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "13",
   "metadata": {},
   "outputs": [],
   "source": [
    "# List annotation files\n",
    "xml_files = list(Path(f\"{pascal_output}/annotations\").glob(\"*.xml\"))\n",
    "print(f\"Found {len(xml_files)} XML annotation files\")\n",
    "\n",
    "# Display first annotation file\n",
    "if xml_files:\n",
    "    with open(xml_files[0], \"r\") as f:\n",
    "        print(f\"\\nSample annotation ({xml_files[0].name}):\\n\")\n",
    "        print(f.read())"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "14",
   "metadata": {},
   "source": [
    "## Format 2: COCO (JSON)\n",
    "\n",
    "COCO format uses a single JSON file containing all annotations, images, and categories. This is the standard format for modern object detection benchmarks.\n",
    "\n",
    "**Output structure:**\n",
    "```\n",
    "coco_output/\n",
    "├── images/              # GeoTIFF tiles\n",
    "├── labels/              # Label masks (GeoTIFF)\n",
    "└── annotations/\n",
    "    └── instances.json   # COCO annotations\n",
    "```\n",
    "\n",
    "**COCO JSON structure:**\n",
    "```json\n",
    "{\n",
    "  \"images\": [{\"id\": 0, \"file_name\": \"tile_000000.tif\", \"width\": 512, \"height\": 512}],\n",
    "  \"annotations\": [{\"id\": 1, \"image_id\": 0, \"category_id\": 1, \"bbox\": [x, y, w, h]}],\n",
    "  \"categories\": [{\"id\": 1, \"name\": \"building\", \"supercategory\": \"object\"}]\n",
    "}\n",
    "```"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "15",
   "metadata": {},
   "outputs": [],
   "source": [
    "coco_output = \"buildings_coco\"\n",
    "\n",
    "stats = geoai.export_geotiff_tiles(\n",
    "    in_raster=train_raster_path,\n",
    "    out_folder=coco_output,\n",
    "    in_class_data=train_vector_path,\n",
    "    tile_size=512,\n",
    "    stride=256,\n",
    "    buffer_radius=0,\n",
    "    metadata_format=\"COCO\",\n",
    "    # max_tiles=10,\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "16",
   "metadata": {},
   "source": [
    "### Examine COCO output"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "17",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Load COCO annotations\n",
    "coco_file = f\"{coco_output}/annotations/instances.json\"\n",
    "with open(coco_file, \"r\") as f:\n",
    "    coco_data = json.load(f)\n",
    "\n",
    "print(\"COCO Dataset Summary:\")\n",
    "print(f\"  Images: {len(coco_data['images'])}\")\n",
    "print(f\"  Annotations: {len(coco_data['annotations'])}\")\n",
    "print(f\"  Categories: {len(coco_data['categories'])}\")\n",
    "\n",
    "# Display categories\n",
    "print(\"\\nCategories:\")\n",
    "for cat in coco_data[\"categories\"]:\n",
    "    print(f\"  {cat}\")\n",
    "\n",
    "# Display first image\n",
    "if coco_data[\"images\"]:\n",
    "    print(\"\\nFirst image:\")\n",
    "    print(f\"  {coco_data['images'][0]}\")\n",
    "\n",
    "# Display first annotation\n",
    "if coco_data[\"annotations\"]:\n",
    "    print(\"\\nFirst annotation:\")\n",
    "    print(f\"  {coco_data['annotations'][0]}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "18",
   "metadata": {},
   "source": [
    "## Format 3: YOLO (Text)\n",
    "\n",
    "YOLO format uses text files with normalized bounding box coordinates. Each image has a corresponding `.txt` file with one line per object.\n",
    "\n",
    "**Output structure:**\n",
    "```\n",
    "yolo_output/\n",
    "├── images/           # GeoTIFF tiles\n",
    "├── labels/           # Label masks (GeoTIFF) + YOLO .txt files\n",
    "└── classes.txt       # Class names (one per line)\n",
    "```\n",
    "\n",
    "**YOLO annotation format (normalized coordinates 0-1):**\n",
    "```\n",
    "<class_id> <x_center> <y_center> <width> <height>\n",
    "0 0.5 0.5 0.3 0.2\n",
    "```"
   ]
  },
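  {
   "cell_type": "markdown",
   "id": "18b",
   "metadata": {},
   "source": [
    "To make the normalized coordinates concrete, here is a minimal sketch that converts one YOLO annotation line back to pixel corners on a 512x512 tile. The helper `yolo_to_pixel_bbox` is illustrative, not part of the geoai API:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "18c",
   "metadata": {},
   "outputs": [],
   "source": [
    "def yolo_to_pixel_bbox(line, img_width, img_height):\n",
    "    \"\"\"Convert one YOLO annotation line to (class_id, xmin, ymin, xmax, ymax) in pixels.\"\"\"\n",
    "    class_id, xc, yc, w, h = line.split()\n",
    "    xc, w = float(xc) * img_width, float(w) * img_width\n",
    "    yc, h = float(yc) * img_height, float(h) * img_height\n",
    "    return int(class_id), xc - w / 2, yc - h / 2, xc + w / 2, yc + h / 2\n",
    "\n",
    "\n",
    "# The sample annotation shown above, denormalized for a 512x512 tile\n",
    "yolo_to_pixel_bbox(\"0 0.5 0.5 0.3 0.2\", 512, 512)"
   ]
  },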
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "19",
   "metadata": {},
   "outputs": [],
   "source": [
    "yolo_output = \"buildings_yolo\"\n",
    "\n",
    "stats = geoai.export_geotiff_tiles(\n",
    "    in_raster=train_raster_path,\n",
    "    out_folder=yolo_output,\n",
    "    in_class_data=train_vector_path,\n",
    "    tile_size=512,\n",
    "    stride=256,\n",
    "    buffer_radius=0,\n",
    "    metadata_format=\"YOLO\",\n",
    "    # max_tiles=10,\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "20",
   "metadata": {},
   "source": [
    "### Examine YOLO output"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "21",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Load classes\n",
    "classes_file = f\"{yolo_output}/classes.txt\"\n",
    "with open(classes_file, \"r\") as f:\n",
    "    classes = f.read().strip().split(\"\\n\")\n",
    "\n",
    "print(f\"Classes ({len(classes)}):\")\n",
    "for i, cls in enumerate(classes):\n",
    "    print(f\"  {i}: {cls}\")\n",
    "\n",
    "# List annotation files\n",
    "txt_files = list(Path(f\"{yolo_output}/labels\").glob(\"*.txt\"))\n",
    "print(f\"\\nFound {len(txt_files)} YOLO annotation files\")\n",
    "\n",
    "# Display first annotation file\n",
    "if txt_files:\n",
    "    with open(txt_files[0], \"r\") as f:\n",
    "        lines = f.readlines()\n",
    "    print(f\"\\nSample annotation ({txt_files[0].name}):\")\n",
    "    print(\"  Format: <class_id> <x_center> <y_center> <width> <height>\")\n",
    "    for line in lines[:5]:  # Show first 5 objects\n",
    "        print(f\"  {line.strip()}\")\n",
    "    if len(lines) > 5:\n",
    "        print(f\"  ... and {len(lines) - 5} more objects\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "22",
   "metadata": {},
   "source": [
    "## Format Comparison\n",
    "\n",
    "### When to Use Each Format\n",
    "\n",
    "| Format | Best For | Pros | Cons |\n",
    "|--------|----------|------|------|\n",
    "| **PASCAL VOC** | Traditional CV frameworks, quick inspection | Human-readable XML, one file per image | Verbose, not ideal for large datasets |\n",
    "| **COCO** | Modern object detection, benchmarking, complex datasets | Efficient JSON, supports multiple annotation types | Single file can be large, requires parsing |\n",
    "| **YOLO** | YOLO models (v3-v8), real-time detection | Compact, fast to parse, normalized coordinates | Less human-readable, limited metadata |\n",
    "\n",
    "### Coordinate Systems\n",
    "\n",
    "- **PASCAL VOC**: Absolute pixel coordinates `[xmin, ymin, xmax, ymax]`\n",
    "- **COCO**: Absolute pixel coordinates `[x, y, width, height]` (top-left corner)\n",
    "- **YOLO**: Normalized coordinates `[x_center, y_center, width, height]` (0-1 range)\n",
    "\n",
    "### GeoAI Extensions\n",
    "\n",
    "All formats preserve geospatial information:\n",
    "- **PASCAL VOC**: CRS, transform, and bounds in `<georeference>` element\n",
    "- **COCO**: CRS and transform as custom fields in image metadata\n",
    "- **YOLO**: Georeferenced GeoTIFF tiles maintain spatial context"
   ]
  },
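  {
   "cell_type": "markdown",
   "id": "22b",
   "metadata": {},
   "source": [
    "Converting between the three conventions above is a few lines of arithmetic. The sketch below shows the two key transforms in plain Python; the function names `voc_to_coco` and `coco_to_yolo` are hypothetical helpers, not part of the geoai API:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "22c",
   "metadata": {},
   "outputs": [],
   "source": [
    "def voc_to_coco(xmin, ymin, xmax, ymax):\n",
    "    \"\"\"PASCAL VOC [xmin, ymin, xmax, ymax] -> COCO [x, y, width, height].\"\"\"\n",
    "    return [xmin, ymin, xmax - xmin, ymax - ymin]\n",
    "\n",
    "\n",
    "def coco_to_yolo(x, y, w, h, img_width, img_height):\n",
    "    \"\"\"COCO pixel box -> YOLO normalized [x_center, y_center, width, height].\"\"\"\n",
    "    return [(x + w / 2) / img_width, (y + h / 2) / img_height, w / img_width, h / img_height]\n",
    "\n",
    "\n",
    "# A 100x50 pixel box with top-left corner at (200, 150) in a 512x512 tile\n",
    "coco_box = voc_to_coco(200, 150, 300, 200)\n",
    "yolo_box = coco_to_yolo(*coco_box, 512, 512)\n",
    "print(\"COCO:\", coco_box)\n",
    "print(\"YOLO:\", [round(v, 4) for v in yolo_box])"
   ]
  },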
  {
   "cell_type": "markdown",
   "id": "23",
   "metadata": {},
   "source": [
    "## Multi-Class Example\n",
    "\n",
    "The formats also support multi-class datasets. Here's how class information is stored:\n",
    "\n",
    "**PASCAL VOC:**\n",
    "```xml\n",
    "<object>\n",
    "  <name>building</name>\n",
    "  <bndbox>...</bndbox>\n",
    "</object>\n",
    "```\n",
    "\n",
    "**COCO:**\n",
    "```json\n",
    "{\n",
    "  \"categories\": [\n",
    "    {\"id\": 1, \"name\": \"building\", \"supercategory\": \"object\"},\n",
    "    {\"id\": 2, \"name\": \"road\", \"supercategory\": \"object\"}\n",
    "  ]\n",
    "}\n",
    "```\n",
    "\n",
    "**YOLO:**\n",
    "```\n",
    "classes.txt:\n",
    "building\n",
    "road\n",
    "\n",
    "annotations:\n",
    "0 0.5 0.5 0.3 0.2  # class_id 0 = building\n",
    "1 0.7 0.3 0.2 0.1  # class_id 1 = road\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "24",
   "metadata": {},
   "source": [
    "## Summary\n",
    "\n",
    "The `export_geotiff_tiles` function now supports three popular annotation formats:\n",
    "\n",
    "- ✅ **PASCAL VOC** (XML) - Traditional, human-readable\n",
    "- ✅ **COCO** (JSON) - Modern benchmark standard\n",
    "- ✅ **YOLO** (TXT) - Lightweight, optimized for YOLO\n",
    "\n",
    "All formats maintain geospatial context through georeferenced GeoTIFF tiles, making them ideal for training object detection models on remote sensing imagery.\n",
    "\n",
    "Choose the format that best fits your model training framework:\n",
    "- Use **COCO** for Detectron2, MMDetection, or benchmark comparisons\n",
    "- Use **YOLO** for YOLOv5, YOLOv8, or the Ultralytics framework\n",
    "- Use **PASCAL VOC** for TensorFlow Object Detection API or legacy frameworks"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "25",
   "metadata": {},
   "source": [
    "## Using Exported Data for Training\n",
    "\n",
    "The training functions in GeoAI now support all three annotation formats directly! Here's how to use them for training models."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "26",
   "metadata": {},
   "source": [
    "### Training with COCO Format\n",
    "\n",
    "Use `input_format=\"coco\"` and point `labels_dir` to the `instances.json` file:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "27",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Train semantic segmentation model with COCO format\n",
    "geoai.train_segmentation_model(\n",
    "    images_dir=f\"{coco_output}/images\",\n",
    "    labels_dir=f\"{coco_output}/annotations/instances.json\",  # Path to COCO JSON\n",
    "    output_dir=\"models_coco\",\n",
    "    input_format=\"coco\",  # Specify COCO format\n",
    "    architecture=\"unet\",\n",
    "    encoder_name=\"resnet34\",\n",
    "    num_epochs=20,  # Reduced for demo\n",
    "    batch_size=8,\n",
    "    verbose=True,\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "28",
   "metadata": {},
   "outputs": [],
   "source": [
    "geoai.plot_performance_metrics(\n",
    "    history_path=\"models_coco/training_history.pth\",\n",
    "    figsize=(15, 5),\n",
    "    verbose=True,\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "29",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Train instance segmentation model with COCO format\n",
    "geoai.train_instance_segmentation_model(\n",
    "    images_dir=f\"{coco_output}/images\",\n",
    "    labels_dir=f\"{coco_output}/annotations/instances.json\",\n",
    "    output_dir=\"models_maskrcnn_coco\",\n",
    "    input_format=\"coco\",\n",
    "    num_epochs=20,\n",
    "    batch_size=8,\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "30",
   "metadata": {},
   "outputs": [],
   "source": [
    "geoai.plot_performance_metrics(\n",
    "    history_path=\"models_maskrcnn_coco/training_history.pth\",\n",
    "    figsize=(15, 5),\n",
    "    verbose=True,\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "31",
   "metadata": {},
   "source": [
    "### Training with YOLO Format\n",
    "\n",
    "Use `input_format=\"yolo\"` and point `images_dir` to the root directory containing `images/` and `labels/` subdirectories:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "32",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Train semantic segmentation model with YOLO format\n",
    "geoai.train_segmentation_model(\n",
    "    images_dir=yolo_output,  # Root directory containing images/ and labels/\n",
    "    labels_dir=\"\",  # Not used for YOLO format\n",
    "    output_dir=\"models_yolo\",\n",
    "    input_format=\"yolo\",  # Specify YOLO format\n",
    "    architecture=\"unet\",\n",
    "    encoder_name=\"resnet34\",\n",
    "    num_epochs=20,\n",
    "    batch_size=8,\n",
    "    verbose=True,\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "33",
   "metadata": {},
   "outputs": [],
   "source": [
    "geoai.plot_performance_metrics(\n",
    "    history_path=\"models_yolo/training_history.pth\",\n",
    "    figsize=(15, 5),\n",
    "    verbose=True,\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "34",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Train instance segmentation model with YOLO format\n",
    "geoai.train_instance_segmentation_model(\n",
    "    images_dir=yolo_output,\n",
    "    labels_dir=\"\",\n",
    "    output_dir=\"models_maskrcnn_yolo\",\n",
    "    input_format=\"yolo\",\n",
    "    num_epochs=20,\n",
    "    batch_size=8,\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "35",
   "metadata": {},
   "outputs": [],
   "source": [
    "geoai.plot_performance_metrics(\n",
    "    history_path=\"models_maskrcnn_yolo/training_history.pth\",\n",
    "    figsize=(15, 5),\n",
    "    verbose=True,\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "36",
   "metadata": {},
   "source": [
    "### Training with Directory Format (Default)\n",
    "\n",
    "The default behavior uses separate `images_dir` and `labels_dir` directories:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "37",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Standard directory format (default behavior)\n",
    "geoai.train_segmentation_model(\n",
    "    images_dir=f\"{pascal_output}/images\",\n",
    "    labels_dir=f\"{pascal_output}/labels\",\n",
    "    output_dir=\"models_directory\",\n",
    "    # input_format=\"directory\" is the default, can be omitted\n",
    "    architecture=\"unet\",\n",
    "    encoder_name=\"resnet34\",\n",
    "    num_epochs=20,\n",
    "    batch_size=8,\n",
    "    verbose=True,\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "38",
   "metadata": {},
   "outputs": [],
   "source": [
    "geoai.plot_performance_metrics(\n",
    "    history_path=\"models_directory/training_history.pth\",\n",
    "    figsize=(15, 5),\n",
    "    verbose=True,\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "39",
   "metadata": {},
   "source": [
    "## Training Summary\n",
    "\n",
    "Both `train_segmentation_model()` and `train_instance_segmentation_model()` accept the `input_format` parameter to load data in any of these formats:\n",
    "\n",
    "| Input Format | `input_format` Value | `images_dir` | `labels_dir` |\n",
    "|--------------|---------------------|--------------|--------------|\n",
    "| **COCO** | `\"coco\"` | Path to images directory | Path to `instances.json` |\n",
    "| **YOLO** | `\"yolo\"` | Root directory with `images/` and `labels/` | Empty string `\"\"` or not used |\n",
    "| **Directory** | `\"directory\"` (default) | Path to images directory | Path to labels directory |\n",
    "\n",
    "### Benefits\n",
    "\n",
    "- **Maximum Flexibility**: Use any annotation format without conversion\n",
    "- **Geospatial Preservation**: All formats maintain georeferencing through GeoTIFF tiles\n",
    "- **Framework Compatibility**: Export in one format, train in another\n",
    "- **Consistent API**: Same training functions work with all formats\n",
    "\n",
    "### Example Workflow\n",
    "\n",
    "1. Export training data in COCO format for sharing with collaborators\n",
    "2. Export same data in YOLO format for YOLOv8 experiments\n",
    "3. Train both semantic and instance segmentation models on the same data\n",
    "4. Deploy on satellite imagery, with full geospatial context maintained throughout\n",
    "\n",
    "This provides a complete end-to-end workflow for geospatial deep learning!"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "40",
   "metadata": {},
   "source": [
    "## Using TIMM Models with Multiple Formats\n",
    "\n",
    "The `train_timm_segmentation_model()` function also supports all three annotation formats, providing access to a wider range of encoder backbones from the TIMM library (e.g., EfficientNet, ConvNeXt, Swin Transformer):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "41",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Train with TIMM encoder using COCO format\n",
    "geoai.train_timm_segmentation_model(\n",
    "    images_dir=f\"{coco_output}/images\",\n",
    "    labels_dir=f\"{coco_output}/annotations/instances.json\",\n",
    "    output_dir=\"models_timm_coco\",\n",
    "    input_format=\"coco\",  # Specify COCO format\n",
    "    encoder_name=\"efficientnet-b3\",  # TIMM encoder\n",
    "    architecture=\"unet\",\n",
    "    encoder_weights=\"imagenet\",\n",
    "    num_epochs=20,\n",
    "    batch_size=8,\n",
    "    verbose=True,\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "42",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Or with YOLO format\n",
    "geoai.train_timm_segmentation_model(\n",
    "    images_dir=yolo_output,\n",
    "    labels_dir=\"\",\n",
    "    output_dir=\"models_timm_yolo\",\n",
    "    input_format=\"yolo\",\n",
    "    encoder_name=\"efficientnet-b3\",\n",
    "    num_epochs=20,\n",
    ")"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "geo",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.12.2"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
