{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "0",
   "metadata": {},
   "source": [
    "# Image Captioning\n",
    "\n",
    "[![image](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/opengeos/segment-geospatial/blob/main/docs/examples/image_captioning.ipynb)\n",
    "\n",
    "This notebook demonstrates how to perform image captioning and feature extraction using the [BLIP](https://huggingface.co/Salesforce/blip-image-captioning-base) model and [spaCy](https://spacy.io/) NLP processing. The `ImageCaptioner` class provides a convenient interface for:\n",
    "\n",
    "- Generating captions for images from local files or URLs\n",
    "- Extracting meaningful features (nouns) from captions\n",
     "- Filtering features using a predefined aerial vocabulary or custom lists"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "1",
   "metadata": {},
   "source": [
    "## Installation\n",
    "\n",
    "Uncomment the following line to install the required packages if needed.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "2",
   "metadata": {},
   "outputs": [],
   "source": [
    "# %pip install \"segment-geospatial[samgeo3]\""
   ]
  },
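  {
   "cell_type": "markdown",
   "id": "2a",
   "metadata": {},
   "source": [
    "The default spaCy pipeline `en_core_web_sm` is distributed separately from spaCy itself. If it is not already present in your environment, uncomment the following line to download it."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "2b",
   "metadata": {},
   "outputs": [],
   "source": [
    "# !python -m spacy download en_core_web_sm"
   ]
  },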
  {
   "cell_type": "markdown",
   "id": "3",
   "metadata": {},
   "source": [
    "## Import Libraries\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "4",
   "metadata": {},
   "outputs": [],
   "source": [
    "from samgeo.caption import ImageCaptioner, blip_analyze_image, show_image"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "5",
   "metadata": {},
   "source": [
    "## Initialize the ImageCaptioner\n",
    "\n",
    "Create an `ImageCaptioner` instance. You can customize the models used:\n",
    "\n",
    "- `blip_model_name`: The BLIP model for caption generation (default: `\"Salesforce/blip-image-captioning-base\"`)\n",
    "- `spacy_model_name`: The spaCy model for NLP processing (default: `\"en_core_web_sm\"`)\n",
    "- `device`: The device to run inference on (`\"cuda\"`, `\"mps\"`, or `\"cpu\"`). Auto-detected if not specified.\n",
    "\n",
    "Available BLIP models:\n",
    "- `Salesforce/blip-image-captioning-base` (default, ~990MB)\n",
    "- `Salesforce/blip-image-captioning-large` (larger, more accurate, ~1.9GB)\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "6",
   "metadata": {},
   "outputs": [],
   "source": [
    "captioner = ImageCaptioner(\n",
    "    blip_model_name=\"Salesforce/blip-image-captioning-base\",\n",
    "    spacy_model_name=\"en_core_web_sm\",\n",
    ")"
   ]
  },
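  {
   "cell_type": "markdown",
   "id": "6a",
   "metadata": {},
   "source": [
    "If auto-detection picks the wrong device, or you want to force a particular backend, you can pass `device` explicitly, as sketched below."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "6b",
   "metadata": {},
   "outputs": [],
   "source": [
    "# For example, to force CPU inference:\n",
    "# captioner = ImageCaptioner(device=\"cpu\")"
   ]
  },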
  {
   "cell_type": "markdown",
   "id": "7",
   "metadata": {},
   "source": [
    "## Example 1: Building Image\n",
    "\n",
    "Let's analyze an aerial image of a building."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "8",
   "metadata": {},
   "outputs": [],
   "source": [
    "url1 = \"https://huggingface.co/datasets/giswqs/geospatial/resolve/main/caption-building.webp\"\n",
    "show_image(url1)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "9",
   "metadata": {},
   "source": [
    "### Basic Analysis\n",
    "\n",
    "Use the `analyze()` method to generate a caption and extract all noun features."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "10",
   "metadata": {},
   "outputs": [],
   "source": [
    "caption, features = captioner.analyze(url1)\n",
    "print(f\"Caption: {caption}\")\n",
    "print(f\"Features: {features}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "11",
   "metadata": {},
   "source": [
    "### Using Aerial Features Vocabulary\n",
    "\n",
    "Set `include_features=\"default\"` to filter features using a predefined aerial/geospatial vocabulary available [here](https://huggingface.co/datasets/giswqs/geospatial/blob/main/aerial_features.json). This helps identify features relevant to remote sensing and aerial imagery analysis."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "12",
   "metadata": {},
   "outputs": [],
   "source": [
    "caption, features = captioner.analyze(url1, include_features=\"default\")\n",
    "print(f\"Caption: {caption}\")\n",
    "print(f\"Aerial Features: {features}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "13",
   "metadata": {},
   "source": [
    "### Custom Feature Filtering\n",
    "\n",
     "You can also restrict extraction to a custom list of terms with `include_features`, or drop unwanted terms with `exclude_features`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "14",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Look only for specific features\n",
    "caption, features = captioner.analyze(\n",
    "    url1, include_features=[\"building\", \"parking_lot\", \"road\", \"car\", \"tree\"]\n",
    ")\n",
    "print(f\"Caption: {caption}\")\n",
    "print(f\"Custom Features: {features}\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "15",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Exclude certain features from results\n",
    "caption, features = captioner.analyze(url1, exclude_features=[\"view\", \"image\"])\n",
    "print(f\"Caption: {caption}\")\n",
    "print(f\"Features (excluding 'view', 'image'): {features}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "16",
   "metadata": {},
   "source": [
    "## Example 2: Traffic Sign Image\n",
    "\n",
     "Let's analyze a different type of image: a traffic sign.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "17",
   "metadata": {},
   "outputs": [],
   "source": [
    "url2 = \"https://huggingface.co/datasets/giswqs/geospatial/resolve/main/caption-traffic-sign.webp\"\n",
    "show_image(url2)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "18",
   "metadata": {},
   "outputs": [],
   "source": [
    "caption, features = captioner.analyze(url2)\n",
    "print(f\"Caption: {caption}\")\n",
    "print(f\"Features: {features}\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "19",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Using aerial vocabulary\n",
    "caption, features = captioner.analyze(url2, include_features=\"default\")\n",
    "print(f\"Caption: {caption}\")\n",
    "print(f\"Aerial Features: {features}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "20",
   "metadata": {},
   "source": [
    "## Using Individual Methods\n",
    "\n",
    "The `ImageCaptioner` class also provides individual methods for more granular control:\n",
    "\n",
    "- `generate_caption()`: Generate only the caption\n",
    "- `extract_features()`: Extract features from an existing caption\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "21",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Generate caption only\n",
    "caption = captioner.generate_caption(url1)\n",
    "print(f\"Caption: {caption}\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "22",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Extract features from an existing caption\n",
    "features = captioner.extract_features(caption)\n",
    "print(f\"All Features: {features}\")\n",
    "\n",
    "aerial_features = captioner.extract_features(caption, include_features=\"default\")\n",
    "print(f\"Aerial Features: {aerial_features}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "23",
   "metadata": {},
   "source": [
    "## Using the Convenience Function\n",
    "\n",
    "For quick one-off analyses, you can use the `blip_analyze_image()` function directly without creating an `ImageCaptioner` instance. You can also specify custom models."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "24",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Quick analysis with default models\n",
    "caption, features = blip_analyze_image(url1)\n",
    "print(f\"Caption: {caption}\")\n",
    "print(f\"Features: {features}\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "25",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Using a larger BLIP model for potentially better captions\n",
    "caption, features = blip_analyze_image(\n",
    "    url1,\n",
    "    include_features=\"default\",\n",
    "    blip_model_name=\"Salesforce/blip-image-captioning-large\",\n",
    ")\n",
    "print(f\"Caption (large model): {caption}\")\n",
    "print(f\"Aerial Features: {features}\")"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "geo",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.12.12"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
