{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Cleaning Segmentation Results with MultiClean\n",
    "\n",
    "[![image](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/opengeos/geoai/blob/main/docs/examples/clean_segmentation_results.ipynb)\n",
    "\n",
    "This notebook demonstrates how to use the MultiClean integration in GeoAI to post-process and clean segmentation results. MultiClean applies morphological operations to:\n",
    "\n",
    "- **Smooth edges** - Reduce jagged boundaries using morphological opening\n",
    "- **Remove noise** - Eliminate small isolated components (islands)\n",
    "- **Fill gaps** - Replace invalid pixels with nearest valid class\n",
    "\n",
    "MultiClean is particularly useful for cleaning up noisy predictions from deep learning segmentation models.\n",
    "\n",
    "## Installation\n",
    "\n",
    "Uncomment the following line to install the required packages if needed."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# %pip install -U \"geoai-py[extra]\""
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Import Libraries"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "import matplotlib.pyplot as plt\n",
    "from matplotlib.colors import ListedColormap\n",
    "import rasterio\n",
    "from rasterio.transform import from_bounds\n",
    "import tempfile\n",
    "import os\n",
    "\n",
    "# Import GeoAI multiclean utilities\n",
    "# You can import from geoai directly (convenience imports)\n",
    "from geoai import (\n",
    "    clean_segmentation_mask,\n",
    "    clean_raster,\n",
    "    clean_raster_batch,\n",
    "    compare_masks,\n",
    ")\n",
    "\n",
    "# Or import from the tools subpackage directly\n",
    "# from geoai.tools.multiclean import (\n",
    "#     clean_segmentation_mask,\n",
    "#     clean_raster,\n",
    "#     clean_raster_batch,\n",
    "#     compare_masks,\n",
    "# )"
   ]
  },
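  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Before applying MultiClean, here is a minimal sketch of the island-removal idea it builds on, written with `scipy.ndimage.label` (an illustration of the concept, not MultiClean's actual implementation): connected components smaller than a size threshold are relabeled to the background."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "from scipy.ndimage import label\n",
    "\n",
    "demo = np.zeros((8, 8), dtype=int)\n",
    "demo[2:6, 2:6] = 1  # one large 16-pixel component\n",
    "demo[0, 7] = 1  # one single-pixel island\n",
    "\n",
    "labeled, n_components = label(demo == 1)  # 4-connectivity by default\n",
    "for comp in range(1, n_components + 1):\n",
    "    if (labeled == comp).sum() < 4:  # drop islands smaller than 4 pixels\n",
    "        demo[labeled == comp] = 0\n",
    "\n",
    "print(f\"Foreground pixels after island removal: {demo.sum()}\")"
   ]
  },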
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 1. Create a Synthetic Noisy Segmentation Mask\n",
    "\n",
    "First, let's create a synthetic segmentation mask with realistic noise patterns that might occur in deep learning predictions."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def create_noisy_segmentation(size=(512, 512), num_classes=3, noise_level=0.1):\n",
    "    \"\"\"\n",
    "    Create a synthetic segmentation mask with noise.\n",
    "\n",
    "    Args:\n",
    "        size: Tuple of (height, width)\n",
    "        num_classes: Number of segmentation classes\n",
    "        noise_level: Fraction of pixels to randomize, between 0 and 1\n",
    "\n",
    "    Returns:\n",
    "        Noisy segmentation mask\n",
    "    \"\"\"\n",
    "    np.random.seed(42)\n",
    "\n",
    "    # Create base segmentation with smooth regions\n",
    "    mask = np.zeros(size, dtype=np.int32)\n",
    "\n",
    "    # Create class regions\n",
    "    mask[: size[0] // 2, :] = 0  # Background\n",
    "    mask[size[0] // 2 :, : size[1] // 2] = 1  # Class 1\n",
    "    mask[size[0] // 2 :, size[1] // 2 :] = 2  # Class 2\n",
    "\n",
    "    # Add noise - scattered single-pixel islands\n",
    "    num_noise_pixels = int(size[0] * size[1] * noise_level)\n",
    "    noise_y = np.random.randint(0, size[0], num_noise_pixels)\n",
    "    noise_x = np.random.randint(0, size[1], num_noise_pixels)\n",
    "    noise_classes = np.random.randint(0, num_classes, num_noise_pixels)\n",
    "    mask[noise_y, noise_x] = noise_classes\n",
    "\n",
    "    # Add some edge roughness by randomly changing boundary pixels\n",
    "    from scipy.ndimage import binary_erosion\n",
    "\n",
    "    for class_id in range(num_classes):\n",
    "        class_mask = mask == class_id\n",
    "        # Find edges\n",
    "        eroded = binary_erosion(class_mask)\n",
    "        edges = class_mask & ~eroded\n",
    "        # Randomly toggle some edge pixels\n",
    "        edge_coords = np.where(edges)\n",
    "        if len(edge_coords[0]) > 0:\n",
    "            num_toggle = int(len(edge_coords[0]) * 0.3)\n",
    "            toggle_idx = np.random.choice(\n",
    "                len(edge_coords[0]), num_toggle, replace=False\n",
    "            )\n",
    "            toggle_y = edge_coords[0][toggle_idx]\n",
    "            toggle_x = edge_coords[1][toggle_idx]\n",
    "            mask[toggle_y, toggle_x] = (mask[toggle_y, toggle_x] + 1) % num_classes\n",
    "\n",
    "    return mask\n",
    "\n",
    "\n",
    "# Create noisy mask\n",
    "noisy_mask = create_noisy_segmentation(size=(512, 512), num_classes=3, noise_level=0.05)\n",
    "print(f\"Created noisy mask with shape: {noisy_mask.shape}\")\n",
    "print(f\"Classes: {np.unique(noisy_mask)}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 2. Visualize the Noisy Mask\n",
    "\n",
    "Plot the mask with a discrete colormap so the speckle noise and jagged class boundaries stand out."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Create color map for visualization\n",
    "colors = [\"#1f77b4\", \"#ff7f0e\", \"#2ca02c\"]  # Blue, Orange, Green\n",
    "cmap = ListedColormap(colors)\n",
    "\n",
    "plt.figure(figsize=(10, 10))\n",
    "plt.imshow(noisy_mask, cmap=cmap, interpolation=\"nearest\")\n",
    "plt.title(\"Noisy Segmentation Mask\", fontsize=16)\n",
    "plt.colorbar(label=\"Class\", ticks=[0, 1, 2])\n",
    "plt.xlabel(\"X\")\n",
    "plt.ylabel(\"Y\")\n",
    "plt.tight_layout()\n",
    "plt.show()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 3. Clean the Segmentation Mask\n",
    "\n",
    "Now let's apply MultiClean to remove noise and smooth edges."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Apply MultiClean\n",
    "cleaned_mask = clean_segmentation_mask(\n",
    "    noisy_mask,\n",
    "    class_values=[0, 1, 2],  # Classes to process\n",
    "    smooth_edge_size=3,  # Kernel size for edge smoothing\n",
    "    min_island_size=100,  # Remove islands smaller than 100 pixels\n",
    "    connectivity=8,  # Use 8-connectivity (includes diagonals)\n",
    "    fill_nan=False,  # Don't fill NaN values (we don't have any)\n",
    ")\n",
    "\n",
    "print(f\"Cleaned mask shape: {cleaned_mask.shape}\")\n",
    "print(f\"Classes: {np.unique(cleaned_mask)}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 4. Compare Before and After\n",
    "\n",
    "Let's visualize the noisy and cleaned masks side by side."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "fig, axes = plt.subplots(1, 2, figsize=(20, 10))\n",
    "\n",
    "# Noisy mask\n",
    "im1 = axes[0].imshow(noisy_mask, cmap=cmap, interpolation=\"nearest\")\n",
    "axes[0].set_title(\"Before Cleaning (Noisy)\", fontsize=16)\n",
    "axes[0].set_xlabel(\"X\")\n",
    "axes[0].set_ylabel(\"Y\")\n",
    "plt.colorbar(im1, ax=axes[0], label=\"Class\", ticks=[0, 1, 2])\n",
    "\n",
    "# Cleaned mask\n",
    "im2 = axes[1].imshow(cleaned_mask, cmap=cmap, interpolation=\"nearest\")\n",
    "axes[1].set_title(\"After Cleaning (Smooth)\", fontsize=16)\n",
    "axes[1].set_xlabel(\"X\")\n",
    "axes[1].set_ylabel(\"Y\")\n",
    "plt.colorbar(im2, ax=axes[1], label=\"Class\", ticks=[0, 1, 2])\n",
    "\n",
    "plt.tight_layout()\n",
    "plt.show()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 5. Quantify the Changes\n",
    "\n",
    "Use the `compare_masks` function to quantify how much the mask changed."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "pixels_changed, total_pixels, change_percentage = compare_masks(\n",
    "    noisy_mask, cleaned_mask\n",
    ")\n",
    "\n",
    "print(\"Cleaning Statistics:\")\n",
    "print(f\"  Total pixels: {total_pixels:,}\")\n",
    "print(f\"  Pixels changed: {pixels_changed:,}\")\n",
    "print(f\"  Change percentage: {change_percentage:.2f}%\")"
   ]
  },
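  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The same statistics can be reproduced with plain NumPy (a sketch of the idea; `compare_masks`'s exact implementation may differ):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "a = np.array([[0, 1], [1, 2]])\n",
    "b = np.array([[0, 1], [2, 2]])  # one pixel differs\n",
    "\n",
    "n_changed = int(np.count_nonzero(a != b))\n",
    "n_total = a.size\n",
    "print(n_changed, n_total, f\"{100.0 * n_changed / n_total:.2f}%\")"
   ]
  },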
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 6. Zoom In on a Region\n",
    "\n",
    "Let's zoom in to see the edge smoothing and noise removal in detail."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Select a region to zoom in\n",
    "y_start, y_end = 200, 350\n",
    "x_start, x_end = 200, 350\n",
    "\n",
    "fig, axes = plt.subplots(1, 2, figsize=(16, 8))\n",
    "\n",
    "# Zoomed noisy region\n",
    "im1 = axes[0].imshow(\n",
    "    noisy_mask[y_start:y_end, x_start:x_end], cmap=cmap, interpolation=\"nearest\"\n",
    ")\n",
    "axes[0].set_title(\"Before Cleaning (Zoomed)\", fontsize=14)\n",
    "axes[0].set_xlabel(\"X\")\n",
    "axes[0].set_ylabel(\"Y\")\n",
    "plt.colorbar(im1, ax=axes[0], label=\"Class\", ticks=[0, 1, 2])\n",
    "\n",
    "# Zoomed cleaned region\n",
    "im2 = axes[1].imshow(\n",
    "    cleaned_mask[y_start:y_end, x_start:x_end], cmap=cmap, interpolation=\"nearest\"\n",
    ")\n",
    "axes[1].set_title(\"After Cleaning (Zoomed)\", fontsize=14)\n",
    "axes[1].set_xlabel(\"X\")\n",
    "axes[1].set_ylabel(\"Y\")\n",
    "plt.colorbar(im2, ax=axes[1], label=\"Class\", ticks=[0, 1, 2])\n",
    "\n",
    "plt.tight_layout()\n",
    "plt.show()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 7. Experiment with Different Parameters\n",
    "\n",
    "Let's see how different cleaning parameters affect the results."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Create masks with different parameters\n",
    "params = [\n",
    "    {\"smooth_edge_size\": 0, \"min_island_size\": 0, \"title\": \"No Cleaning\"},\n",
    "    {\"smooth_edge_size\": 0, \"min_island_size\": 100, \"title\": \"Island Removal Only\"},\n",
    "    {\"smooth_edge_size\": 3, \"min_island_size\": 0, \"title\": \"Edge Smoothing Only\"},\n",
    "    {\"smooth_edge_size\": 3, \"min_island_size\": 100, \"title\": \"Full Cleaning\"},\n",
    "]\n",
    "\n",
    "fig, axes = plt.subplots(2, 2, figsize=(16, 16))\n",
    "axes = axes.flatten()\n",
    "\n",
    "for i, param in enumerate(params):\n",
    "    if i == 0:\n",
    "        # No cleaning - just show original\n",
    "        mask = noisy_mask\n",
    "    else:\n",
    "        # Apply cleaning with specified parameters\n",
    "        mask = clean_segmentation_mask(\n",
    "            noisy_mask,\n",
    "            class_values=[0, 1, 2],\n",
    "            smooth_edge_size=param[\"smooth_edge_size\"],\n",
    "            min_island_size=param[\"min_island_size\"],\n",
    "            connectivity=8,\n",
    "        )\n",
    "\n",
    "    im = axes[i].imshow(mask, cmap=cmap, interpolation=\"nearest\")\n",
    "    axes[i].set_title(param[\"title\"], fontsize=14)\n",
    "    axes[i].set_xlabel(\"X\")\n",
    "    axes[i].set_ylabel(\"Y\")\n",
    "    plt.colorbar(im, ax=axes[i], label=\"Class\", ticks=[0, 1, 2])\n",
    "\n",
    "plt.tight_layout()\n",
    "plt.show()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 8. Working with GeoTIFF Files\n",
    "\n",
    "MultiClean can also process GeoTIFF files directly while preserving geospatial metadata."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Create a temporary directory for our test files\n",
    "tmpdir = tempfile.mkdtemp()\n",
    "print(f\"Working directory: {tmpdir}\")\n",
    "\n",
    "# Save noisy mask as GeoTIFF\n",
    "input_tif = os.path.join(tmpdir, \"noisy_segmentation.tif\")\n",
    "output_tif = os.path.join(tmpdir, \"cleaned_segmentation.tif\")\n",
    "\n",
    "# Create a simple transform (geographic coordinates)\n",
    "transform = from_bounds(\n",
    "    west=-120.0,\n",
    "    south=35.0,\n",
    "    east=-119.0,\n",
    "    north=36.0,\n",
    "    width=noisy_mask.shape[1],\n",
    "    height=noisy_mask.shape[0],\n",
    ")\n",
    "\n",
    "# Write noisy mask to GeoTIFF\n",
    "with rasterio.open(\n",
    "    input_tif,\n",
    "    \"w\",\n",
    "    driver=\"GTiff\",\n",
    "    height=noisy_mask.shape[0],\n",
    "    width=noisy_mask.shape[1],\n",
    "    count=1,\n",
    "    dtype=noisy_mask.dtype,\n",
    "    crs=\"EPSG:4326\",\n",
    "    transform=transform,\n",
    "    compress=\"lzw\",\n",
    ") as dst:\n",
    "    dst.write(noisy_mask, 1)\n",
    "\n",
    "print(f\"Saved noisy mask to: {input_tif}\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Clean the GeoTIFF\n",
    "clean_raster(\n",
    "    input_path=input_tif,\n",
    "    output_path=output_tif,\n",
    "    class_values=[0, 1, 2],\n",
    "    smooth_edge_size=3,\n",
    "    min_island_size=100,\n",
    "    connectivity=8,\n",
    ")\n",
    "\n",
    "print(f\"Cleaned raster saved to: {output_tif}\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Verify the output preserves geospatial metadata\n",
    "with rasterio.open(input_tif) as src_in:\n",
    "    print(\"Input metadata:\")\n",
    "    print(f\"  CRS: {src_in.crs}\")\n",
    "    print(f\"  Transform: {src_in.transform}\")\n",
    "    print(f\"  Bounds: {src_in.bounds}\")\n",
    "\n",
    "print()\n",
    "\n",
    "with rasterio.open(output_tif) as src_out:\n",
    "    print(\"Output metadata:\")\n",
    "    print(f\"  CRS: {src_out.crs}\")\n",
    "    print(f\"  Transform: {src_out.transform}\")\n",
    "    print(f\"  Bounds: {src_out.bounds}\")\n",
    "\n",
    "    # Read cleaned data\n",
    "    cleaned_from_file = src_out.read(1)\n",
    "\n",
    "print(\"\\n✓ Geospatial metadata preserved!\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 9. Batch Processing Multiple Files\n",
    "\n",
    "You can process multiple segmentation files at once using `clean_raster_batch`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Create multiple test files\n",
    "input_files = []\n",
    "for i in range(3):\n",
    "    # Create different noisy masks\n",
    "    test_mask = create_noisy_segmentation(\n",
    "        size=(256, 256), num_classes=3, noise_level=0.05 + i * 0.02\n",
    "    )\n",
    "\n",
    "    # Save to file\n",
    "    filepath = os.path.join(tmpdir, f\"test_mask_{i}.tif\")\n",
    "\n",
    "    with rasterio.open(\n",
    "        filepath,\n",
    "        \"w\",\n",
    "        driver=\"GTiff\",\n",
    "        height=test_mask.shape[0],\n",
    "        width=test_mask.shape[1],\n",
    "        count=1,\n",
    "        dtype=test_mask.dtype,\n",
    "        crs=\"EPSG:4326\",\n",
    "        transform=from_bounds(-120, 35, -119, 36, 256, 256),\n",
    "    ) as dst:\n",
    "        dst.write(test_mask, 1)\n",
    "\n",
    "    input_files.append(filepath)\n",
    "\n",
    "print(f\"Created {len(input_files)} test files\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Batch clean all files\n",
    "output_dir = os.path.join(tmpdir, \"batch_cleaned\")\n",
    "\n",
    "output_files = clean_raster_batch(\n",
    "    input_paths=input_files,\n",
    "    output_dir=output_dir,\n",
    "    class_values=[0, 1, 2],\n",
    "    smooth_edge_size=2,\n",
    "    min_island_size=50,\n",
    "    connectivity=8,\n",
    "    suffix=\"_cleaned\",\n",
    "    verbose=True,\n",
    ")\n",
    "\n",
    "print(f\"\\nProcessed {len(output_files)} files\")\n",
    "print(\"Output files:\")\n",
    "for f in output_files:\n",
    "    print(f\"  - {os.path.basename(f)}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 10. Integration with Segmentation Workflows\n",
    "\n",
    "MultiClean is designed to be used as a post-processing step after semantic segmentation. Here's an example workflow:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def segment_and_clean_workflow(image_path=None, output_path=None):\n",
    "    \"\"\"\n",
    "    Example workflow: Segmentation + Cleaning\n",
    "\n",
    "    In a real application, this would:\n",
    "    1. Load an image\n",
    "    2. Run semantic segmentation model (e.g., UNet, DeepLab)\n",
    "    3. Get raw predictions (often noisy)\n",
    "    4. Apply MultiClean to smooth and denoise\n",
    "    5. Save final result\n",
    "    \"\"\"\n",
    "    # For this example, we'll use our synthetic data\n",
    "    # In practice, you would:\n",
    "    # - Load the image with rasterio or PIL\n",
    "    # - Run your trained segmentation model\n",
    "    # - Get the prediction mask\n",
    "\n",
    "    # Simulate noisy model predictions\n",
    "    raw_predictions = create_noisy_segmentation(\n",
    "        size=(512, 512), num_classes=3, noise_level=0.08\n",
    "    )\n",
    "\n",
    "    # Apply MultiClean post-processing\n",
    "    cleaned_predictions = clean_segmentation_mask(\n",
    "        raw_predictions,\n",
    "        class_values=[0, 1, 2],\n",
    "        smooth_edge_size=3,\n",
    "        min_island_size=100,\n",
    "        connectivity=8,\n",
    "    )\n",
    "\n",
    "    return raw_predictions, cleaned_predictions\n",
    "\n",
    "\n",
    "# Run the workflow\n",
    "raw, cleaned = segment_and_clean_workflow(None, None)\n",
    "\n",
    "# Compare\n",
    "fig, axes = plt.subplots(1, 2, figsize=(16, 8))\n",
    "\n",
    "axes[0].imshow(raw, cmap=cmap, interpolation=\"nearest\")\n",
    "axes[0].set_title(\"Raw Model Predictions\", fontsize=14)\n",
    "axes[0].axis(\"off\")\n",
    "\n",
    "axes[1].imshow(cleaned, cmap=cmap, interpolation=\"nearest\")\n",
    "axes[1].set_title(\"After MultiClean Post-Processing\", fontsize=14)\n",
    "axes[1].axis(\"off\")\n",
    "\n",
    "plt.tight_layout()\n",
    "plt.show()\n",
    "\n",
    "# Quantify improvement\n",
    "changed, total, pct = compare_masks(raw, cleaned)\n",
    "print(f\"\\nPost-processing changed {pct:.2f}% of pixels\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 11. Best Practices and Tips\n",
    "\n",
    "### Choosing Parameters\n",
    "\n",
    "- **smooth_edge_size**: Start with 2-3 pixels. Larger values create smoother boundaries but may over-smooth fine details.\n",
    "- **min_island_size**: Depends on your minimum object size. Set to the smallest valid object area in pixels.\n",
    "- **connectivity**: Use 8 for natural objects (smoother results), 4 for grid-aligned objects.\n",
    "- **fill_nan**: Set to True if your predictions have nodata/NaN values that should be filled.\n",
    "\n",
    "### Performance Tips\n",
    "\n",
    "- Use **max_workers** parameter for parallel processing on multi-core systems\n",
    "- Process large rasters in tiles if memory is limited\n",
    "- For batch processing, use `clean_raster_batch` instead of loops\n",
    "\n",
    "### When to Use MultiClean\n",
    "\n",
    "✓ After semantic segmentation to remove noise  \n",
    "✓ When edge boundaries are jagged or noisy  \n",
    "✓ To remove small false positive detections  \n",
    "✓ For cleaning up classification rasters  \n",
    "\n",
    "✗ Don't use if you need to preserve exact boundaries  \n",
    "✗ Not suitable for instance segmentation (use on semantic masks only)  "
   ]
  },
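  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A practical way to choose `min_island_size` is to convert a minimum mapping unit from ground area to a pixel count using the raster resolution (the numbers below are hypothetical, not values prescribed by MultiClean):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Hypothetical inputs: a 0.5 m/pixel raster and a 25 m^2 minimum mapping unit\n",
    "resolution_m = 0.5\n",
    "mmu_m2 = 25.0\n",
    "\n",
    "# Each pixel covers resolution_m**2 square meters\n",
    "min_island_size = int(round(mmu_m2 / resolution_m**2))\n",
    "print(f\"min_island_size = {min_island_size} pixels\")"
   ]
  },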
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Summary\n",
    "\n",
    "In this notebook, we demonstrated:\n",
    "\n",
    "1. ✅ Basic usage of `clean_segmentation_mask()` for numpy arrays\n",
    "2. ✅ Visualizing before/after comparisons\n",
    "3. ✅ Quantifying changes with `compare_masks()`\n",
    "4. ✅ Experimenting with different cleaning parameters\n",
    "5. ✅ Processing GeoTIFF files with `clean_raster()`\n",
    "6. ✅ Batch processing with `clean_raster_batch()`\n",
    "7. ✅ Integration with segmentation workflows\n",
    "\n",
    "MultiClean is a powerful tool for post-processing segmentation results, helping you achieve cleaner, more professional outputs from your deep learning models.\n",
    "\n",
    "## References\n",
    "\n",
    "- [MultiClean GitHub Repository](https://github.com/DPIRD-DMA/MultiClean)\n",
    "- [GeoAI Documentation](https://opengeoai.org)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Cleanup temporary files\n",
    "import shutil\n",
    "\n",
    "shutil.rmtree(tmpdir)\n",
    "print(\"Cleaned up temporary files\")"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.12.0"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
