{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "435df764",
   "metadata": {},
   "source": [
    "# 3D Visualization for Defect Inspection\n",
    "\n",
    "This notebook demonstrates how to integrate the **MVTec 3D Anomaly Detection Dataset** into FiftyOne. \n",
    "The dataset contains high-resolution 3D scans of objects, including **point cloud data** and **RGB images**, \n",
    "which are useful for anomaly detection tasks.\n",
    "\n",
    "![visual_inspection](https://cdn.voxel51.com/getting_started_manufacturing/notebook7/visual_inspection.webp)\n",
    "\n",
    "## Learning Objectives:\n",
    "- Convert TIFF to PCD format for visualization in FiftyOne.\n",
     "- Create a grouped dataset in FiftyOne.\n",
    "- Leverage FiftyOne for visualization and analysis.\n",
    "\n",
    "### Key Features:\n",
    "- **3D Representation**: The dataset provides XYZ point cloud representations stored as TIFF files.\n",
    "- **RGB and Mask Images**: Each sample includes an RGB image and a corresponding segmentation mask.\n",
    "- **Anomalous and Normal Samples**: The dataset includes both normal and defective objects for anomaly detection research.\n",
     "- **Grouped Datasets in FiftyOne**: FiftyOne supports grouped datasets, which link multiple modalities under a common identifier.\n",
    "\n",
    "To make this dataset compatible with FiftyOne, we need to **convert TIFF files into PCD (Point Cloud Data) format** \n",
    "for visualization.\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "e64cdfd9",
   "metadata": {},
   "source": [
    "\n",
    "## Converting TIFF to PCD for Visualization\n",
    "\n",
    "MVTec 3D provides **XYZ coordinate data stored in TIFF format**, which must be converted to **PCD format** \n",
    "to be visualized in FiftyOne. The function below:\n",
    "1. Loads the TIFF file as a **NumPy array**.\n",
    "2. Reshapes it into **Nx3 (XYZ points) format**.\n",
    "3. Saves it as a **PCD file** using Open3D.\n",
    "\n",
     "A second function additionally applies **colors derived from the segmentation masks** to highlight anomalies.\n",
     "\n",
     "The next three cells are provided for reference; they show how to convert from TIFF to PCD."
   ]
  },
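Steps 1–3 can be sanity-checked on a synthetic array before touching real data. A minimal sketch (NumPy only; the `H`, `W` sizes are arbitrary):

```python
import numpy as np

# Simulate an (H, W, 3) XYZ image like those shipped in MVTec 3D-AD
H, W = 4, 5
xyz_image = np.random.rand(H, W, 3).astype(np.float32)

# Step 2: collapse the spatial dimensions into an Nx3 point list
points = xyz_image.reshape(-1, 3)

print(points.shape)  # (20, 3) -- one 3D point per pixel

# The reshape is row-major, so each pixel keeps its coordinate triple
assert np.array_equal(points[0], xyz_image[0, 0])
assert np.array_equal(points[W], xyz_image[1, 0])
```

The same `reshape(-1, 3)` is what the converter below applies to the loaded TIFF before handing the points to Open3D.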
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Before starting, install the following libraries in this environment\n",
    "!pip install open3d"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For the TIFF example, please select one sample from the [MVTec 3D-AD dataset](https://www.mvtec.com/company/research/datasets/mvtec-3d-ad/)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
     "import open3d as o3d\n",
     "import tifffile\n",
    "\n",
    "def convert_tiff_xyz_to_pcd(tiff_path, pcd_path):\n",
    "    # Load the TIFF as a NumPy array\n",
    "    xyz_data = tifffile.imread(tiff_path)\n",
    "    \n",
    "    if xyz_data.ndim == 3 and xyz_data.shape[2] == 3:\n",
    "        # If the data is in the shape (H, W, 3), reshape it to (H*W, 3)\n",
    "        xyz_data = xyz_data.reshape(-1, 3)\n",
    "    elif xyz_data.ndim == 2 and xyz_data.shape[1] == 3:\n",
    "        # Already in Nx3 shape; no need to reshape\n",
    "        pass  \n",
    "    else:\n",
    "        raise ValueError(f\"Unexpected TIFF shape {xyz_data.shape}; adapt code as needed.\")\n",
    "    \n",
    "    # Create an Open3D point cloud from the reshaped data\n",
    "    pcd = o3d.geometry.PointCloud()\n",
    "    pcd.points = o3d.utility.Vector3dVector(xyz_data)\n",
    "    \n",
    "    # Save the point cloud to a .pcd file\n",
    "    o3d.io.write_point_cloud(pcd_path, pcd)\n",
    "\n",
    "# Example usage\n",
     "convert_tiff_xyz_to_pcd(\"path/to/example.tiff\", \"path/to/example.pcd\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
     "import numpy as np\n",
     "import open3d as o3d\n",
     "import tifffile\n",
     "from PIL import Image\n",
     "\n",
     "def convert_tiff_xyz_to_pcd_with_color(tiff_path, mask_path, pcd_path):\n",
     "    \"\"\"\n",
     "    Converts an XYZ TIFF file into a PCD file, adding color information from a mask.\n",
     "    \n",
     "    Assumes:\n",
     "      - The TIFF file has shape (H, W, 3) and represents point coordinates.\n",
     "      - The mask is a PNG of shape (H, W) that is aligned with the TIFF data.\n",
     "      \n",
     "    Points corresponding to mask > 0 are colored red; others are colored light gray.\n",
     "    \"\"\"\n",
     "    # Load the point coordinates from the TIFF and the segmentation mask\n",
     "    xyz_data = tifffile.imread(tiff_path)\n",
     "    mask = np.array(Image.open(mask_path))  # the mask is a PNG, so load it with PIL\n",
    "    \n",
    "    # Ensure the shapes match spatially\n",
    "    if xyz_data.ndim == 3 and xyz_data.shape[2] == 3:\n",
    "        H, W, _ = xyz_data.shape\n",
    "        if mask.shape[0] != H or mask.shape[1] != W:\n",
    "            raise ValueError(\"The mask dimensions do not match the TIFF image dimensions.\")\n",
    "        # Flatten the point coordinates and mask to align one-to-one.\n",
    "        points = xyz_data.reshape(-1, 3)\n",
    "        mask_flat = mask.reshape(-1)\n",
    "    else:\n",
    "        raise ValueError(f\"Unexpected TIFF shape {xyz_data.shape}; expected (H,W,3)\")\n",
    "    \n",
    "    # Create a color array for each point.\n",
    "    colors = np.zeros((points.shape[0], 3))\n",
    "    # For example, assign red to segmented areas (mask > 0) and light gray elsewhere.\n",
    "    colors[mask_flat > 0] = [1, 0, 0]      # Red for segmentation\n",
    "    colors[mask_flat == 0] = [0.7, 0.7, 0.7] # Light gray for background\n",
    "    \n",
    "    # Create the Open3D point cloud and assign both points and colors.\n",
    "    pcd = o3d.geometry.PointCloud()\n",
    "    pcd.points = o3d.utility.Vector3dVector(points)\n",
    "    pcd.colors = o3d.utility.Vector3dVector(colors)\n",
    "    \n",
    "    # Write the colored point cloud to the specified PCD file.\n",
    "    o3d.io.write_point_cloud(pcd_path, pcd)\n",
    "\n",
    "# Example usage:\n",
    "convert_tiff_xyz_to_pcd_with_color(\"path/to/example.tiff\", \"path/to/example_mask.png\", \"path/to/colored_example.pcd\")"
   ]
  },
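The mask-to-color rule used by `convert_tiff_xyz_to_pcd_with_color` can be exercised in isolation on a synthetic mask (a sketch using NumPy only; the 3×3 size is arbitrary):

```python
import numpy as np

# Synthetic 3x3 mask with a single "defective" pixel at the center
mask = np.zeros((3, 3), dtype=np.uint8)
mask[1, 1] = 255

mask_flat = mask.reshape(-1)

# Same rule as the converter: red for defects, light gray for background
colors = np.zeros((mask_flat.shape[0], 3))
colors[mask_flat > 0] = [1, 0, 0]
colors[mask_flat == 0] = [0.7, 0.7, 0.7]

# The center pixel (flat index 4) is the only red point
assert np.array_equal(colors[4], [1, 0, 0])
assert (colors == [1, 0, 0]).all(axis=1).sum() == 1
```

Because the mask and the XYZ image are flattened with the same row-major `reshape`, flat index `i` in `colors` always refers to the same pixel as flat index `i` in `points`.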
  {
   "cell_type": "markdown",
   "id": "5bc46127",
   "metadata": {},
   "source": [
    "\n",
    "## Creating a Grouped Dataset in FiftyOne\n",
    "\n",
    "FiftyOne allows creating **grouped datasets**, where multiple data modalities (e.g., RGB images, \n",
    "segmentation masks, and point clouds) can be linked together under a common identifier. This enables:\n",
    "- **Synchronized visualization**: Easily switch between different representations of the same object.\n",
    "- **Multi-modal analysis**: Combine insights from images, masks, and 3D data.\n",
    "\n",
    "This notebook demonstrates how to create a grouped dataset where each sample includes:\n",
    "- An **RGB image**\n",
    "- A **segmentation mask**\n",
    "- A **3D point cloud (PCD)**\n",
    "\n",
     "We will use the `potato` object from the [MVTec 3D Dataset](https://www.mvtec.com/company/research/datasets/mvtec-3d-ad/); a modified subset of the dataset is provided for this notebook. \n",
     "\n",
     "You can download the subset here: [Potato MVTec 3D](https://huggingface.co/datasets/pjramg/potato_mvtec3d). Please download the dataset to disk and add the samples in FiftyOne as shown in the following cells. \n",
     "\n",
     "<div style=\"border-left: 4px solid #3498db; padding: 6px;\">\n",
     "<strong>Note:</strong> Loading this dataset via the Hugging Face Hub won't work because it is not saved in FiftyOne format.\n",
     "</div>\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Create the Dataset instance in FiftyOne"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import fiftyone as fo\n",
    "\n",
    "# Create an empty dataset\n",
    "dataset = fo.Dataset(\"potato_mvtec\", persistent=True, overwrite=True)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "### Read the MVTec directory (potato object) and create metadata pairs before grouping the dataset in FiftyOne"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import os\n",
    "\n",
    "def process_potato_folder(root):\n",
    "    \"\"\"\n",
    "    Walks the directory tree under root (e.g. \"potato\") to create a list of dictionaries,\n",
    "    each containing the paths for the rgb image, mask, and the existing PCD file.\n",
    "    \n",
    "    It extracts metadata from the folder structure (e.g. split and quality) and assigns a label:\n",
    "      - \"normal\" if quality == \"good\"\n",
    "      - \"abnormal\" otherwise.\n",
    "    \n",
    "    Assumes that the folder structure is as follows:\n",
    "    \n",
    "      potato/\n",
    "          <split>/\n",
    "              <quality>/   (e.g., \"good\" or \"defective\")\n",
    "                  rgb/    <-- contains .png images\n",
    "                  gt/     <-- contains corresponding mask .png images\n",
    "                  pcd/    <-- contains pre-computed .pcd files\n",
    "    \"\"\"\n",
    "    pairs = []\n",
    "    # Walk the tree looking for \"rgb\" folders\n",
    "    for dirpath, dirnames, filenames in os.walk(root):\n",
    "        if os.path.basename(dirpath).lower() != \"rgb\":\n",
    "            continue\n",
    "\n",
    "        # Expected structure: .../<split>/<quality>/rgb\n",
    "        quality_folder = os.path.basename(os.path.dirname(dirpath))\n",
    "        split_folder = os.path.basename(os.path.dirname(os.path.dirname(dirpath)))\n",
    "        print(f\"Processing folder: {dirpath} (split: {split_folder}, quality: {quality_folder})\")\n",
    "        \n",
    "        # Define corresponding folders for masks (gt) and PCD files (pcd)\n",
    "        parent_dir = os.path.dirname(dirpath)\n",
    "        gt_dir = os.path.join(parent_dir, \"gt\")\n",
    "        pcd_dir = os.path.join(parent_dir, \"pcd\")\n",
    "        print(f\"Looking for PCD files in: {pcd_dir}\")\n",
    "        \n",
    "        for file in filenames:\n",
    "            if file.lower().endswith(\".png\"):\n",
    "                base_name = os.path.splitext(file)[0]\n",
    "                image_path = os.path.join(dirpath, file)\n",
    "                mask_path = os.path.join(gt_dir, file)  # Assumes mask has same filename as image\n",
    "                pcd_path = os.path.join(pcd_dir, base_name + \".pcd\")\n",
    "                \n",
    "                # Check existence of required files\n",
    "                image_exists = os.path.exists(image_path)\n",
    "                mask_exists = os.path.exists(mask_path)\n",
    "                pcd_exists = os.path.exists(pcd_path)\n",
    "                \n",
    "                if not (image_exists and mask_exists and pcd_exists):\n",
    "                    print(f\"Warning: Missing files for {base_name}: \"\n",
    "                          f\"image({image_exists}), mask({mask_exists}), pcd({pcd_exists}). Skipping.\")\n",
    "                    continue\n",
    "                \n",
    "                # Determine overall label: \"normal\" if quality folder is \"good\", otherwise \"abnormal\"\n",
    "                overall_label = \"normal\" if quality_folder.lower() == \"good\" else \"abnormal\"\n",
    "                \n",
    "                pair = {\n",
    "                    \"split\": split_folder,       \n",
    "                    \"quality\": quality_folder,   \n",
    "                    \"label\": overall_label,      \n",
    "                    \"image_path\": image_path,\n",
    "                    \"mask_path\": mask_path,\n",
    "                    \"pcd_path\": pcd_path\n",
    "                }\n",
    "                pairs.append(pair)\n",
    "    \n",
    "    print(f\"Total pairs processed: {len(pairs)}\")\n",
    "    return pairs\n",
    "\n",
    "# Specify your root directory (e.g., the \"potato\" folder)\n",
    "root_dir = \"/path/to/your/dataset/root\"\n",
    "pairs = process_potato_folder(root_dir)\n",
    "\n",
    "# Debug: print out the pairs list to verify paths and metadata\n",
    "for pair in pairs:\n",
    "    print(pair)"
   ]
  },
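The path pairing inside `process_potato_folder` relies only on `os.path`, so it can be checked against a hypothetical rgb path without any files on disk:

```python
import os

# Hypothetical rgb image path following the MVTec 3D layout described above
image_path = os.path.join("potato", "train", "good", "rgb", "000.png")

dirpath = os.path.dirname(image_path)   # .../rgb
parent_dir = os.path.dirname(dirpath)   # .../good
base_name = os.path.splitext(os.path.basename(image_path))[0]

# Companion files live in the sibling "gt" and "pcd" folders
mask_path = os.path.join(parent_dir, "gt", base_name + ".png")
pcd_path = os.path.join(parent_dir, "pcd", base_name + ".pcd")

# Metadata comes from the two enclosing folder names
quality_folder = os.path.basename(parent_dir)
split_folder = os.path.basename(os.path.dirname(parent_dir))
label = "normal" if quality_folder.lower() == "good" else "abnormal"

print(split_folder, quality_folder, label)  # train good normal
```

The real function does the same derivation for every `.png` it finds under an `rgb/` folder, skipping any sample whose mask or PCD companion is missing.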
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Creating the grouped dataset in FiftyOne"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "samples = []\n",
     "for pair in pairs:\n",
     "    # Create a group to link the three modalities of this sample\n",
     "    group = fo.Group()\n",
    "    \n",
    "    # --- Image slice ---\n",
    "    sample_image = fo.Sample(\n",
    "        filepath=pair[\"image_path\"],\n",
    "        group=group.element(\"image\")\n",
    "    )\n",
    "    sample_image[\"split\"] = pair[\"split\"]\n",
    "    sample_image[\"quality\"] = pair[\"quality\"]\n",
    "    sample_image[\"label\"] = pair[\"label\"]\n",
    "    \n",
    "    # --- Mask slice ---\n",
    "    sample_mask = fo.Sample(\n",
    "        filepath=pair[\"mask_path\"],\n",
    "        group=group.element(\"mask\")\n",
    "    )\n",
    "    sample_mask[\"split\"] = pair[\"split\"]\n",
    "    sample_mask[\"quality\"] = pair[\"quality\"]\n",
    "    sample_mask[\"label\"] = pair[\"label\"]\n",
    "    \n",
    "    # --- Point Cloud slice ---\n",
    "    sample_pcd = fo.Sample(\n",
    "        filepath=pair[\"pcd_path\"],\n",
    "        group=group.element(\"pcd\")\n",
    "    )\n",
    "    sample_pcd[\"split\"] = pair[\"split\"]\n",
    "    sample_pcd[\"quality\"] = pair[\"quality\"]\n",
    "    sample_pcd[\"label\"] = pair[\"label\"]\n",
    "    \n",
    "    # Add all three slices to the list\n",
    "    samples.extend([sample_image, sample_mask, sample_pcd])\n",
    "\n",
    "# Add the grouped samples to the dataset\n",
     "dataset.add_samples(samples)\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Optionally, launch the FiftyOne App to inspect the dataset.\n",
    "session = fo.launch_app(dataset, port=5157, auto=False)\n"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "py311_anomalib200b3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.11.11"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
