{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "754db5f9-106b-4872-9a40-18184efebe44",
   "metadata": {},
   "source": [
     "<h1 align=\"center\">NVIDIA Kaolin - CVPR 2025 Tutorial</h1>\n",
    "\n",
    "<table style=\"background:white; border-collapse:collapse; width:auto;\">\n",
    "  <tr>\n",
    "    <td colspan=\"2\" align=\"center\">\n",
    "      <img src=\"../../../assets/nvidia-logo-horz.png\" style=\"height:200px; vertical-align:middle;\">\n",
    "    </td>\n",
    "  </tr>\n",
    "  <tr>\n",
    "    <td valign=\"middle\" style=\"text-align:center; padding:10px; background:white !important; border:none !important;\"><b>Rendering powered by:</b></td>\n",
    "    <td style=\"text-align:center; padding:10px; background:white !important; border:none !important;\">\n",
    "    3dgrut, NVIDIA Optix, NVIDIA Slang\n",
    "    </td>\n",
    "  </tr>\n",
    "  <tr>\n",
    "    <td valign=\"middle\"><b>Simulations powered by:</b></td>\n",
    "    <td>\n",
    "    NVIDIA Warp\n",
    "    </td>\n",
    "  </tr>\n",
    "</table>"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "8882370a-fc20-40c8-87a0-b3bcc716bde1",
   "metadata": {},
   "source": [
    "*Note: This tutorial was tested on Ubuntu 20.04. Full support for Windows OS is not guaranteed.*\n",
    "\n",
     "*Note 2: It is recommended to run cells one by one, as Jupyter may fall out of sync when running all cells at once.*"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f1dc9af5-a244-4f87-85fb-99c0e42f7a4d",
   "metadata": {},
   "source": [
    "-----\n",
    "\n",
     "**In the following, you will learn how to use NVIDIA Kaolin to jointly ray trace & simulate volumetric radiance fields and meshes.** <br>\n",
    "**Throughout the tutorial, we will use various frameworks from the NVIDIA software stack, demonstrating how Kaolin can be used to integrate these useful tools together.**"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d20f55a7-30c1-4e34-aa25-d2af16c8415e",
   "metadata": {},
   "source": [
    "**Relevant Literature:**\n",
    "* [Simplicits: Mesh-Free, Geometry-Agnostic, Elastic Simulation [Modi et al. 2024]](https://research.nvidia.com/labs/toronto-ai/simplicits/)\n",
    "* [3D Gaussian Ray Tracing: Fast Tracing of Particle Scenes [Moenne-Loccoz et al. 2024]](https://gaussiantracer.github.io/)\n",
    "* [3DGUT: Enabling Distorted Cameras and Secondary Rays in Gaussian Splatting [Wu et al. 2025]](https://research.nvidia.com/labs/toronto-ai/3DGUT/)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "8300d92f-57cc-44ee-aa59-336c3a6b152a",
   "metadata": {},
   "source": [
    "## Requirements for this demo"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "cf6025e3-43c6-45fb-b7fb-f7ce01c4765e",
   "metadata": {},
   "source": [
     "1. [Follow the 3dgrut installation instructions.](https://github.com/nv-tlabs/3dgrut?tab=readme-ov-file#-1-dependencies-and-installation)\n",
    "2. Install additional requirements:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "2a25a869-9c60-46fa-b9b3-ae2dac188352",
   "metadata": {},
   "outputs": [],
   "source": [
    "!pip install matplotlib ipywidgets k3d --quiet"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "ca9611bd-4d3e-4319-a03d-adc0df1ceaef",
   "metadata": {},
   "outputs": [],
   "source": [
    "import copy\n",
    "import logging\n",
    "import numpy as np\n",
    "import os\n",
    "import sys\n",
    "import time\n",
    "import torch\n",
    "import torchvision.transforms.functional as F\n",
    "import kaolin\n",
    "from matplotlib import pyplot as plt\n",
    "from gaussian_utils import transform_gaussians_lbs, pad_transforms, PHYS_NOTEBOOKS_DIR\n",
    "\n",
    "logging.basicConfig(level=logging.INFO, stream=sys.stdout, format=\"%(asctime)s|%(levelname)8s| %(message)s\")\n",
    "logger = logging.getLogger(__name__)\n",
    "\n",
    "def log_tensor(t, name, **kwargs):\n",
    "    \"\"\" Debugging util, e.g. call: log_tensor(t, 'my tensor', print_stats=True) \"\"\"\n",
    "    logger.info(kaolin.utils.testing.tensor_info(t, name=name, **kwargs))\n",
    "\n",
    "%load_ext autoreload\n",
    "%autoreload 2"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "88b456c0-5b4b-4508-ab2b-0075dd051dbe",
   "metadata": {},
   "source": [
    "# Renderer (3DGRUT)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "b2c651d2-015a-4bf9-8ce6-467441980c18",
   "metadata": {},
   "source": [
    "## Gaussian Splat and Mesh Assets"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "914c6a3d-93ca-4ba4-bd0b-968f4071a928",
   "metadata": {},
   "source": [
     "Our demo starts by loading pretrained 3dgut or 3dgrt Gaussian Splat models. We will grab these assets from AWS below, but you can also train your own. <br>\n",
     "More details about training are available in the official GitHub repository: https://github.com/nv-tlabs/3dgrut\n",
     "\n",
     "*3dgut is being presented at CVPR 2025, and the code was recently released by NVIDIA.*"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "ff247a4a-3987-4b91-9298-c523f089f999",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Data will be stored relative to this notebook location\n",
    "root_data_path = os.path.join(PHYS_NOTEBOOKS_DIR, \"local_data\")\n",
    "os.makedirs(root_data_path, exist_ok=True)\n",
    "os.chdir(PHYS_NOTEBOOKS_DIR)\n",
    "\n",
    "data_path = os.path.join(root_data_path, \"grut_cvpr2025\")\n",
    "\n",
    "if os.path.exists(data_path):\n",
    "    logger.info(f'Data already downloaded: {data_path}')\n",
    "else:\n",
    "    logger.info(f'Downloading and unzipping data')\n",
    "    !wget https://nvidia-kaolin.s3.us-east-2.amazonaws.com/data/grut_cvpr2025_v3.zip -P local_data/; unzip local_data/grut_cvpr2025_v3.zip -d local_data/;\n",
    "\n",
    "# Pre-trained Mixture-of-Gaussians objects to load.\n",
     "# These objects can be trained with either 3dgut or 3dgrt; we will render them with 3dgrt's ray tracing here\n",
    "# 1. A doll object, the focus of our demo\n",
    "gs_object = os.path.join(data_path, \"3dgrt\", \"BluehairRagdoll.ply\")\n",
    "# 2. A table object on a patch of grass, which will serve as our environment\n",
    "gs_env = os.path.join(data_path, \"3dgrt\", \"tools_revised2.ply\")  # removed outliers\n",
    "\n",
    "# Folder containing available mesh assets in obj, glb, gltf format\n",
    "# Our demo app can load these meshes and move them around the scene\n",
    "mesh_assets_folder = os.path.join(data_path, \"mesh\")\n",
    "# Folder containing available envmaps in hdr format\n",
    "# Where the gaussian env is incomplete, we will query this env-map as background\n",
    "envmap_assets_folder = os.path.join(data_path, \"envmaps\")\n",
    "\n",
    "# Default config to use for the gaussian object if the saved model doesn't include it\n",
    "# (i.e. .ply files don't reference the original config)\n",
    "default_config = \"apps/colmap_3dgrt.yaml\" "
   ]
  },
  {
   "cell_type": "markdown",
   "id": "608ebac0-ceab-4842-85fc-26fa72c2b65a",
   "metadata": {},
   "source": [
    "## Setup 3dgrut Engine & Scene"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "0a42a45b-58e3-4453-838a-6a034af31081",
   "metadata": {},
   "source": [
     "3dgrut includes a hybrid rendering engine that can render Gaussian-based radiance fields and meshes with a joint path tracer. <br>\n",
    "Let's set up the 3dgrut rendering engine, load the Gaussian objects and add a sample mesh for the fun of it.\n",
    "\n",
    "Starting with the Gaussian objects, pretrained with 3dgrut or 3dgrt:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "14e19b90-aeb0-4ff2-aa8c-71a384bff6d7",
   "metadata": {},
   "outputs": [],
   "source": [
    "from threedgrut_playground.engine import Engine3DGRUT\n",
    "from threedgrut_playground.utils.composition import join_gaussians\n",
    "\n",
    "# Start the engine with the env Gaussian object.\n",
    "# The engine will set the scene scale and rescale added meshes according to the scene size\n",
    "engine = Engine3DGRUT(\n",
    "    gs_object=gs_env,\n",
    "    mesh_assets_folder=mesh_assets_folder,\n",
    "    envmap_assets_folder=envmap_assets_folder,\n",
    "    default_config=default_config\n",
    ")\n",
    "\n",
     "# Load another 3dgrt Gaussian object, and translate it up along the z axis to position it above the table.\n",
     "# Then concatenate both Gaussian sets into a single 3dgrut object the engine can render.\n",
     "with torch.no_grad():\n",
     "    env_mog = engine.scene_mog\n",
     "    object_mog, _ = engine.load_3dgrt_object(gs_object, config_name=default_config)\n",
     "    object_mog.positions[:, 2] = object_mog.positions[:, 2] + 1.3\n",
    "    engine.scene_mog = join_gaussians(env_mog, object_mog)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "1c2dc17e-1eea-488a-a185-65741be491cc",
   "metadata": {},
   "source": [
     "Then we'll remove the default sphere primitive and load a custom mesh primitive instead.\n",
     "\n",
     "ℹ️ **Kaolin** is used internally within `Engine3DGRUT` to load the sample meshes with their materials.\n",
     "Kaolin saves precious time dealing with glTF format shenanigans: with a few lines of code, the preprocessed mesh is loaded into tensors compatible with the 3dgrut path tracer. <br> See here: https://github.com/nv-tlabs/3dgrut/blob/main/threedgrut_playground/utils/mesh_io.py#L132"
   ]
  },
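  {
   "cell_type": "markdown",
   "id": "3f2a1b0c-5d6e-4f70-8a91-b2c3d4e5f601",
   "metadata": {},
   "source": [
    "To build intuition for what a mesh importer hands to the path tracer, here is a toy, self-contained sketch (NOT Kaolin's actual implementation -- see `mesh_io.py` above for the real one): a minimal OBJ parser that boils a mesh down to vertex and face arrays, the same kind of tensors the 3dgrut path tracer consumes."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "7c8d9e0f-1a2b-4c3d-9e4f-a5b6c7d8e902",
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "# Toy illustration only: Kaolin's importers handle obj/glb/gltf including materials.\n",
    "# This stripped-down parser just shows the end result: geometry as plain arrays.\n",
    "OBJ_TEXT = \"\"\"\n",
    "v 0.0 0.0 0.0\n",
    "v 1.0 0.0 0.0\n",
    "v 0.0 1.0 0.0\n",
    "f 1 2 3\n",
    "\"\"\"\n",
    "\n",
    "def parse_obj(text):\n",
    "    vertices, faces = [], []\n",
    "    for line in text.splitlines():\n",
    "        parts = line.split()\n",
    "        if not parts:\n",
    "            continue\n",
    "        if parts[0] == 'v':\n",
    "            vertices.append([float(x) for x in parts[1:4]])\n",
    "        elif parts[0] == 'f':\n",
    "            # OBJ face indices are 1-based; keep only the vertex index per corner\n",
    "            faces.append([int(p.split('/')[0]) - 1 for p in parts[1:4]])\n",
    "    return np.asarray(vertices, dtype=np.float32), np.asarray(faces, dtype=np.int64)\n",
    "\n",
    "vertices, faces = parse_obj(OBJ_TEXT)\n",
    "print(vertices.shape, faces.shape)  # (3, 3) (1, 3)"
   ]
  },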
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "d0a908e4-535c-4af5-a80f-7121224d9c2c",
   "metadata": {},
   "outputs": [],
   "source": [
    "from threedgrut_playground.engine import OptixPrimitiveTypes\n",
    "\n",
    "# The engine is loaded with a sample mesh primitive (glass sphere),\n",
    "# remove the default mesh primitives from the engine here\n",
    "for mesh_name in list(engine.primitives.objects.keys()):\n",
    "    engine.primitives.remove_primitive(mesh_name)\n",
    "\n",
     "# Instead, we'll add another mesh object to the scene.\n",
     "# For now, we load and display it with a simple Lambertian shader.\n",
     "# To respect the original look, we'll later display it with physically based rendering (PBR).\n",
    "print(f'Available meshes: {list(engine.primitives.assets.keys())}')\n",
    "engine.primitives.add_primitive(\n",
    "    geometry_type='Spray_bottle',\n",
    "    primitive_type=OptixPrimitiveTypes.DIFFUSE, # Lambertian\n",
    "    device='cuda'\n",
    ")\n",
    "\n",
    "# Take note of the mesh name, we'll need it to reference this primitive through the engine\n",
    "mesh_name = list(engine.primitives.objects.keys())[0]\n",
    "# Reference the mesh object, as OptixPrimitive, to quickly access the geometry and transforms later\n",
    "prim = engine.primitives.objects[mesh_name]\n",
    "\n",
    "# Position the mesh nicely within the scene, i.e. above the table\n",
    "engine.primitives.objects[mesh_name].transform.tx += 0.95\n",
    "engine.primitives.objects[mesh_name].transform.tz += 1.5\n",
    "engine.primitives.objects[mesh_name].transform.rz += 25.0\n",
    "engine.primitives.objects[mesh_name].transform.ry += 20.0\n",
    "\n",
     "# Configure some initial rendering settings - low quality, for fast interaction\n",
    "engine.camera_type = 'Pinhole'\n",
    "engine.camera_fov = 60.0\n",
    "engine.use_spp = False                # Disable antialiasing for now\n",
    "engine.antialiasing_mode = '4x MSAA'  # Set the default antialiasing to 4x MSAA, if enabled later\n",
    "engine.use_optix_denoiser = False     # Disable Optix denoiser\n",
    "\n",
    "# Set a HDR envmap as background, to provide some light to our mesh\n",
    "engine.environment.set_env('drackenstein_quarry_puresky_4k.hdr')\n",
     "engine.environment.ibl_intensity = 0.60\n",
     "engine.environment.exposure = -0.04\n",
    "engine.environment.envmap_offset = [-0.5, -0.25]\n",
    "\n",
    "# Finally, let engine know it should refresh internal structures due to changes\n",
    "engine.invalidate_materials_on_gpu()  # Sync newly loaded materials (loaded with the mesh)\n",
    "engine.rebuild_bvh(engine.scene_mog)                 # Rebuild gaussians BVH\n",
    "engine.primitives.rebuild_bvh_if_needed(True, True)  # Rebuild meshes BVH"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "d5a8b6dd-9b3c-4d8a-8900-bc5370375732",
   "metadata": {
    "scrolled": true
   },
   "outputs": [],
   "source": [
    "# Uncomment to peek at documentation for further details about the 3dgrut rendering engine\n",
    "# engine?"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "156bb339-b182-4269-a244-a4ba6f5915b2",
   "metadata": {},
   "source": [
    "For the sake of the tutorial, we'll create some useful functions that quickly let us switch between fast, medium, and high quality settings.\n",
    "\n",
     "* Fast mode is useful for interacting with this notebook.\n",
     "* High quality mode is useful for, e.g., rendering and exporting a video.\n",
     "* Medium quality mode strikes a balance between the two, and allows for a quick preview of the high quality settings."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "d11bed53-653f-4ad1-af00-bf4a475fb079",
   "metadata": {},
   "outputs": [],
   "source": [
    "def set_fast_quality_mode():\n",
    "    engine.use_spp = False\n",
    "    engine.antialiasing_mode = '4x MSAA'\n",
    "    engine.spp.mode = 'msaa'\n",
    "    engine.spp.spp = 4\n",
    "    engine.spp.reset_accumulation()\n",
    "    engine.use_optix_denoiser = False\n",
    "\n",
    "    # Use simple Lambertian shading for all mesh objects\n",
    "    for prim in engine.primitives.objects.values():\n",
     "        prim.primitive_type = OptixPrimitiveTypes(OptixPrimitiveTypes.DIFFUSE)\n",
    "\n",
    "def set_medium_quality_mode():\n",
    "    engine.use_spp = True\n",
    "    engine.antialiasing_mode = '8x MSAA'\n",
    "    engine.spp.mode = 'msaa'\n",
     "    engine.spp.spp = 8\n",
    "    engine.spp.reset_accumulation()\n",
    "    engine.use_optix_denoiser = True\n",
    "\n",
    "    # Use Path Tracing for all mesh objects\n",
    "    for prim in engine.primitives.objects.values():\n",
     "        prim.primitive_type = OptixPrimitiveTypes(OptixPrimitiveTypes.PBR)\n",
    "\n",
    "def set_high_quality_mode():\n",
    "    # Make sure engine renders high quality frames\n",
    "    engine.use_spp = True\n",
    "    engine.antialiasing_mode = 'Quasi-Random (Sobol)'\n",
    "    engine.spp.mode = 'low_discrepancy_seq'\n",
    "    engine.spp.spp = 64\n",
    "    engine.spp.reset_accumulation()\n",
    "    engine.use_optix_denoiser = True\n",
    "\n",
    "    # Use Path Tracing for all mesh objects\n",
    "    for prim in engine.primitives.objects.values():\n",
     "        prim.primitive_type = OptixPrimitiveTypes(OptixPrimitiveTypes.PBR)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f36e34fe-9ed1-4946-aaa4-f72ecbc6bba5",
   "metadata": {},
   "source": [
    "Test the engine we loaded -- render a single frame:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "9523c1aa-7c25-45bc-8541-99e0ddf3c0d0",
   "metadata": {},
   "outputs": [],
   "source": [
    "from PIL import Image\n",
    "\n",
    "# Render a frame from the engine\n",
    "camera = kaolin.render.easy_render.default_camera(512)\n",
     "renderbuffer = engine.render(camera)\n",
    "\n",
    "# Convert to PIL so Jupyter can display it\n",
    "rgb_buffer = renderbuffer['rgb']\n",
    "rgb_buffer = (rgb_buffer[0] * 255).to(torch.uint8)\n",
    "image = Image.fromarray(rgb_buffer.cpu().numpy())\n",
    "\n",
    "display(image)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "076186a6-411e-40d5-89f1-5b28510d3235",
   "metadata": {},
   "source": [
    "## Interactive Renderer"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "962fbac1-2f09-4624-bde1-71139c23dedb",
   "metadata": {},
   "source": [
     "Rendering is more fun when it becomes interactive! Hence, next we'll add some camera controls."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "201b49b4-1090-42f0-a3b3-fb862e9a50ec",
   "metadata": {},
   "source": [
     "ℹ️ Here we use **Kaolin**'s \"easy render\" visualizer to drive the 3dgrut engine. <br>\n",
     "The visualizer is a quick way to render frames and interact with the camera within the Jupyter notebook.\n",
    "\n",
    "Internally, the 3dgrut engine uses NVIDIA Slang & Optix to quickly ray trace gaussians and meshes within the scene."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "6f21d042-27ff-4353-a957-0992d5b9364e",
   "metadata": {},
   "outputs": [],
   "source": [
     "# Start by setting up a Kaolin camera object --\n",
    "\n",
    "def reset_camera(_camera):\n",
    "    # Position the Kaolin camera to observe the table\n",
    "    _camera.update(torch.tensor(\n",
    "        [[-0.51,   0.86,  0.00,  0.00],\n",
    "         [-0.25,  -0.14,  0.96,  0.02],\n",
    "         [ 0.83,   0.49,  0.29, -7.31],\n",
    "         [ 0.00,   0.00,  0.00,  1.00]],\n",
    "        dtype=_camera.dtype, device=_camera.device\n",
    "    ))\n",
    "    \n",
    "# Create initial camera\n",
    "camera = kaolin.render.easy_render.default_camera(512)\n",
    "# Set it to Blender coordinates (Z axis pointing upwards)\n",
    "camera.change_coordinate_system(\n",
    "    torch.tensor([[1, 0, 0],\n",
    "                  [0, 0, 1],\n",
    "                  [0, -1, 0]]\n",
    "))\n",
    "camera = camera.cuda()\n",
    "# ..and position above objects of interest\n",
    "reset_camera(camera)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "88a460e9-e569-4058-a4cf-1cec5f1e2f10",
   "metadata": {},
   "source": [
    "Kaolin's Visualizer requires two rendering functions:\n",
    "* `fast_render()` - called during user interactions. Should be able to return frames quickly.\n",
    "* `render()` - called when the user stops interacting with the scene to generate higher quality frames.\n",
    "\n",
    "These functions integrate well with 3dgrut's engine:\n",
    "We'll set `fast_render()` to render a single-pass frame, without fancy effects. <br>\n",
    "Then `render()` will issue a multi-pass frame, where more complex effects like multisampling antialiasing and depth-of-field can be rendered."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "ba452ae0-3cb0-4803-857a-9325d7e506ae",
   "metadata": {},
   "outputs": [],
   "source": [
    "def fast_render(in_cam, **kwargs):\n",
    "    # Called during interactions, disables effects for quick rendering\n",
    "    framebuffer = engine.render_pass(in_cam, is_first_pass=True)\n",
    "    # Alpha channel is the opacity output map from the renderer\n",
    "    if engine.environment.is_ignore_envmap():\n",
    "        alpha = framebuffer['opacity']\n",
    "    else: # However if using an env map - alpha channel is always 1, since we render the background\n",
    "        alpha = torch.ones_like(framebuffer['opacity'])\n",
    "    # Read back the rendered rgb buffer and convert to a format supported by the visualizer\n",
    "    rgba_buffer = torch.cat([framebuffer['rgb'], alpha], dim=-1)\n",
    "    rgba_buffer = torch.clamp(rgba_buffer, 0.0, 1.0)\n",
    "    return (rgba_buffer[0] * 255).to(torch.uint8) # Convert to RGBA image of uint8 pixels of [0,255]\n",
    "\n",
    "\n",
    "def render(in_cam, **kwargs):\n",
    "    # Called when the user stops interacting, to generate a high quality frame\n",
    "    is_first_pass = engine.is_dirty(in_cam)\n",
     "    framebuffer = engine.render_pass(in_cam, is_first_pass=is_first_pass)\n",
    "    # Note: this loop will stall until all passes are done\n",
    "    while engine.has_progressive_effects_to_render():\n",
    "        framebuffer = engine.render_pass(in_cam, is_first_pass=False)\n",
    "    # Alpha channel is the opacity output map from the renderer\n",
    "    if engine.environment.is_ignore_envmap():\n",
    "        alpha = framebuffer['opacity']\n",
    "    else: # However if using an env map - alpha channel is always 1, since we render the background\n",
    "        alpha = torch.ones_like(framebuffer['opacity'])\n",
    "    # Read back the rendered rgb buffer and convert to a format supported by the visualizer\n",
    "    rgba_buffer = torch.cat([framebuffer['rgb'], alpha], dim=-1)\n",
    "    rgba_buffer = torch.clamp(rgba_buffer, 0.0, 1.0)\n",
    "    return (rgba_buffer[0] * 255).to(torch.uint8) # Convert to RGBA image of uint8 pixels of [0,255]\n",
    "\n",
    "\n",
    "# Initialize the visualizer with the 2 render functions above, and a camera object.\n",
    "visualizer = kaolin.visualize.IpyTurntableVisualizer(\n",
    "    height=camera.height,\n",
    "    width=camera.width,\n",
    "    camera=camera,\n",
    "    render=render,\n",
    "    fast_render=fast_render,\n",
    "    max_fps=8,\n",
    "    world_up_axis=2\n",
    ")\n",
     "# Without a GUI, showing the interactive renderer is as simple as:\n",
     "# visualizer.show()\n",
     "\n",
     "# But we want to include some widgets, so we'll add some extra code below"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "3df62c2c-07be-4e9d-8ef8-489dae2c391d",
   "metadata": {},
   "source": [
    "## Setup GUI Widgets"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "724c855f-b698-4140-9ed5-d702f05909f7",
   "metadata": {},
   "source": [
     "The next blocks are a technical part that adds ipywidgets, so users can control the visualizer from the notebook -- feel free to skip the details."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "cc12e867-e6f8-4116-867f-6170523b2e5d",
   "metadata": {},
   "source": [
    "---------------"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "b3cae399-8be1-4de4-8b29-91c0ac6640ce",
   "metadata": {},
   "source": [
    "#### Renderer Settings Widgets"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f76eb3e3-b296-4de4-a0d1-5d30116ab70b",
   "metadata": {},
   "outputs": [],
   "source": [
    "import ipywidgets as widgets\n",
    "from IPython.display import display\n",
    "\n",
    "separator = widgets.HTML(\n",
    "    value='<hr style=\"border: 1px dashed #ccc; margin: 10px 0;\">',\n",
    "    layout=widgets.Layout(width='100%')\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "618c1ba356ce563c",
   "metadata": {},
   "outputs": [],
   "source": [
    "lq_button = widgets.Button(description=\"🚗💨 Fast Settings\")\n",
    "mq_button = widgets.Button(description=\"⚖️ Balanced Settings\")\n",
    "hq_button = widgets.Button(description=\"💎 High Quality Settings\")\n",
    "\n",
    "aa_checkbox = widgets.Checkbox(\n",
    "    value=engine.use_spp,\n",
    "    description='✨ Use Antialiasing'\n",
    ")\n",
    "\n",
    "aa_mode_combo = widgets.Dropdown(\n",
    "    options=engine.ANTIALIASING_MODES,\n",
    "    value='4x MSAA',\n",
    "    description='Antialiasing Mode'\n",
    ")\n",
    "\n",
    "denoiser_checkbox = widgets.Checkbox(\n",
    "    value=engine.use_optix_denoiser,\n",
    "    description='🧠 Use Optix Denoiser'\n",
    ")\n",
    "\n",
    "spp_slider = widgets.IntSlider(\n",
    "    value=engine.spp.spp,\n",
    "    min=1,\n",
    "    max=64,\n",
    "    step=1,\n",
    "    orientation='horizontal',\n",
    "    description='Antialiasing SPP',\n",
    "    disabled=(engine.spp.mode == 'msaa')\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "cc148ccf-ee2f-4b30-b937-ada22fc5c8bf",
   "metadata": {},
   "outputs": [],
   "source": [
    "envmap_combo = widgets.Dropdown(\n",
    "    options=engine.environment.available_envmaps,\n",
    "    value=engine.environment.current_name,\n",
    "    description=\"🌅 Environment Map\"\n",
    ")\n",
    "\n",
    "ibl_intensity_slider = widgets.FloatLogSlider(\n",
    "    value=engine.environment.ibl_intensity,\n",
    "    min=-3,\n",
    "    max=3,\n",
    "    step=0.01,\n",
    "    orientation='horizontal',\n",
    "    description='💡 IBL Intensity',\n",
    "    disabled=(engine.environment.is_ignore_envmap())\n",
    ")\n",
    "\n",
    "exposure_slider = widgets.FloatSlider(\n",
    "    value=engine.environment.exposure,\n",
    "    min=-10.0,\n",
    "    max=10.0,\n",
    "    step=0.01,\n",
    "    orientation='horizontal',\n",
    "    description='📸 Exposure',\n",
    "    disabled=(engine.environment.is_ignore_envmap())\n",
    ")\n",
    "\n",
    "env_offset_theta_slider = widgets.FloatSlider(\n",
    "    value=engine.environment.envmap_offset[0],\n",
    "    min=-0.5,\n",
    "    max=+0.5,\n",
    "    step=0.01,\n",
    "    orientation='horizontal',\n",
    "    description='Offset θ',\n",
    "    continuous_update=True,\n",
    "    readout=True,\n",
    "    readout_format='.3f',\n",
    "    disabled=(engine.environment.is_ignore_envmap())\n",
    ")\n",
    "\n",
    "env_offset_phi_slider = widgets.FloatSlider(\n",
    "    value=engine.environment.envmap_offset[1],\n",
    "    min=-0.5,\n",
    "    max=+0.5,\n",
    "    step=0.01,\n",
    "    orientation='horizontal',\n",
    "    description='Offset φ',\n",
    "    continuous_update=True,\n",
    "    readout=True,\n",
    "    readout_format='.3f',\n",
    "    disabled=(engine.environment.is_ignore_envmap())\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "7e0268fd-cb4d-4281-8b55-06130e70b9d0",
   "metadata": {},
   "outputs": [],
   "source": [
    "def on_render_setting_change(change):\n",
    "    engine.use_spp = aa_checkbox.value\n",
    "\n",
    "    engine.antialiasing_mode = aa_mode_combo.value\n",
    "    if engine.antialiasing_mode == '4x MSAA':\n",
    "        engine.spp.mode = 'msaa'\n",
    "        engine.spp.spp = 4\n",
    "    elif engine.antialiasing_mode == '8x MSAA':\n",
    "        engine.spp.mode = 'msaa'\n",
    "        engine.spp.spp = 8\n",
    "    elif engine.antialiasing_mode == '16x MSAA':\n",
    "        engine.spp.mode = 'msaa'\n",
    "        engine.spp.spp = 16\n",
    "    elif engine.antialiasing_mode == 'Quasi-Random (Sobol)':\n",
    "        engine.spp.mode = 'low_discrepancy_seq'\n",
    "        engine.spp.spp = spp_slider.value\n",
    "    else:\n",
    "        raise ValueError('unknown antialiasing mode')\n",
    "\n",
    "    engine.spp.reset_accumulation()\n",
    "    engine.use_optix_denoiser = denoiser_checkbox.value\n",
    "\n",
    "    spp_slider.value = engine.spp.spp\n",
    "    spp_slider.disabled = (engine.spp.mode == 'msaa')\n",
    "    visualizer.render_update()\n",
    "\n",
    "def on_envmap_change(change):\n",
    "    engine.environment.tonemapper = 'None'\n",
    "    engine.environment.set_env(envmap_combo.value)\n",
    "    engine.environment.ibl_intensity = ibl_intensity_slider.value\n",
    "    engine.environment.exposure = exposure_slider.value\n",
    "    engine.environment.envmap_offset[0] = env_offset_theta_slider.value\n",
    "    engine.environment.envmap_offset[1] = env_offset_phi_slider.value\n",
    "\n",
    "    exposure_slider.disabled = engine.environment.is_ignore_envmap()\n",
    "    env_offset_theta_slider.disabled = engine.environment.is_ignore_envmap()\n",
    "    env_offset_phi_slider.disabled = engine.environment.is_ignore_envmap()\n",
    "    visualizer.render_update()\n",
    "\n",
    "aa_checkbox.observe(on_render_setting_change, names='value')\n",
    "aa_mode_combo.observe(on_render_setting_change, names='value')\n",
    "denoiser_checkbox.observe(on_render_setting_change, names='value')\n",
    "spp_slider.observe(on_render_setting_change, names='value')\n",
    "\n",
    "envmap_combo.observe(on_envmap_change, names='value')\n",
    "ibl_intensity_slider.observe(on_envmap_change, names='value')\n",
    "exposure_slider.observe(on_envmap_change, names='value')\n",
    "env_offset_theta_slider.observe(on_envmap_change, names='value')\n",
    "env_offset_phi_slider.observe(on_envmap_change, names='value')\n",
    "\n",
    "quality_buttons = widgets.HBox([lq_button, mq_button, hq_button])\n",
    "renderer_controls = widgets.VBox([quality_buttons, separator,\n",
    "                                  denoiser_checkbox, aa_checkbox, aa_mode_combo, spp_slider, separator, \n",
    "                                  envmap_combo, ibl_intensity_slider, exposure_slider, env_offset_theta_slider, env_offset_phi_slider])"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "5d0602fc-1b0d-4292-8d6d-e01fac9f7071",
   "metadata": {},
   "source": [
    "#### Object Transform Widgets"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "8c8b5ef8-94fa-4aca-9367-c89698f7f6b9",
   "metadata": {},
   "outputs": [],
   "source": [
    "_object_controls = []\n",
    "shader_type_controls = []\n",
    "prim_transform_controls = {}\n",
    "transform_events_on = True\n",
    "\n",
    "def reset_transform_widgets(prim_widgets, transform):\n",
    "    prim_widgets['pos_x_slider'].value = transform.tx\n",
    "    prim_widgets['pos_y_slider'].value = transform.ty\n",
    "    prim_widgets['pos_z_slider'].value = transform.tz\n",
    "    # Set rotation directly using the rx, ry, rz properties\n",
    "    prim_widgets['rot_x_slider'].value = transform.rx\n",
    "    prim_widgets['rot_y_slider'].value = transform.ry\n",
    "    prim_widgets['rot_z_slider'].value = transform.rz \n",
    "    # Set scale directly using the sx, sy, sz properties\n",
    "    prim_widgets['scale_slider'].value = transform.sx\n",
    "\n",
    "def reset_transform_widgets_to_engine_values():\n",
    "    global transform_events_on\n",
    "    transform_events_on = False\n",
    "    \n",
    "    for prim_name, prim in engine.primitives.objects.items():\n",
    "        transform = engine.primitives.objects[prim_name].transform\n",
    "\n",
    "        if prim_name in prim_transform_controls:\n",
    "            reset_transform_widgets(prim_transform_controls[prim_name], transform)\n",
    "        else:\n",
     "            logger.warning(f'GUI controls for engine object {prim_name} not found')\n",
    "    transform_events_on = True"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "79ea7395-9c4c-4284-9259-5fcb76311b38",
   "metadata": {},
   "outputs": [],
   "source": [
    "for prim_name, prim in engine.primitives.objects.items():\n",
    "\n",
    "    prim_widgets = {\n",
    "        'title': widgets.Label(value=f'🏺 {prim_name}')\n",
    "    }\n",
    "\n",
     "    # Bind loop variables via default arguments, so each callback targets its own primitive\n",
     "    def on_shader_change(change, prim=prim, prim_widgets=prim_widgets):\n",
    "        if prim_widgets['type'].value == 'Lambertian':\n",
    "            prim.primitive_type = OptixPrimitiveTypes(OptixPrimitiveTypes.DIFFUSE)\n",
    "        elif prim_widgets['type'].value == 'Cook Torrance':\n",
    "            prim.primitive_type = OptixPrimitiveTypes(OptixPrimitiveTypes.PBR)\n",
    "        engine.primitives.recompute_stacked_buffers()\n",
    "        engine.primitives.rebuild_bvh_if_needed(True, True)\n",
    "        visualizer.render_update()\n",
    "    \n",
    "    prim_type_combo = widgets.Dropdown(\n",
    "        options=['Lambertian', 'Cook Torrance'],\n",
    "        value='Lambertian',\n",
    "        description='Shader'\n",
    "    )\n",
    "    prim_type_combo.observe(on_shader_change, names='value')\n",
    "    prim_widgets['type'] = prim_type_combo\n",
    "    shader_type_controls.append([prim_type_combo, on_shader_change])\n",
    "    \n",
     "    # Bind loop variables via default arguments, so each callback targets its own primitive\n",
     "    def update_transform(change, prim_name=prim_name, prim_widgets=prim_widgets):\n",
    "        # # Reset transform\n",
    "        # engine.primitives.objects[prim_name].transform.reset()\n",
    "\n",
    "        global transform_events_on\n",
    "        if not transform_events_on:\n",
    "            return\n",
    "        \n",
    "        # Set position directly using the tx, ty, tz properties\n",
    "        engine.primitives.objects[prim_name].transform.tx = prim_widgets['pos_x_slider'].value\n",
    "        engine.primitives.objects[prim_name].transform.ty = prim_widgets['pos_y_slider'].value\n",
    "        engine.primitives.objects[prim_name].transform.tz = prim_widgets['pos_z_slider'].value\n",
    "        # Set rotation directly using the rx, ry, rz properties\n",
    "        engine.primitives.objects[prim_name].transform.rx = prim_widgets['rot_x_slider'].value\n",
    "        engine.primitives.objects[prim_name].transform.ry = prim_widgets['rot_y_slider'].value\n",
    "        engine.primitives.objects[prim_name].transform.rz = prim_widgets['rot_z_slider'].value\n",
    "        # Set scale directly using the sx, sy, sz properties\n",
    "        engine.primitives.objects[prim_name].transform.sx = prim_widgets['scale_slider'].value\n",
    "        engine.primitives.objects[prim_name].transform.sy = prim_widgets['scale_slider'].value\n",
    "        engine.primitives.objects[prim_name].transform.sz = prim_widgets['scale_slider'].value\n",
    "        \n",
    "        # Rebuild BVH if needed\n",
    "        engine.primitives.rebuild_bvh_if_needed(force=True, rebuild=True)\n",
    "        visualizer.render_update()\n",
    "    \n",
    "    for xyz in ['x', 'y', 'z']:\n",
    "        prim_widgets[f'pos_{xyz}_slider'] = widgets.FloatSlider(\n",
    "            value=getattr(prim.transform, f't{xyz}'),\n",
    "            min=-5.0,\n",
    "            max=5.0,\n",
    "            step=0.1,\n",
    "            description=f'Pos {xyz}:',\n",
    "            continuous_update=True,\n",
    "            readout=True,\n",
    "            readout_format='.3f',\n",
    "        )\n",
    "    for xyz in ['x', 'y', 'z']:\n",
    "        prim_widgets[f'rot_{xyz}_slider'] = widgets.FloatSlider(\n",
    "            value=getattr(prim.transform, f'r{xyz}'),\n",
    "            min=-180.0,\n",
    "            max=180.0,\n",
    "            step=0.1,\n",
    "            description=f'Rotation {xyz}:',\n",
    "            continuous_update=True,\n",
    "            readout=True,\n",
    "            readout_format='.3f',\n",
    "        )\n",
    "\n",
    "    prim_widgets['scale_slider'] = widgets.FloatSlider(\n",
    "            value=prim.transform.sx,\n",
    "            min=1e-3,\n",
    "            max=8,\n",
    "            step=0.001,\n",
    "            description='Scale:',\n",
    "            continuous_update=True,\n",
    "            readout=True,\n",
    "            readout_format='.3f',\n",
    "    )\n",
    "    for slider in prim_widgets.values():\n",
    "        slider.observe(update_transform, names='value')\n",
    "    prim_transform_controls[prim_name] = prim_widgets  # dicts are stored by reference, so later updates are visible\n",
    "    _object_controls.append(widgets.VBox(tuple(prim_widgets.values())))\n",
    "\n",
    "object_controls = widgets.VBox(_object_controls)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "aaf65de4-423c-4656-8ebd-04c9cb8eaf91",
   "metadata": {},
   "outputs": [],
   "source": [
    "def on_quality_change(quality):    \n",
    "    aa_checkbox.unobserve(on_render_setting_change, names='value')\n",
    "    aa_mode_combo.unobserve(on_render_setting_change, names='value')\n",
    "    denoiser_checkbox.unobserve(on_render_setting_change, names='value')\n",
    "    spp_slider.unobserve(on_render_setting_change, names='value')\n",
    "\n",
    "    for control, event in shader_type_controls:\n",
    "        control.unobserve(event, names='value')\n",
    "        if quality in ('medium', 'high'):\n",
    "            control.value = 'Cook Torrance'\n",
    "        else:\n",
    "            control.value = 'Lambertian'\n",
    "    \n",
    "    aa_checkbox.value = engine.use_spp\n",
    "    denoiser_checkbox.value = engine.use_optix_denoiser \n",
    "    aa_mode_combo.value = engine.antialiasing_mode\n",
    "    spp_slider.disabled = (engine.spp.mode == 'msaa')\n",
    "    spp_slider.value = engine.spp.spp\n",
    "\n",
    "    aa_checkbox.observe(on_render_setting_change, names='value')\n",
    "    aa_mode_combo.observe(on_render_setting_change, names='value')\n",
    "    denoiser_checkbox.observe(on_render_setting_change, names='value')\n",
    "    spp_slider.observe(on_render_setting_change, names='value')\n",
    "\n",
    "    for control, event in shader_type_controls:\n",
    "        control.observe(event, names='value')\n",
    "    \n",
    "    engine.primitives.recompute_stacked_buffers()\n",
    "    engine.primitives.rebuild_bvh_if_needed(True, True)\n",
    "    visualizer.render_update()\n",
    "\n",
    "def on_fast_quality_btn(b):\n",
    "    set_fast_quality_mode()\n",
    "    on_quality_change('fast')\n",
    "\n",
    "def on_medium_quality_btn(b):\n",
    "    set_medium_quality_mode()\n",
    "    on_quality_change('medium')\n",
    "\n",
    "def on_high_quality_btn(b):\n",
    "    set_high_quality_mode()\n",
    "    on_quality_change('high')\n",
    "\n",
    "lq_button.on_click(on_fast_quality_btn)\n",
    "mq_button.on_click(on_medium_quality_btn)\n",
    "hq_button.on_click(on_high_quality_btn)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "9c79ba4d-1d45-4477-8b6e-c0c8797b7bad",
   "metadata": {},
   "source": [
    "#### Materials Widget"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b47862e9-7de4-4785-ba68-1463e8d01f0d",
   "metadata": {},
   "outputs": [],
   "source": [
    "from matplotlib.colors import to_rgb, to_hex\n",
    "\n",
    "def get_gui_rgb(color_picker):\n",
    "    return to_rgb(color_picker.value)\n",
    "def set_gui_rgb(color_picker, rgb):\n",
    "    r, g, b = rgb[0].cpu().item(), rgb[1].cpu().item(), rgb[2].cpu().item()\n",
    "    color_picker.value = to_hex([r, g, b])\n",
    "    \n",
    "available_pbr_materials = list(engine.primitives.registered_materials.keys())\n",
    "selected_pbr_material = available_pbr_materials[-1]\n",
    "\n",
    "pbr_materials_combo = widgets.Dropdown(\n",
    "    options=list(engine.primitives.registered_materials.keys()),\n",
    "    value=selected_pbr_material,\n",
    "    description='🎨 PBR Material'\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "a7b519c3-29e5-444e-bace-a7b513235d89",
   "metadata": {},
   "outputs": [],
   "source": [
    "diffuse_factor = engine.primitives.registered_materials[selected_pbr_material].diffuse_factor\n",
    "diffuse_picker = widgets.ColorPicker(\n",
    "    concise=True,\n",
    "    description='Diffuse',\n",
    "    value=to_hex([diffuse_factor[0].cpu().item(), diffuse_factor[1].cpu().item(), diffuse_factor[2].cpu().item()]),\n",
    "    disabled=False\n",
    ")\n",
    "\n",
    "emissive_factor = engine.primitives.registered_materials[selected_pbr_material].emissive_factor\n",
    "emissive_picker = widgets.ColorPicker(\n",
    "    concise=True,\n",
    "    description='Emissive',\n",
    "    value=to_hex([emissive_factor[0].cpu().item(), emissive_factor[1].cpu().item(), emissive_factor[2].cpu().item()]),\n",
    "    disabled=False\n",
    ")\n",
    "\n",
    "metallic_slider = widgets.FloatSlider(\n",
    "    value=engine.primitives.registered_materials[selected_pbr_material].metallic_factor,\n",
    "    min=0.0,\n",
    "    max=1.0,\n",
    "    step=0.01,\n",
    "    orientation='horizontal',\n",
    "    description='Metalness',\n",
    "    continuous_update=True,\n",
    "    readout=True,\n",
    "    readout_format='.3f'\n",
    ")\n",
    "\n",
    "roughness_slider = widgets.FloatSlider(\n",
    "    value=engine.primitives.registered_materials[selected_pbr_material].roughness_factor,\n",
    "    min=0.0,\n",
    "    max=1.0,\n",
    "    step=0.01,\n",
    "    orientation='horizontal',\n",
    "    description='Roughness',\n",
    "    continuous_update=True,\n",
    "    readout=True,\n",
    "    readout_format='.3f'\n",
    ")\n",
    "\n",
    "ior_slider = widgets.FloatSlider(\n",
    "    value=engine.primitives.registered_materials[selected_pbr_material].ior,\n",
    "    min=1.0,\n",
    "    max=3.0,\n",
    "    step=0.01,\n",
    "    orientation='horizontal',\n",
    "    description='IOR',\n",
    "    continuous_update=True,\n",
    "    readout=True,\n",
    "    readout_format='.3f'\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "cd7e3116-d37c-4378-a069-8c198c842ea0",
   "metadata": {},
   "outputs": [],
   "source": [
    "def on_selected_material_change(change):\n",
    "    selected_pbr_material = pbr_materials_combo.value\n",
    "    set_gui_rgb(diffuse_picker, engine.primitives.registered_materials[selected_pbr_material].diffuse_factor)\n",
    "    set_gui_rgb(emissive_picker, engine.primitives.registered_materials[selected_pbr_material].emissive_factor)\n",
    "    metallic_slider.value = engine.primitives.registered_materials[selected_pbr_material].metallic_factor\n",
    "    roughness_slider.value = engine.primitives.registered_materials[selected_pbr_material].roughness_factor\n",
    "    ior_slider.value = engine.primitives.registered_materials[selected_pbr_material].ior\n",
    "    visualizer.render_update()\n",
    "\n",
    "def on_material_factor_change(change):\n",
    "    selected_pbr_material = pbr_materials_combo.value\n",
    "    diffuse_factor = get_gui_rgb(diffuse_picker)\n",
    "    emissive_factor = get_gui_rgb(emissive_picker)\n",
    "    engine.primitives.registered_materials[selected_pbr_material].diffuse_factor[0] = diffuse_factor[0]\n",
    "    engine.primitives.registered_materials[selected_pbr_material].diffuse_factor[1] = diffuse_factor[1]\n",
    "    engine.primitives.registered_materials[selected_pbr_material].diffuse_factor[2] = diffuse_factor[2]\n",
    "    engine.primitives.registered_materials[selected_pbr_material].emissive_factor[0] = emissive_factor[0]\n",
    "    engine.primitives.registered_materials[selected_pbr_material].emissive_factor[1] = emissive_factor[1]\n",
    "    engine.primitives.registered_materials[selected_pbr_material].emissive_factor[2] = emissive_factor[2]\n",
    "    engine.primitives.registered_materials[selected_pbr_material].metallic_factor = metallic_slider.value\n",
    "    engine.primitives.registered_materials[selected_pbr_material].roughness_factor = roughness_slider.value\n",
    "    engine.primitives.registered_materials[selected_pbr_material].ior = ior_slider.value\n",
    "\n",
    "    engine.invalidate_materials_on_gpu()\n",
    "    visualizer.render_update()\n",
    "\n",
    "\n",
    "pbr_materials_combo.observe(on_selected_material_change, names='value')\n",
    "\n",
    "diffuse_picker.observe(on_material_factor_change, names='value')\n",
    "emissive_picker.observe(on_material_factor_change, names='value')\n",
    "metallic_slider.observe(on_material_factor_change, names='value')\n",
    "roughness_slider.observe(on_material_factor_change, names='value')\n",
    "ior_slider.observe(on_material_factor_change, names='value')\n",
    "\n",
    "material_controls = widgets.VBox([pbr_materials_combo, diffuse_picker, emissive_picker, metallic_slider, roughness_slider, ior_slider])"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "35a5984c-0ee9-40c3-b158-3e1cd9984f65",
   "metadata": {},
   "source": [
    "#### Build the GUI component"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "2f83035a-e503-464d-b2e7-7215d32ae8f6",
   "metadata": {},
   "outputs": [],
   "source": [
    "gui_tab = widgets.Tab()\n",
    "tab_renderer = widgets.Box([renderer_controls])\n",
    "tab_objects = widgets.Box([object_controls])\n",
    "tab_materials = widgets.Box([material_controls])\n",
    "gui_tab.children = [tab_renderer, tab_objects, tab_materials]\n",
    "gui_tab.set_title(0, 'Render Settings')\n",
    "gui_tab.set_title(1, 'Objects')\n",
    "gui_tab.set_title(2, 'PBR Materials')\n",
    "tab_renderer.layout = widgets.Layout(width='100%')\n",
    "gui_tab.layout = widgets.Layout(width='50%')"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "4a347e8b-e22f-4e00-9d48-68352e4bae96",
   "metadata": {},
   "source": [
    "----"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "534e2adb-9653-49a4-9b36-4c5ff7752389",
   "metadata": {},
   "source": [
    "### Display the Visualizer + GUI"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "3ee4e4f4-eff0-4dc8-85da-1a03f123dffb",
   "metadata": {},
   "source": [
    "Feel free to experiment around with the object and engine settings through the GUI."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "07eaf0cd-35b8-412b-8e14-89edae2033bf",
   "metadata": {},
   "outputs": [],
   "source": [
    "visualizer.render_update() # Render initial frame\n",
    "visualizer_box = widgets.HBox([visualizer.canvas, gui_tab])\n",
    "display(visualizer_box)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "96872542-8ad3-4ee3-bde3-92ad7d15b84d",
   "metadata": {},
   "source": [
    "Before continuing on to the next section, run the block below which prepares the mesh object."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "0c5a3600-c86a-473c-ada3-93de3f4eced3",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Once happy with the mesh position, we bake the transform and rewrite the mesh coordinates.\n",
    "# Later, simulation will take place in object coordinates\n",
    "# Since we transform the mesh in world coordinates, we set the world coordinates as the new object coordinates.\n",
    "baked_prim = prim.apply_transform()\n",
    "baked_prim.transform.reset()\n",
    "engine.primitives.objects[mesh_name] = baked_prim\n",
    "prim = baked_prim\n",
    "\n",
    "# GUI sliders in the UI below should reflect the new transform value\n",
    "reset_transform_widgets_to_engine_values()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "4f94fb77-f724-4a78-8daf-bb840a3b457e",
   "metadata": {},
   "source": [
    "# Simulation (Simplicits)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "3881f1b2-2c1f-49ed-a40b-27bff028e1f2",
   "metadata": {
    "jp-MarkdownHeadingCollapsed": true
   },
   "source": [
    "ℹ️ **Kaolin** includes an entire module for simulating objects, using the [Simplicits](https://research.nvidia.com/labs/toronto-ai/simplicits/) method. <br>\n",
    "The included implementation is further accelerated with NVIDIA Warp."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "5fd2b93f-e219-4666-8751-7af719747776",
   "metadata": {},
   "source": [
    "## Prerequisites"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "e96fb114-a735-4c8f-b060-2737192c0ba4",
   "metadata": {},
   "source": [
    "Earlier we joined the doll and table Gaussians into a single blob the engine can render.\n",
    "\n",
    "However, simulating the combined Gaussian scene requires segmentation: masking out which Gaussians belong to the doll and which to the table."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "0c6c7c1a-512c-479b-bc69-e8eb7d470db2",
   "metadata": {},
   "source": [
    "## Obtain Gaussian Segments"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "91515684-9046-41f8-ae1f-b44c70a6fc4a",
   "metadata": {},
   "outputs": [],
   "source": [
    "# The combined Gaussian object is ordered as: [Table Gaussian 1, ..., Table Gaussian N, Doll Gaussian 1, ..., Doll Gaussian N]\n",
    "table_mask = torch.zeros(len(env_mog) + len(object_mog), device=env_mog.device, dtype=torch.bool)\n",
    "table_mask[:len(env_mog)] = True\n",
    "\n",
    "doll_mask = torch.zeros(len(env_mog) + len(object_mog), device=env_mog.device, dtype=torch.bool)\n",
    "doll_mask[len(env_mog):] = True"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "e8e3878c-c49a-4c4f-8450-41215ce846eb",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Shapes we'll simulate\n",
    "doll_gaussians = engine.scene_mog[doll_mask]   # copy generated by indexing 3dgrut's MixtureOfGaussians model\n",
    "table_gaussians = engine.scene_mog[table_mask] # copy generated by indexing 3dgrut's MixtureOfGaussians model\n",
    "mesh = engine.primitives.objects[mesh_name]    # reference to OptixPrimitive\n",
    "\n",
    "# To use these shapes with Simplicits, we need to sample points on them.\n",
    "# As an approximation, we'll soon use the Gaussian means and mesh vertices.\n",
    "print()\n",
    "print(\"Objects we'll use with Simplicits: \\n ---------------------------------------- \")\n",
    "print(f'Doll is made of {len(doll_gaussians)} Gaussians')\n",
    "print(f'Table is made of {len(table_gaussians)} Gaussians')\n",
    "print(f'Mesh is made of {len(mesh.vertices)} vertices')"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f1a73fdf-3be7-4c79-924a-f6eebf4bfad5",
   "metadata": {},
   "source": [
    "## Material Parameters"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "00835f2f-4ae9-415d-bef0-853477e5db88",
   "metadata": {
    "jp-MarkdownHeadingCollapsed": true
   },
   "source": [
    "We know the geometry of our objects (or at least, how to sample \"cubature points\" on them). <br>\n",
    "Next we define some physical material properties for our objects, and some initial settings for Simplicits."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "4bda1979-2b6f-4b18-bf63-d2f4d2a6551d",
   "metadata": {},
   "outputs": [],
   "source": [
    "mesh_obj_args = { # Will be a rigid object\n",
    "    \"yms\" : 1e5,     # Lower Young's modulus: it's easier to converge, and a single handle will exhibit numerical stiffening\n",
    "    \"prs\" : 0.35,    # Metal has a Poisson's ratio of approximately 0.35\n",
    "    \"rhos\" : 500,    # Approximate density for the can of paint\n",
    "    \"appx_vol\" : 0.5 # Approximate volume for the paint can\n",
    "}\n",
    "\n",
    "doll_obj_args = { # Will be a trained, elastic object\n",
    "    \"yms\" : 1e5,                # The doll can be trained at a Young's modulus of 1e5; we can update this later, before simulation\n",
    "    \"prs\" : 0.45,               # Rubber-like Poisson's ratio for the doll\n",
    "    \"rhos\" : 500,               # This density works well for elastic deformation\n",
    "    \"appx_vol\" : 0.5,           # The volume is approximate; making it too low affects convergence at fp32\n",
    "    \"num_handles\" : 40,         # We want 40 control handles for a highly expressive object\n",
    "    \"model_layers\" : 10,        # Number of Simplicits MLP layers\n",
    "    \"training_num_steps\" : 25000 # Use at least 20k training steps for 40 handles\n",
    "}"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "92277556-41be-4d71-bf91-401f8899d15e",
   "metadata": {},
   "source": [
    "## Sample Cubature Points & Densify Shape Volume"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "76c7c886-bb4f-42ce-925e-c3e29b3dc826",
   "metadata": {},
   "source": [
    "The Simplicits API in Kaolin represents objects as cubature points.\n",
    "\n",
    "Since both meshes and Gaussians are described by points on the surface (or close to it, in the case of volumetric radiance fields), we'll fill the volume of our shapes with additional sampled points.\n",
    "\n",
    "ℹ️ Fortunately, **Kaolin** includes a [densifier module](https://kaolin.readthedocs.io/en/latest/modules/kaolin.ops.gaussian.html#kaolin.ops.gaussian.sample_points_in_volume) that populates the Gaussian shape with additional points.\n"
   ]
  },
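  {
   "cell_type": "markdown",
   "id": "b3f1c2a4-5d6e-4f70-8a91-2b3c4d5e6f70",
   "metadata": {},
   "source": [
    "To build intuition for what densification does, here is a minimal, self-contained sketch of the idea (this is *not* Kaolin's actual implementation, which also accounts for per-Gaussian rotation, anisotropic scale and opacity): sample uniform candidates in a bounding box and keep those within a fixed Mahalanobis-style radius of any Gaussian center.\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "rng = np.random.default_rng(0)\n",
    "\n",
    "# Toy stand-in for a Gaussian object: isotropic Gaussians along a line\n",
    "means = np.stack([np.array([t, 0.0, 0.0]) for t in np.linspace(-1.0, 1.0, 5)])\n",
    "sigmas = np.full(len(means), 0.25)\n",
    "\n",
    "# Uniform candidate points in a padded bounding box of the means\n",
    "lo, hi = means.min(0) - 0.75, means.max(0) + 0.75\n",
    "candidates = rng.uniform(lo, hi, size=(5000, 3))\n",
    "\n",
    "# Keep candidates within 2 sigma of at least one Gaussian center\n",
    "d2 = ((candidates[:, None, :] - means[None]) ** 2).sum(-1)\n",
    "inside = (d2 < (2.0 * sigmas[None]) ** 2).any(axis=1)\n",
    "dense_pts = candidates[inside]\n",
    "```"
   ]
  },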
  {
   "cell_type": "markdown",
   "id": "a2f85350-f9d3-4b34-9e3f-2dfd4cead694",
   "metadata": {},
   "source": [
    "#### Sample points from Mesh"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "6b838c23-2872-4d21-8353-f365ebc399b8",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Let's sample points on the mesh surface (volume sampling would be better, but surface points work well enough for rigid shapes)\n",
    "mesh_resampled_pts = kaolin.ops.mesh.sample_points(\n",
    "            prim.vertices.unsqueeze(0), prim.triangles, prim.vertices.shape[0] * 3)[0].squeeze(0)\n",
    "log_tensor(prim.vertices, 'mesh original pts', print_stats=True)\n",
    "log_tensor(mesh_resampled_pts, 'mesh resampled pts', print_stats=True)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "a0ea07ba-65ee-48cd-ba34-4115c80777db",
   "metadata": {},
   "source": [
    "#### Sample points from 3D Gaussian field"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "3f84eea0-54f8-4862-9222-5ad9c907433b",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Let's sample points inside the doll volume to make simulation more faithful\n",
    "doll_resampled_pts = kaolin.ops.gaussian.sample_points_in_volume(\n",
    "    xyz=doll_gaussians.get_positions().detach(), \n",
    "    scale=doll_gaussians.get_scale().detach(),\n",
    "    rotation=doll_gaussians.get_rotation().detach(),\n",
    "    opacity=doll_gaussians.get_density().detach(),\n",
    "    clip_samples_to_input_bbox=False\n",
    ")\n",
    "log_tensor(doll_gaussians.get_positions(), 'doll orig pts', print_stats=True)\n",
    "log_tensor(doll_resampled_pts, 'doll resampled pts', print_stats=True)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "502b58c9-bd62-4d74-aaa0-966ea2f6d0ba",
   "metadata": {},
   "outputs": [],
   "source": [
    "# For the table, we will just use more opaque Gaussians, as sample_points_in_volume is defined for objects and not full environment fields\n",
    "table_resampled_pts = table_gaussians.get_positions()[(table_gaussians.get_density() > 0.3).squeeze(), :]\n",
    "log_tensor(table_gaussians.get_positions(), 'table and scene pts', print_stats=True)\n",
    "log_tensor(table_resampled_pts, 'table and scene resampled pts', print_stats=True)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d1d7c721-1d60-4b41-8005-6f65458c0b65",
   "metadata": {},
   "source": [
    "## Train Simplicits Skinning Weights / Initialize Rigid Objects\n",
    "\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "803028ca-ef7a-4ed7-bbec-1eb4f255dce4",
   "metadata": {},
   "source": [
    "Let's start by creating some rigid SimplicitsObjects. Creating rigid objects is instantaneous."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "59583cb8-2793-4c42-a043-f5334e7ee7d6",
   "metadata": {},
   "outputs": [],
   "source": [
    "common_kwargs = {  # Some common starting parameters for materials (tune to preference)\n",
    "    \"yms\": 1e6,    # A good starting point for a rubbery object; in a reduced sim this is affected by numerical stiffness\n",
    "    \"prs\": 0.45,   # Poisson's ratio, which measures compressibility\n",
    "    \"rhos\": 1000,  # Density of water; in a reduced sim this is approximate due to numerical stiffness\n",
    "    \"appx_vol\": 1  # Also approximate; when both volume and timestep are too small, fp32 sims can hit precision issues\n",
    "}\n",
    "other_kwargs = {}\n",
    "\n",
    "# Let's initialize rigid objects\n",
    "mesh_obj = kaolin.physics.simplicits.SimplicitsObject.create_rigid(\n",
    "    mesh_resampled_pts, **mesh_obj_args, **other_kwargs)\n",
    "logger.info('Created mesh rigid object')\n",
    "\n",
    "# We might set a dynamically moving table to a higher stiffness, but it's a static object in this scene\n",
    "table_obj = kaolin.physics.simplicits.SimplicitsObject.create_rigid(\n",
    "    table_resampled_pts, **common_kwargs, **other_kwargs)\n",
    "logger.info('Created table rigid object')"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "a183816e-49ac-41c7-b461-fb1e60c3c7b4",
   "metadata": {},
   "source": [
    "Next we create an elastic SimplicitsObject and train its skinning weight functions using the volume samples from above. The simulator will then use these reduced degrees of freedom to drive the simulation."
   ]
  },
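  {
   "cell_type": "markdown",
   "id": "d4e5f6a7-8b9c-4d0e-9f1a-2b3c4d5e6f71",
   "metadata": {},
   "source": [
    "The reduced degrees of freedom behave like linear blend skinning: each point is deformed by a weighted sum of per-handle affine transforms, with the trained skinning weights as the blend coefficients. Below is a minimal, self-contained numpy sketch of that blending rule; it is a simplification of what the simulator (and the `transform_gaussians_lbs` utility used later) computes.\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "def lbs_points(points, weights, transforms):\n",
    "    # points: (N, 3), weights: (N, H), transforms: (H, 4, 4)\n",
    "    # Deformed point n = sum_h weights[n, h] * (transforms[h] @ [x_n; 1])[:3]\n",
    "    homo = np.concatenate([points, np.ones((len(points), 1))], axis=1)\n",
    "    per_handle = np.einsum('hij,nj->nhi', transforms, homo)[..., :3]\n",
    "    return (weights[..., None] * per_handle).sum(axis=1)\n",
    "\n",
    "# Two handles: identity and a +1 translation in z, blended 50/50,\n",
    "# should move every point up by 0.5\n",
    "pts = np.zeros((4, 3))\n",
    "w = np.full((4, 2), 0.5)\n",
    "T = np.stack([np.eye(4), np.eye(4)])\n",
    "T[1, 2, 3] = 1.0\n",
    "deformed = lbs_points(pts, w, T)\n",
    "```"
   ]
  },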
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "6924707a-7723-4784-b6a8-2ed32e5152ed",
   "metadata": {},
   "outputs": [],
   "source": [
    "# One-liner to set up and train a Simplicits object.\n",
    "# However, in the interest of time, you may run the block below to load a pretrained Simplicits object instead.\n",
    "\n",
    "# sim_obj = kaolin.physics.simplicits.SimplicitsObject.create_trained(\n",
    "#     points,  # point samples\n",
    "#     *args, **kwargs) "
   ]
  },
  {
   "cell_type": "markdown",
   "id": "46b121ff-e9b7-455d-8940-d4273bfce16e",
   "metadata": {},
   "source": [
    "**Note:** Since training of elastic objects takes a bit of time, we cache the result and reuse it next time we run the notebook:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "740bb0ef-616b-449c-bf9d-623d3478c577",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Whether to save the reduced degrees of freedom used by the simulator and load them from cache automatically\n",
    "ENABLE_SIMPLICITS_CACHING = True # set to False to always retrain\n",
    "\n",
    "cache_dir = os.path.join(PHYS_NOTEBOOKS_DIR, 'cache')\n",
    "os.makedirs(cache_dir, exist_ok=True)\n",
    "logger.info(f'Caching trained simplicits objects in {cache_dir}')\n",
    "\n",
    "def train_or_load_simplicits_object(points, fname, *args, **kwargs):\n",
    "    if not ENABLE_SIMPLICITS_CACHING or not os.path.exists(fname):\n",
    "        logger.info('Training simplicits object. This will take 2-3min... ')\n",
    "        start = time.time()\n",
    "\n",
    "        # One-liner to set up Simplicits object\n",
    "        sim_obj = kaolin.physics.simplicits.SimplicitsObject.create_trained(\n",
    "            points,  # point samples\n",
    "            *args, **kwargs) \n",
    "        \n",
    "        end = time.time()\n",
    "        logger.info(f\"Finished training in {end - start:.1f} seconds\")\n",
    "\n",
    "        # We'll cache the result so we can quickly rerun the notebook.\n",
    "        torch.save(sim_obj, fname)\n",
    "        logger.info(f\"Cached training result in {fname}\")\n",
    "    else:\n",
    "        logger.info(f'Loading cached simplicits object from: {fname}')\n",
    "        sim_obj = torch.load(fname, weights_only=False)\n",
    "    return sim_obj\n",
    "\n",
    "# Let's train the doll as a deformable Simplicits object\n",
    "# For a high handle count like 40, train for at least 20k steps\n",
    "doll_obj = train_or_load_simplicits_object(\n",
    "    doll_resampled_pts,\n",
    "    os.path.join(cache_dir, 'simplicits_3dgrut_doll_20k.pt'),\n",
    "    **doll_obj_args, **other_kwargs)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "774119ac-47a2-45dc-9935-3bb1d4ac8c34",
   "metadata": {},
   "source": [
    "#### Visualize Sampled Cubature Points"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d15ec14a-8fe9-4543-bb0e-ad6696c5bd86",
   "metadata": {},
   "source": [
    "Check that we sampled enough points on the objects. <br>\n",
    "Note how the purple Gaussian object is solid due to densification, while the red mesh object is hollow.\n",
    "\n",
    "*Homework: try visualizing the trained simplicits weights of the doll as colored points (one weight at a time)!*"
   ]
  },
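  {
   "cell_type": "markdown",
   "id": "e5f6a7b8-9c0d-4e1f-8a2b-3c4d5e6f7a82",
   "metadata": {},
   "source": [
    "For the homework, one possible approach is to map a single weight channel to per-point colors; `k3d.points` accepts packed `0xRRGGBB` integers through its `colors=` argument. A minimal sketch (the blue-to-red ramp and the channel index are our own choices, not part of the tutorial code):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "def weight_to_colors(w):\n",
    "    # Map a weight channel in [0, 1] to packed 0xRRGGBB ints (blue -> red ramp)\n",
    "    w = np.clip(np.asarray(w, dtype=np.float64), 0.0, 1.0)\n",
    "    r = (255 * w).astype(np.uint32)\n",
    "    b = (255 * (1.0 - w)).astype(np.uint32)\n",
    "    return (r << 16) | b\n",
    "\n",
    "colors = weight_to_colors(np.linspace(0.0, 1.0, 5))\n",
    "# e.g. with one channel h of the trained weights:\n",
    "#   w_h = doll_obj.skinning_weight_function(doll_obj.pts)[:, h].detach().cpu().numpy()\n",
    "#   plot += k3d.points(doll_obj.pts.detach().cpu().numpy(), colors=weight_to_colors(w_h), point_size=0.01)\n",
    "```"
   ]
  },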
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "a7b5c6d0-357d-4867-bb5d-1f69100a439a",
   "metadata": {},
   "outputs": [],
   "source": [
    "import k3d\n",
    "plot = k3d.plot()\n",
    "plot += k3d.points(mesh_obj.pts.detach().cpu().numpy(), point_size=0.01, color=0xFF0000)\n",
    "plot += k3d.points(doll_obj.pts.detach().cpu().numpy(), point_size=0.01, color=0x7700FF)\n",
    "plot += k3d.points(table_obj.pts.detach().cpu().numpy(), point_size=0.01, color=0xCCCCCC)\n",
    "plot.display()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "87d79cda-9588-4189-834c-cf6c340f7a63",
   "metadata": {},
   "source": [
    "## Setup Physics Scene\n",
    "\n",
    "Here, we will reset the initial conditions for the simulation and set up our forces using the newest `kaolin.physics` API."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "7d664d9a-7d45-4d2e-8795-0f3df3cd5c13",
   "metadata": {},
   "outputs": [],
   "source": [
    "# The simulation material properties can be different from the ones used during training!\n",
    "doll_obj.yms = 5e4*torch.ones_like(doll_obj.prs)\n",
    "doll_obj.rhos = 500*torch.ones_like(doll_obj.prs)\n",
    "mesh_obj.yms = 1e5*torch.ones_like(mesh_obj.prs)\n",
    "mesh_obj.rhos = 200*torch.ones_like(mesh_obj.prs)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "4b132e7c-9038-448e-851c-aee403ac54c7",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Create a default, empty scene\n",
    "scene = kaolin.physics.simplicits.SimplicitsScene()\n",
    "# Convergence is not guaranteed with so few Newton iterations, but this runs very fast\n",
    "scene.max_newton_steps = 4\n",
    "# Don't set the timestep too small when volumes are also small (it leads to fp32 precision issues)\n",
    "scene.timestep = 0.02\n",
    "# This helps the conditioning of the Newton Hessian, but approximating the Hessian is worse for convergence\n",
    "scene.newton_hessian_regularizer = 1e-5\n",
    "# Use a direct dense solver in small scenes\n",
    "scene.direct_solve = True\n",
    "\n",
    "# Add simulatable objects to the scene\n",
    "mesh_obj_idx = scene.add_object(mesh_obj, num_qp=1000)\n",
    "doll_obj_idx = scene.add_object(doll_obj, num_qp=2000) \n",
    "table_obj_idx = scene.add_object(table_obj, num_qp=15000, is_kinematic=True)\n",
    "\n",
    "# Set gravity of the scene\n",
    "scene.set_scene_gravity(acc_gravity=torch.tensor([0, 0, 9.8]))\n",
    "\n",
    "# Setup collisions\n",
    "collision_particle_radius = 0.05 # Currently a scene-wide radius at which collision energy starts to aggregate\n",
    "scene.enable_collisions(\n",
    "    collision_particle_radius=collision_particle_radius, \n",
    "    detection_ratio=1.25,                              # Detect collision within this multiple of the radius\n",
    "    impenetrable_barrier_ratio=0.1,                    # Collision energy is infinite at this multiple of radius\n",
    "    collision_penalty=500,                             # Coefficient for strength of collision response\n",
    "    max_contact_pairs=20000,                           # Max number of contacting pairs allowed in scene\n",
    "    friction=0.1                                       # Friction parameter for scene\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "3d4ce17c-c8a9-4615-95a6-09de210cfe6a",
   "metadata": {},
   "source": [
    "## Simulate and Visualize\n",
    "\n",
    "Now we are ready to simulate, but we still need to put in a bit of work to deform the Gaussians according to the simulation."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "1318ea14-4be3-4e99-b29f-bdfa65071207",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Save for resetting the simulation\n",
    "mesh_orig_pos = prim.vertices.clone()\n",
    "# Look up learned skinning weights using original doll positions\n",
    "doll_skinning_weights = doll_obj.skinning_weight_function(doll_gaussians.get_positions())\n",
    "\n",
    "def engine_refresh():\n",
    "    \"\"\" Updates the 3dgrut engine with changes to objects \"\"\"\n",
    "    # Rebuild BVH of Gaussians\n",
    "    engine.rebuild_bvh(engine.scene_mog)\n",
    "    # Rebuild BVH of meshes\n",
    "    engine.primitives.rebuild_bvh_if_needed(True, True)\n",
    "\n",
    "def reset_simulation():\n",
    "    # Reset simulator state back to initial conditions\n",
    "    scene.reset_scene()\n",
    "    # Set object to original coordinates\n",
    "    engine.primitives.objects[mesh_name].transform.reset()\n",
    "    scene.set_object_initial_transform(\n",
    "        mesh_obj_idx, \n",
    "        init_transform=engine.primitives.objects[mesh_name].transform.model_matrix()\n",
    "    )\n",
    "    # Reset object coords in engine to frame 0\n",
    "    deform_rendered_scene()\n",
    "    # Sync geometry changes to 3dgrut engine\n",
    "    engine_refresh()\n",
    "\n",
    "# Update the global gaussians object given simulator transforms\n",
    "def deform_rendered_scene():\n",
    "    global engine  # just so we are clear what's mutated\n",
    "\n",
    "    with torch.no_grad():\n",
    "        # Update mesh positions\n",
    "        mesh_deformed_pts = scene.get_object_deformed_pts(mesh_obj_idx, mesh_orig_pos).squeeze()\n",
    "        engine.primitives.objects[mesh_name].vertices = mesh_deformed_pts\n",
    "    \n",
    "        # Update doll positions\n",
    "        doll_transforms = scene.get_object_transforms(doll_obj_idx)\n",
    "        # To support simulations with Gaussians, we need to apply affine\n",
    "        # transformations to the Gaussian positions, rotations and scales.\n",
    "        # The skinning weights were obtained by training the doll object.\n",
    "        # The simulation produces one 4x4 transform per skinning weight.\n",
    "        # The utility below applies these weighted transforms to the Gaussians via Linear Blend Skinning.\n",
    "        new_pos, new_rot, new_scale = transform_gaussians_lbs(\n",
    "            doll_gaussians.positions, doll_gaussians.rotation, doll_gaussians.scale,\n",
    "            doll_skinning_weights, doll_transforms)\n",
    "        engine.scene_mog.positions[doll_mask] = new_pos\n",
    "        engine.scene_mog.rotation[doll_mask] = new_rot\n",
    "        engine.scene_mog.scale[doll_mask] = new_scale\n",
    "\n",
    "        # Debug: is table moving? It should not!\n",
    "        # engine.scene_mog.positions[table_mask] = scene.get_object_deformed_pts(table_obj_idx, table_gaussians.get_positions()).squeeze()\n",
    "\n",
    "    # Sync geometry changes to 3dgrut engine\n",
    "    engine_refresh()"
   ]
  },
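  {
   "cell_type": "markdown",
   "id": "7e4b2d9a-1c6f-4b3e-8a5d-2f0c9e7b3d1a",
   "metadata": {},
   "source": [
    "As a mental model for what `transform_gaussians_lbs` does to positions (handling rotations and scales is more involved), here is a minimal Linear Blend Skinning sketch with NumPy. The function name and shapes below are illustrative, not the playground's API:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "def lbs_positions(points, weights, transforms):\n",
    "    \"\"\"Blend per-handle 4x4 transforms by skinning weights and apply to points.\n",
    "\n",
    "    points:     (N, 3) rest positions\n",
    "    weights:    (N, H) skinning weights, each row sums to 1\n",
    "    transforms: (H, 4, 4) one affine matrix per handle\n",
    "    \"\"\"\n",
    "    # Per-point blended matrix: (N, 4, 4)\n",
    "    blended = np.einsum('nh,hij->nij', weights, transforms)\n",
    "    # Homogeneous coordinates: (N, 4)\n",
    "    homo = np.concatenate([points, np.ones((len(points), 1))], axis=1)\n",
    "    return np.einsum('nij,nj->ni', blended, homo)[:, :3]\n",
    "\n",
    "# Two handles: identity, and a +1 translation in x.\n",
    "# A point weighted 50/50 between them moves by 0.5 in x.\n",
    "T = np.stack([np.eye(4), np.eye(4)])\n",
    "T[1, 0, 3] = 1.0\n",
    "pts = np.zeros((1, 3))\n",
    "w = np.array([[0.5, 0.5]])\n",
    "print(lbs_positions(pts, w, T))  # [[0.5 0.  0. ]]\n",
    "```"
   ]
  },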
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "9a607e20-0ec2-42fb-8136-09813e0e26e8",
   "metadata": {},
   "outputs": [],
   "source": [
    "scene.reset_scene()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "466ac6b3-a1d7-4194-b542-da9b71cbb0a4",
   "metadata": {},
   "source": [
    "#### Interactive Simulation Interface"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "3fbe6d6f-b333-47c0-9987-7eb9bff1cd0a",
   "metadata": {},
   "source": [
    "Add a little GUI for the physics:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "af8b4313-88c6-4f93-b3fb-5c08a3439f79",
   "metadata": {},
   "outputs": [],
   "source": [
    "num_sim_steps_slider = widgets.IntSlider(\n",
    "    value=100,\n",
    "    min=10,\n",
    "    max=300,\n",
    "    step=1,\n",
    "    orientation='horizontal',\n",
    "    description='Steps',\n",
    "    continuous_update=True,\n",
    "    readout=True,\n",
    ")\n",
    "\n",
    "def run_sim_step():\n",
    "    scene.run_sim_step()\n",
    "    print(\".\", end=\"\")\n",
    "    with torch.no_grad():\n",
    "        deform_rendered_scene()\n",
    "        visualizer.render_update()\n",
    "\n",
    "def run_simulation(b=None):\n",
    "    if scene.current_sim_step == 0:\n",
    "\n",
    "        # Uncomment the lines below to try a few different placements for the mesh and doll\n",
    "\n",
    "        # Doll placements\n",
    "        # scene.set_object_initial_transform(doll_obj_idx, init_transform=torch.tensor([[1,0,0,-0.3],[0,1,0,-0.2],[0,0,1,0.5]], dtype=torch.float32, device='cuda')) # doll falls on blue speakers\n",
    "        # scene.set_object_initial_transform(doll_obj_idx, init_transform=torch.tensor([[1,0,0,0.0],[0,1,0,0.8],[0,0,1,0.5]], dtype=torch.float32, device='cuda'))   # doll falls in bag\n",
    "        \n",
    "        # Mesh placements\n",
    "        # scene.set_object_initial_transform(mesh_obj_idx, init_transform=torch.tensor([[1,0,0,-1.8],[0,1,0,0.6],[0,0,1,1]], dtype=torch.float32, device='cuda')) # mesh on drill\n",
    "        # scene.set_object_initial_transform(mesh_obj_idx, init_transform=torch.tensor([[1,0,0,-1.5],[0,1,0,1.5],[0,0,1,2]], dtype=torch.float32, device='cuda')) # mesh falls on boots\n",
    "        # scene.set_object_initial_transform(mesh_obj_idx, init_transform=torch.tensor([[1,0,0,0.0],[0,1,0,-0.2],[0,0,1,1]], dtype=torch.float32, device='cuda')) # mesh falls on pliers\n",
    "        # Use sliders to control mesh position\n",
    "        scene.set_object_initial_transform(\n",
    "            mesh_obj_idx, \n",
    "            init_transform=engine.primitives.objects[mesh_name].transform.model_matrix()\n",
    "        )\n",
    "        # The scene already applies the transformation for us, so we keep the transform at identity.\n",
    "        # This stops the engine from applying the transform twice.\n",
    "        engine.primitives.objects[mesh_name].transform.reset()\n",
    "        reset_transform_widgets_to_engine_values()\n",
    "    num_simulation_steps = num_sim_steps_slider.value\n",
    "    for s in range(num_simulation_steps):\n",
    "        with visualizer.out:\n",
    "            run_sim_step()\n",
    "\n",
    "def reset_and_rerender(b=None):\n",
    "    reset_simulation()\n",
    "    visualizer.render_update()\n",
    "\n",
    "# Create new tab for simulation\n",
    "run_button = widgets.Button(description='Run Simulation')\n",
    "run_button.on_click(run_simulation)\n",
    "reset_button = widgets.Button(description='Reset Simulation')\n",
    "reset_button.on_click(reset_and_rerender)\n",
    "sim_buttons = widgets.HBox([run_button, reset_button])\n",
    "simulation_controls = widgets.VBox([num_sim_steps_slider, sim_buttons])\n",
    "\n",
    "# Recreate GUI component with an additional tab for simulation\n",
    "tab_simulation = widgets.Box([simulation_controls])\n",
    "gui_tab.children = [tab_renderer, tab_objects, tab_materials, tab_simulation]\n",
    "gui_tab.set_title(0, 'Render Settings')\n",
    "gui_tab.set_title(1, 'Objects')\n",
    "gui_tab.set_title(2, 'PBR Materials')\n",
    "gui_tab.set_title(3, 'Simulation')"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "7d604860-9fd4-46d0-ad69-011ca451259d",
   "metadata": {},
   "source": [
    "Show the updated visualizer -- let's try running the simulation!"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "418c88c9-a158-417e-9c08-552d82cb01b0",
   "metadata": {},
   "outputs": [],
   "source": [
    "reset_and_rerender() # Always start with initial conditions\n",
    "visualizer_box = widgets.VBox([widgets.HBox([visualizer.canvas, gui_tab]), visualizer.out])\n",
    "display(visualizer_box)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "cac42adf-d1b5-4fac-af09-5f324f07b89b",
   "metadata": {},
   "source": [
    "------"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "8eb02fd7-854b-4426-a22f-91af99e79b31",
   "metadata": {},
   "source": [
    "#### Debug: Simulated Point Positions"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d4dcaec5-7bfb-4ba8-922f-798a2f1143b3",
   "metadata": {},
   "source": [
    "The cell below visualizes the simulation from the cubature points' point of view. This is useful, e.g., for debugging collisions.\n",
    "\n",
    "Feel free to uncomment the code and explore the behavior of the simulation!"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "2f17582e-a5c5-418c-8a35-fc46027daf14",
   "metadata": {},
   "outputs": [],
   "source": [
    "# import k3d\n",
    "# import warp as wp\n",
    "# plot = k3d.plot()\n",
    "\n",
    "# def _get_sim_pts():\n",
    "#     with torch.no_grad():\n",
    "#         sim_rest_pts = wp.to_torch(scene.sim_pts)\n",
    "#         log_tensor(sim_rest_pts, 'sim_rest_pts', print_stats=True)\n",
    "\n",
    "#     dx = wp.array((scene.sim_B@scene.sim_z), dtype=wp.vec3)\n",
    "#     dx_torch = wp.to_torch(dx)\n",
    "#     log_tensor(dx_torch, 'dx_torch', print_stats=True)\n",
    "\n",
    "#     return (sim_rest_pts + dx_torch).detach()\n",
    "\n",
    "# sim_pts = _get_sim_pts()\n",
    "# def _obj_idx(oid):\n",
    "#     return wp.to_torch(scene.object_to_qp_map[oid]).detach()\n",
    "    \n",
    "# plot += k3d.points(sim_pts[_obj_idx(mesh_obj_idx)].detach().cpu().numpy(), point_size=collision_particle_radius, color=0xFF0000)\n",
    "# plot += k3d.points(sim_pts[_obj_idx(doll_obj_idx)].detach().cpu().numpy(), point_size=collision_particle_radius, color=0x7700FF)\n",
    "# plot += k3d.points(sim_pts[_obj_idx(table_obj_idx)].detach().cpu().numpy(), point_size=collision_particle_radius, color=0xCCCCCC)\n",
    "# plot.display()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "fb32aa53-f8fe-43c3-8c65-221730e2bc9e",
   "metadata": {},
   "source": [
    "# Export High Quality Video"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "54f150f3-e395-466a-97b7-8b832ff0a895",
   "metadata": {},
   "source": [
    "ℹ️ The last section shows how to use **Kaolin** cameras to create a trajectory and export a high quality video for offline viewing."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d450eedd-7993-45b6-978e-b6539f0122ce",
   "metadata": {},
   "source": [
    "First, add buttons to save cameras from the visualizer.<br>Every time this button is pressed, the current visualizer camera is added to the list."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "752670b9-64e9-4a6e-91ed-7bcdc605459d",
   "metadata": {},
   "outputs": [],
   "source": [
    "import copy\n",
    "cameras_to_record = []\n",
    "\n",
    "# Add a button to the gui that populates this list\n",
    "def add_cam(b):\n",
    "    cam = copy.deepcopy(visualizer.camera)\n",
    "    cameras_to_record.append(cam.cpu())\n",
    "def reset_cams(b):\n",
    "    cameras_to_record.clear()  # clear in place so the global list is emptied\n",
    "\n",
    "\n",
    "add_cam_button = widgets.Button(description='📷 Add Camera')\n",
    "add_cam_button.on_click(add_cam)\n",
    "reset_cams_button = widgets.Button(description='Reset Trajectory')\n",
    "reset_cams_button.on_click(reset_cams)\n",
    "export_vid_controls = widgets.HBox([add_cam_button, reset_cams_button])\n",
    "tab_export_vid = widgets.Box([export_vid_controls])\n",
    "\n",
    "# Recreate GUI component with an additional tab for exporting a video\n",
    "gui_tab.children = [tab_renderer, tab_objects, tab_materials, tab_simulation, tab_export_vid]\n",
    "gui_tab.set_title(0, 'Render Settings')\n",
    "gui_tab.set_title(1, 'Objects')\n",
    "gui_tab.set_title(2, 'PBR Materials')\n",
    "gui_tab.set_title(3, 'Simulation')\n",
    "gui_tab.set_title(4, 'Export Video')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "6baaca34-3c3b-4f07-8819-560e9c2ab511",
   "metadata": {},
   "outputs": [],
   "source": [
    "reset_and_rerender() # Always start with initial conditions\n",
    "visualizer_box = widgets.VBox([widgets.HBox([visualizer.canvas, gui_tab]), visualizer.out])\n",
    "display(visualizer_box)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "cd7d9d34-450d-42bf-927a-e7a457d9eb75",
   "metadata": {},
   "source": [
    "Then, use these keyframe cameras to generate a full path.<br>\n",
    "Once you're happy with the camera trajectory set above, run the section below to export the video.\n",
    "\n",
    "To export the video, we'll traverse the interpolated path: we run the simulation and render frames for the video, in alternating steps."
   ]
  },
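  {
   "cell_type": "markdown",
   "id": "9c1d4f7b-3e2a-4c8d-b6f0-5a8e1d3c7b2e",
   "metadata": {},
   "source": [
    "To see the core idea behind such a path generator, here is a hedged sketch that interpolates only the camera eye positions between keyframes (real implementations, including the utility used below, also interpolate orientation and may use higher-order polynomials for smoothness). The function name and shapes are illustrative:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "def interpolate_positions(keyframes, frames_between):\n",
    "    \"\"\"Linearly interpolate between consecutive keyframe positions.\n",
    "\n",
    "    keyframes: (K, 3) eye positions; returns (K-1)*frames_between + 1\n",
    "    samples, including both endpoints.\n",
    "    \"\"\"\n",
    "    keyframes = np.asarray(keyframes, dtype=float)\n",
    "    path = []\n",
    "    for a, b in zip(keyframes[:-1], keyframes[1:]):\n",
    "        # Sample each segment, excluding the end (it starts the next segment)\n",
    "        for t in np.linspace(0.0, 1.0, frames_between, endpoint=False):\n",
    "            path.append((1 - t) * a + t * b)\n",
    "    path.append(keyframes[-1])\n",
    "    return np.stack(path)\n",
    "\n",
    "eyes = [[0, 0, 5], [2, 0, 5], [2, 2, 5]]\n",
    "path = interpolate_positions(eyes, frames_between=4)\n",
    "print(len(path))  # 2 segments * 4 samples + final endpoint = 9\n",
    "```"
   ]
  },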
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "877f7f8a-9288-4ccd-95ae-0132573cb9dc",
   "metadata": {},
   "outputs": [],
   "source": [
    "import math\n",
    "import cv2\n",
    "from tqdm import tqdm\n",
    "from threedgrut_playground.utils.kaolin_future.interpolated_cameras import camera_path_generator\n",
    "\n",
    "@torch.no_grad()\n",
    "def run_offline_sim_step():\n",
    "    scene.run_sim_step()\n",
    "    deform_rendered_scene()\n",
    "\n",
    "def render_smooth_trajectory(trajectory, output_path, frames_between_cameras, video_fps=30):\n",
    "    if len(trajectory) < 2:\n",
    "        raise ValueError('Rendering a path trajectory requires at least 2 cameras.')\n",
    "\n",
    "    # Let kaolin generate an interpolated trajectory from the key cameras\n",
    "    interpolated_path = camera_path_generator(\n",
    "        trajectory=trajectory,\n",
    "        frames_between_cameras=frames_between_cameras,\n",
    "        interpolation='polynomial'\n",
    "    )\n",
    "\n",
    "    out_video = None\n",
    "    reset_simulation()\n",
    "\n",
    "    # Advance on trajectory and run simulation step as we go\n",
    "    for idx, camera in enumerate(tqdm(interpolated_path)):\n",
    "        # Render a frame\n",
    "        rgb = engine.render(camera)['rgb']\n",
    "        \n",
    "        # Advance physics simulation\n",
    "        if idx % 2 == 0:\n",
    "            with torch.no_grad():\n",
    "                scene.run_sim_step()\n",
    "                deform_rendered_scene()\n",
    "\n",
    "        # Save rendered frame to video\n",
    "        if out_video is None:\n",
    "            out_video = cv2.VideoWriter(output_path, cv2.VideoWriter_fourcc(*'mp4v'),\n",
    "                                        video_fps, (rgb.shape[2], rgb.shape[1]), True)\n",
    "        data = rgb[0].clip(0, 1).detach().cpu().numpy()\n",
    "        data = (data * 255).astype(np.uint8)\n",
    "        data = cv2.cvtColor(data, cv2.COLOR_RGB2BGR)  # OpenCV expects BGR channel order\n",
    "        out_video.write(data)\n",
    "    out_video.release()\n",
    "\n",
    "    print(f'Video exported to {output_path}')\n",
    "\n",
    "if len(cameras_to_record) == 0:\n",
    "    print('Add some cameras to export a high definition video!')\n",
    "else:\n",
    "    set_high_quality_mode()\n",
    "    num_simulation_steps = num_sim_steps_slider.value\n",
    "    num_frames_between_keyframes = 2 * (math.ceil(num_simulation_steps / len(cameras_to_record)) + 1)\n",
    "    render_smooth_trajectory(cameras_to_record, output_path='output.mp4', frames_between_cameras=num_frames_between_keyframes, video_fps=30)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c44abe08-44c8-440e-a5a9-a65daeb0e9ad",
   "metadata": {},
   "source": [
    "That's it! Your exported video should be ready!"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "02e32709-df32-482a-800d-c289e38d2180",
   "metadata": {},
   "source": [
    "Final words on cameras: Kaolin cameras are flexible. Throughout the tutorial we've used them to control the visualizer and interpolate keyframes for a video, but there is even more you can do with them!\n",
    "\n",
    "See [the short examples page](https://github.com/NVIDIAGameWorks/kaolin/tree/master/examples/recipes/camera) on Github."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b1c6cf2a-a498-4c0d-aefa-72084ccaddde",
   "metadata": {
    "scrolled": true
   },
   "outputs": [],
   "source": [
    "kaolin.render.camera.Camera?"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.11.10"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
