{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "S4pgCuuygWGH"
   },
   "source": [
    "# Waymo Open Scenario Gen Challenge Tutorial 🧬\n",
    "\n",
    "Follow along the\n",
    "[Scenario Gen Challenge web page](https://waymo.com/open/challenges/2025/scenario-gen)\n",
    "for more details.\n",
    "\n",
    "This tutorial demonstrates:\n",
    "\n",
    "- How to load the motion dataset.\n",
    "\n",
    "- How to generate a scenario with a simple baseline.\n",
    "\n",
    "- How to visualize the results.\n",
    "\n",
    "- How to evaluate the generated scenario locally.\n",
    "\n",
    "- How to package the generated results into the protobuf used for submission."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "Y13_DSHa3bFN"
   },
   "source": [
    "## Package installation 🛠️"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "89un4-pTS5rM"
   },
   "outputs": [],
   "source": [
    "!pip install waymo-open-dataset-tf-2-12-0==1.6.7"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "F3qW_kdSgTM5"
   },
   "outputs": [],
   "source": [
    "# Imports\n",
    "import collections\n",
    "import os\n",
    "import tarfile\n",
    "\n",
    "from matplotlib import rc\n",
    "import matplotlib.pyplot as plt\n",
    "import numpy as np\n",
    "import tensorflow as tf\n",
    "import tqdm\n",
    "\n",
    "from waymo_open_dataset.protos import scenario_pb2\n",
    "from waymo_open_dataset.protos import sim_agents_submission_pb2\n",
    "from waymo_open_dataset.utils import trajectory_utils\n",
    "from waymo_open_dataset.utils.sim_agents import submission_specs\n",
    "from waymo_open_dataset.utils.sim_agents import visualizations\n",
    "from waymo_open_dataset.wdl_limited.sim_agents_metrics import metric_features\n",
    "from waymo_open_dataset.wdl_limited.sim_agents_metrics import metrics\n",
    "\n",
    "# Set matplotlib to jshtml so animations work with colab.\n",
    "rc('animation', html='jshtml')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "f23GbrgV3naf"
   },
   "source": [
    "# Loading the data\n",
    "\n",
    "Visit the [Waymo Open Dataset Website](https://waymo.com/open/) to download the\n",
    "full dataset."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "sk0njhKl3yWX"
   },
   "outputs": [],
   "source": [
    "# Please edit.\n",
    "\n",
    "# Replace this path with your own tfrecords./\n",
    "# This tutorial is based on using data in the Scenario proto format directly,\n",
    "# so choose the correct dataset version.\n",
    "DATASET_FOLDER = '/waymo_open_dataset_'\n",
    "\n",
    "TRAIN_FILES = os.path.join(DATASET_FOLDER, 'training.tfrecord*')\n",
    "VALIDATION_FILES = os.path.join(DATASET_FOLDER, 'validation.tfrecord*')\n",
    "TEST_FILES = os.path.join(DATASET_FOLDER, 'test.tfrecord*')\n",
    "\n",
    "\n"]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "mDdThI324gwG"
   },
   "source": [
    "We create a dataset starting from the validation set, which is smaller than the\n",
    "training set but contains all ground-truth states (which the test set does not). We need\n",
    "the ground truth to demonstrate how to evaluate your submission locally."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "dbXocuUZ4qgH"
   },
   "outputs": [],
   "source": [
    "# Define the dataset from the TFRecords.\n",
    "filenames = tf.io.matching_files(VALIDATION_FILES)\n",
    "dataset = tf.data.TFRecordDataset(filenames)\n",
    "# Since these are raw Scenario protos, we need to parse them in eager mode.\n",
    "dataset_iterator = dataset.as_numpy_iterator()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "LOMPeqU05c2S"
   },
   "source": [
    "Load one example and visualize it."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "bEZoJmVm5b1O"
   },
   "outputs": [],
   "source": [
    "bytes_example = next(dataset_iterator)\n",
    "scenario = scenario_pb2.Scenario.FromString(bytes_example)\n",
    "print(f'Checking type: {type(scenario)}')\n",
    "print(f'Loaded scenario with ID: {scenario.scenario_id}')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "NercfCF9Cowc"
   },
   "outputs": [],
   "source": [
    "# Visualize the reference (ground truth) scenario.\n",
    "\n",
    "\n",
    "def plot_track_trajectory(track: scenario_pb2.Track) -> None:\n",
    "  valids = np.array([state.valid for state in track.states])\n",
    "  if np.any(valids):\n",
    "    x = np.array([state.center_x for state in track.states])\n",
    "    y = np.array([state.center_y for state in track.states])\n",
    "    ax.plot(x[valids], y[valids], linewidth=5)\n",
    "\n",
    "\n",
    "# Plot their tracks.\n",
    "fig, ax = plt.subplots(1, 1, figsize=(10, 10))\n",
    "visualizations.add_map(ax, scenario)\n",
    "\n",
    "for track in scenario.tracks:\n",
    "  if track.id in submission_specs.get_sim_agent_ids(\n",
    "      scenario, challenge_type=submission_specs.ChallengeType.SCENARIO_GEN\n",
    "  ):\n",
    "    plot_track_trajectory(track)\n",
    "\n",
    "plt.show()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "eybmOjww52Me"
   },
   "source": [
    "# Scenario generation stage 🤖\n",
    "\n",
    "Please read the\n",
    "[challenge web page](https://waymo.com/open/challenges/2025/scenario-gen) first,\n",
    "where we explain generation requirements and settings.\n",
    "\n",
    "Many of the requirements specified on the challenge website are encoded into\n",
    "`waymo_open_dataset/utils/sim_agents/submission_specs.py`. For example, we have\n",
    "specifications of:\n",
    "\n",
    "- Generation length and frequency.\n",
    "\n",
    "- Number of parallel generations required.\n",
    "\n",
    "- Agents to generate and agents to evaluate."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "q87Rqy9G6Fke"
   },
   "outputs": [],
   "source": [
    "challenge_type = submission_specs.ChallengeType.SCENARIO_GEN\n",
    "submission_config = submission_specs.get_submission_config(challenge_type)\n",
    "\n",
    "print(f'Generation length, in steps: {submission_config.n_simulation_steps}')\n",
    "print(\n",
    "    'Duration of a step, in seconds:'\n",
    "    f' {submission_config.step_duration_seconds}s (frequency:'\n",
    "    f' {1/submission_config.step_duration_seconds}Hz)'\n",
    ")\n",
    "print(\n",
    "    'Number of parallel generation per Scenario:'\n",
    "    f' {submission_config.n_rollouts}'\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "ObeYHuJbBJuO"
   },
   "source": [
    "### Inputs to scenario generation\n",
    "\n",
    "Here are the inputs that are available to participants:\n",
    "\n",
    "- The road graph (map)\n",
    "- The history of traffic light states (past and current timesteps)\n",
    "- The history of the ADV (past and current timesteps)\n",
    "- The number of agents to generate for each type (TYPE_VEHICLE, TYPE_PEDESTRIAN, or TYPE_CYCLIST)\n",
    "\n",
    "<font color='red'>Note: both the validation and test subsets in the Waymo Open Motion Dataset contain the history of agent tracks, which reveal their initial states as well as their current and past positions over 1 second.  Participants should ignore this information and train their models to generate full agent trajectories without looking at this data.  The Scenario data can only be used to determine the number of agents to generate for each agent type.  Below, we provide utility functions that can:\n",
    "- Extract the number of agents present in the scenario for each type\n",
    "- Strip the ground truth data of privileged information that should not be visible to the model during inference on evaluation and test sets.</font>"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "Cb_a_LembS3W"
   },
   "outputs": [],
   "source": [
    "# Determine the number of agents to generate of each type.\n",
    "\n",
    "TYPE_VEHICLE = 1\n",
    "TYPE_PEDESTRIAN = 2\n",
    "TYPE_CYCLIST = 3\n",
    "\n",
    "def num_agents_per_type(scenario: scenario_pb2.Scenario) -> dict[int, int]:\n",
    "  num_agents_by_type = collections.defaultdict(int)\n",
    "\n",
    "  for track in scenario.tracks:\n",
    "    if track.id in submission_specs.get_sim_agent_ids(scenario, challenge_type):\n",
    "      num_agents_by_type[track.object_type] += 1\n",
    "\n",
    "  return num_agents_by_type\n",
    "\n",
    "\n",
    "total_agents = len(submission_specs.get_sim_agent_ids(scenario, challenge_type))\n",
    "print(f'Total agents to generate: {total_agents}')\n",
    "\n",
    "num_agents_by_type = num_agents_per_type(scenario)\n",
    "print(f'Number of vehicles to generate, including ADV: {num_agents_by_type[TYPE_VEHICLE]}')\n",
    "print(f'Number of pedestrians to generate: {num_agents_by_type[TYPE_PEDESTRIAN]}')\n",
    "print(f'Number of cyclists to generate: {num_agents_by_type[TYPE_CYCLIST]}')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "VZdNgwKetwfQ"
   },
   "outputs": [],
   "source": [
    "def strip_logged_trajectories(\n",
    "    logged_trajectories: trajectory_utils.ObjectTrajectories,\n",
    "    submission_config: submission_specs.SubmissionConfig,\n",
    ") -> trajectory_utils.ObjectTrajectories:\n",
    "  \"\"\"Strips privileged information from `ObjectTrajectories`.\n",
    "\n",
    "  Args:\n",
    "    logged_trajectories: an `ObjectTrajectories` containing full trajectories of\n",
    "      all the objects in a scenario.\n",
    "    submission_config: Config object holding number of past/current/future\n",
    "      timesteps.\n",
    "\n",
    "  Returns:\n",
    "    An `ObjectTrajectories` with all trajectory information removed except for\n",
    "    the ADV's history and the object_types and object_ids of all agents.\n",
    "  \"\"\"\n",
    "  x = np.zeros_like(logged_trajectories.x)\n",
    "  y = np.zeros_like(logged_trajectories.y)\n",
    "  z = np.zeros_like(logged_trajectories.z)\n",
    "  heading = np.zeros_like(logged_trajectories.heading)\n",
    "  length = np.zeros_like(logged_trajectories.length)\n",
    "  width = np.zeros_like(logged_trajectories.width)\n",
    "  height = np.zeros_like(logged_trajectories.height)\n",
    "  valid = np.zeros_like(logged_trajectories.valid, dtype=np.bool)\n",
    "  # Restore ADV history.\n",
    "  adv_idx = 0\n",
    "  current_time_idx = submission_config.current_time_index\n",
    "  hist_end = current_time_idx + 1\n",
    "  x[adv_idx, :hist_end] = logged_trajectories.x[adv_idx, :hist_end]\n",
    "  y[adv_idx, :hist_end] = logged_trajectories.y[adv_idx, :hist_end]\n",
    "  z[adv_idx, :hist_end] = logged_trajectories.z[adv_idx, :hist_end]\n",
    "  heading[adv_idx, :hist_end] = logged_trajectories.heading[adv_idx, :hist_end]\n",
    "  length[adv_idx, :hist_end] = logged_trajectories.length[adv_idx, :hist_end]\n",
    "  width[adv_idx, :hist_end] = logged_trajectories.width[adv_idx, :hist_end]\n",
    "  height[adv_idx, :hist_end] = logged_trajectories.height[adv_idx, :hist_end]\n",
    "  valid[adv_idx, :hist_end] = logged_trajectories.valid[adv_idx, :hist_end]\n",
    "  # Restore validity of all objects at current timestep.\n",
    "  valid[:, current_time_idx] = logged_trajectories.valid[:, current_time_idx]\n",
    "  # Return new object.\n",
    "  return trajectory_utils.ObjectTrajectories(\n",
    "      x=tf.convert_to_tensor(x),\n",
    "      y=tf.convert_to_tensor(y),\n",
    "      z=tf.convert_to_tensor(z),\n",
    "      heading=tf.convert_to_tensor(heading),\n",
    "      length=tf.convert_to_tensor(length),\n",
    "      width=tf.convert_to_tensor(width),\n",
    "      height=tf.convert_to_tensor(height),\n",
    "      valid=tf.convert_to_tensor(valid),\n",
    "      object_id=logged_trajectories.object_id,\n",
    "      object_type=logged_trajectories.object_type,\n",
    "  )"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "9kh1gGFofzVs"
   },
   "source": [
    "### Outputs of scenario generation\n",
    "\n",
    "For scenarios, we borrow an abstraction from the Waymo Open Motion Dataset: we represent objects as boxes, and we are interested in how\n",
    "they *move* around the world.\n",
    "\n",
    "The task is to generate new agents with initial states (x, y, z, heading, length, width, height) and full trajectories (past, current, and future steps).\n",
    "\n",
    "To generate a full scenario, contestants need to generate the fields specified in the\n",
    "`sim_agents_submission_pb2.SimulatedTrajectory` proto, namely:\n",
    "\n",
    "- 3D coordinates\n",
    "of the box centers (x/y/z in the same reference frame as the original Scenario).\n",
    "\n",
    "- Heading of those objects.\n",
    "\n",
    "- Sizes of those objects (length/width/height).\n",
    "\n",
    "- Object type (TYPE_VEHICLE, TYPE_PEDESTRIAN, or TYPE_CYCLIST).\n",
    "\n",
    "To demonstrate the scenario generation process, we implement a random policy which samples a random initial position and velocity for each agent and then simulates a constant velocity trajectory.  Since these agents will not be reactive, this will result in a bad score in the final evaluation (more details below).\n",
    "\n",
    "For more details refer to the\n",
    "[challenge's web page](https://waymo.com/open/challenges/2025/scenario-gen)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "n9AH8dB-C0Zx"
   },
   "outputs": [],
   "source": [
    "def _generate_trajectories(\n",
    "    logged_trajectories: trajectory_utils.ObjectTrajectories,\n",
    "    submission_config: submission_specs.SubmissionConfig,\n",
    "    print_verbose_comments: bool = True,\n",
    ") -> tf.Tensor:\n",
    "  \"\"\"Generates initial states and trajectories for all required agents.\"\"\"\n",
    "  vprint = print if print_verbose_comments else lambda arg: None\n",
    "  # Extract the ADV's velocity (x/y/z components) at the first timestep.\n",
    "  adv_idx = 0\n",
    "  init_adv_states = tf.stack(\n",
    "      [\n",
    "          logged_trajectories.x[adv_idx, :2],\n",
    "          logged_trajectories.y[adv_idx, :2],\n",
    "          logged_trajectories.z[adv_idx, :2],\n",
    "          logged_trajectories.heading[adv_idx, :2],\n",
    "      ],\n",
    "      axis=-1,\n",
    "  )\n",
    "  init_adv_velocity = init_adv_states[-1, :3] - init_adv_states[-2, :3]\n",
    "  # We also make the heading constant, so concatenate 0.0 as angular velocity.\n",
    "  init_adv_velocity = tf.concat([init_adv_velocity, tf.zeros([1])], axis=-1)\n",
    "\n",
    "  # Now we create a simulation. As we discussed, we actually want 32\n",
    "  # parallel simulations, so we make this batched from the very beginning. We\n",
    "  # add some random noise on top of our actions to make sure the behaviors are\n",
    "  # different.\n",
    "  NOISE_SCALE = 0.05\n",
    "  # `max_action` shape: (4,).\n",
    "  max_action = tf.constant([5, 5, 5, 0], dtype=tf.float32)\n",
    "\n",
    "  init_adv_states = tf.stack(\n",
    "      [\n",
    "          logged_trajectories.x[adv_idx, :1],\n",
    "          logged_trajectories.y[adv_idx, :1],\n",
    "          logged_trajectories.z[adv_idx, :1],\n",
    "          logged_trajectories.heading[adv_idx, :1],\n",
    "      ],\n",
    "      axis=-1,\n",
    "  )\n",
    "\n",
    "  # We create `simulated_states` with shape (n_rollouts, n_objects, n_steps, 4).\n",
    "  n_objects, n_steps = logged_trajectories.valid.shape\n",
    "  simulated_states = tf.tile(\n",
    "      init_adv_states[tf.newaxis, :, tf.newaxis, :],\n",
    "      [submission_config.n_rollouts, n_objects, 1, 1],\n",
    "  )\n",
    "  vprint(f'Initial simulated state shape: {simulated_states.shape}')\n",
    "\n",
    "  # Set initial agent locations to random locations around the ADV.\n",
    "  init_state_noise = tf.random.normal(\n",
    "      simulated_states.shape, mean=0.0, stddev=5\n",
    "  )\n",
    "  simulated_states = simulated_states + init_state_noise\n",
    "\n",
    "  # Rollout trajectories using constant velocity.\n",
    "  for _ in range(submission_config.n_simulation_steps - 1):\n",
    "    current_state = simulated_states[:, :, -1, :]\n",
    "    # Random actions, take a normal and normalize by min/max actions\n",
    "    action_noise = tf.random.normal(\n",
    "        current_state.shape, mean=0.0, stddev=NOISE_SCALE\n",
    "    )\n",
    "    actions_with_noise = init_adv_velocity[tf.newaxis, tf.newaxis, :] + (\n",
    "        action_noise * max_action\n",
    "    )\n",
    "    next_state = current_state + actions_with_noise\n",
    "    simulated_states = tf.concat(\n",
    "        [simulated_states, next_state[:, :, None, :]], axis=2\n",
    "    )\n",
    "\n",
    "  vprint(f'Final simulated states shape: {simulated_states.shape}')\n",
    "  return simulated_states\n",
    "\n",
    "\n",
    "def _generate_sizes(\n",
    "    logged_trajectories: trajectory_utils.ObjectTrajectories,\n",
    ") -> tf.Tensor:\n",
    "  \"\"\"Generates agent sizes for all required agents.\"\"\"\n",
    "  # For demonstration purposes, we use a simple policy which sets all agents to\n",
    "  # a fixed size depending on the agent type.\n",
    "  size_vehicle = tf.constant([4.78, 2.07, 1.53])\n",
    "  size_pedestrian = tf.constant([0.92, 0.82, 1.52])\n",
    "  size_cyclist = tf.constant([1.70, 0.82, 1.76])\n",
    "\n",
    "  is_veh = tf.cast(logged_trajectories.object_type == TYPE_VEHICLE, tf.float32)\n",
    "  is_ped = tf.cast(\n",
    "      logged_trajectories.object_type == TYPE_PEDESTRIAN, tf.float32\n",
    "  )\n",
    "  is_cyc = tf.cast(logged_trajectories.object_type == TYPE_CYCLIST, tf.float32)\n",
    "\n",
    "  n_objects, n_steps = logged_trajectories.valid.shape\n",
    "\n",
    "  agent_size_if_veh = (\n",
    "      tf.tile(\n",
    "          size_vehicle[tf.newaxis, tf.newaxis, tf.newaxis, :],\n",
    "          (submission_config.n_rollouts, n_objects, n_steps, 1),\n",
    "      )\n",
    "      * is_veh[tf.newaxis, :, tf.newaxis, tf.newaxis]\n",
    "  )\n",
    "  agent_size_if_ped = (\n",
    "      tf.tile(\n",
    "          size_pedestrian[tf.newaxis, tf.newaxis, tf.newaxis, :],\n",
    "          (submission_config.n_rollouts, n_objects, n_steps, 1),\n",
    "      )\n",
    "      * is_ped[tf.newaxis, :, tf.newaxis, tf.newaxis]\n",
    "  )\n",
    "  agent_size_if_cyc = (\n",
    "      tf.tile(\n",
    "          size_cyclist[tf.newaxis, tf.newaxis, tf.newaxis, :],\n",
    "          (submission_config.n_rollouts, n_objects, n_steps, 1),\n",
    "      )\n",
    "      * is_cyc[tf.newaxis, :, tf.newaxis, tf.newaxis]\n",
    "  )\n",
    "  # Shape (n_rollouts, n_objects, n_steps, 3)\n",
    "  simulated_sizes = agent_size_if_veh + agent_size_if_ped + agent_size_if_cyc\n",
    "  return simulated_sizes\n",
    "\n",
    "\n",
    "def generate_with_random_policy(\n",
    "    scenario: scenario_pb2.Scenario, print_verbose_comments: bool = True\n",
    ") -> tuple[tf.Tensor, trajectory_utils.ObjectTrajectories]:\n",
    "  vprint = print if print_verbose_comments else lambda arg: None\n",
    "  full_logged_trajectories = trajectory_utils.ObjectTrajectories.from_scenario(\n",
    "      scenario\n",
    "  )\n",
    "  # Remove all privileged information for the scenario gen challenge.\n",
    "  logged_trajectories = strip_logged_trajectories(\n",
    "      full_logged_trajectories, submission_config\n",
    "  )\n",
    "  # Select just the objects that we need to simulate.\n",
    "  vprint(\n",
    "      'Original shape of tensors containing trajectory data:'\n",
    "      f' {logged_trajectories.valid.shape} (n_objects, n_steps)'\n",
    "  )\n",
    "  logged_trajectories = logged_trajectories.gather_objects_by_id(\n",
    "      tf.convert_to_tensor(\n",
    "          submission_specs.get_sim_agent_ids(scenario, challenge_type)\n",
    "      )\n",
    "  )\n",
    "  vprint(\n",
    "      'Modified shape of tensors containing trajectory data:'\n",
    "      f' {logged_trajectories.valid.shape} (n_objects, n_steps)'\n",
    "  )\n",
    "\n",
    "  # We can verify that all of these objects are valid at the current step.\n",
    "  current_time_index = submission_config.current_time_index\n",
    "  all_agents_valid = tf.reduce_all(\n",
    "      logged_trajectories.valid[:, current_time_index]\n",
    "  )\n",
    "  vprint(f'Are all agents valid: {all_agents_valid.numpy()}')\n",
    "\n",
    "  simulated_states = _generate_trajectories(\n",
    "      logged_trajectories, submission_config, print_verbose_comments\n",
    "  )\n",
    "  simulated_sizes = _generate_sizes(logged_trajectories)\n",
    "  return logged_trajectories, simulated_states, simulated_sizes\n",
    "\n",
    "\n",
    "logged_trajectories, simulated_states, simulated_sizes = (\n",
    "    generate_with_random_policy(scenario, print_verbose_comments=True)\n",
    ")"
   ]
  },
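  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# A quick sanity check: the generated tensors should match the submission\n",
    "# config, i.e. `n_rollouts` parallel simulations of `n_simulation_steps`\n",
    "# steps each.\n",
    "assert simulated_states.shape[0] == submission_config.n_rollouts\n",
    "assert simulated_states.shape[2] == submission_config.n_simulation_steps\n",
    "print(f'simulated_states shape: {simulated_states.shape}')\n",
    "print(f'simulated_sizes shape: {simulated_sizes.shape}')"
   ]
  },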
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "WQlNA6WCm4OX"
   },
   "source": [
    "### Visualize the simulated trajectories"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "hI_Qbh2cMJfQ"
   },
   "outputs": [],
   "source": [
    "# Select which one of the 32 simulations to visualize.\n",
    "SAMPLE_INDEX = 0\n",
    "\n",
    "n_objects, n_steps = logged_trajectories.valid.shape\n",
    "\n",
    "fig, ax = plt.subplots(1, 1, figsize=(10, 10))\n",
    "visualizations.get_animated_states(\n",
    "    fig,\n",
    "    ax,\n",
    "    scenario,\n",
    "    simulated_states[SAMPLE_INDEX, :, :, 0],\n",
    "    simulated_states[SAMPLE_INDEX, :, :, 1],\n",
    "    simulated_states[SAMPLE_INDEX, :, :, 3],\n",
    "    length=simulated_sizes[SAMPLE_INDEX, :, :, 0],\n",
    "    width=simulated_sizes[SAMPLE_INDEX, :, :, 1],\n",
    "    color_idx=tf.zeros((n_objects, n_steps), dtype=tf.int32),\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "T3KKORMMdEU7"
   },
   "source": [
    "## Submission generation\n",
    "\n",
    "To package the generated scenarios for submission, we are going to save them in the proto format defined inside `sim_agents_submission_pb2`.\n",
    "\n",
    "More specifically:\n",
    "\n",
    "- `SimulatedTrajectory` contains **one** trajectory for a\n",
    "single object, with the fields we need to simulate (x, y, z, heading).\n",
    "\n",
    "- `JointScene` is a set of all the object trajectories from a **single**\n",
    "simulation, describing one of the possible rollouts. - `ScenarioRollouts` is a\n",
    "collection of all the parallel simulations for a single initial Scenario.\n",
    "\n",
    "- `SimAgentsChallengeSubmission` is used to package submissions for multiple\n",
    "Scenarios (e.g. for the whole testing dataset).\n",
    "\n",
    "The simulation we performed above, for example, needs to be packaged inside a\n",
    "`ScenarioRollouts` message. Let's see how it's done.\n",
    "\n",
    "*Note: We also provide helper functions inside* `submission_specs.py` *to\n",
    "validate the submission protos.*"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "kAHXM4jm29E_"
   },
   "outputs": [],
   "source": [
    "def joint_scene_from_states(\n",
    "    states: tf.Tensor,\n",
    "    sizes: tf.Tensor,\n",
    "    object_ids: tf.Tensor,\n",
    ") -> sim_agents_submission_pb2.JointScene:\n",
    "  # States shape: (num_objects, num_steps, 4).\n",
    "  # Objects IDs shape: (num_objects,).\n",
    "  states = states.numpy()\n",
    "  sizes = sizes.numpy()\n",
    "  simulated_trajectories = []\n",
    "  for i_object in range(len(object_ids)):\n",
    "    simulated_trajectories.append(\n",
    "        sim_agents_submission_pb2.SimulatedTrajectory(\n",
    "            center_x=states[i_object, :, 0],\n",
    "            center_y=states[i_object, :, 1],\n",
    "            center_z=states[i_object, :, 2],\n",
    "            heading=states[i_object, :, 3],\n",
    "            object_id=object_ids[i_object],\n",
    "            length=sizes[i_object, :, 0],\n",
    "            width=sizes[i_object, :, 1],\n",
    "            height=sizes[i_object, :, 2],\n",
    "        )\n",
    "    )\n",
    "  return sim_agents_submission_pb2.JointScene(\n",
    "      simulated_trajectories=simulated_trajectories\n",
    "  )\n",
    "\n",
    "\n",
    "# Package the first simulation into a `JointScene`\n",
    "joint_scene = joint_scene_from_states(\n",
    "    simulated_states[0, :, :, :],\n",
    "    simulated_sizes[0, :, :, :],\n",
    "    logged_trajectories.object_id,\n",
    ")\n",
    "# Validate the joint scene. Should raise an exception if it's invalid.\n",
    "submission_specs.validate_joint_scene(joint_scene, scenario, challenge_type)"
   ]
  },
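  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# A quick sanity check on the packaged `JointScene`: one trajectory per\n",
    "# generated agent, each with the full number of simulation steps.\n",
    "print(f'Trajectories in scene: {len(joint_scene.simulated_trajectories)}')\n",
    "print(\n",
    "    'Steps in first trajectory:'\n",
    "    f' {len(joint_scene.simulated_trajectories[0].center_x)}'\n",
    ")"
   ]
  },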
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "AVKZvDKfG8Q9"
   },
   "outputs": [],
   "source": [
    "# Now we can replicate this strategy to export all the parallel simulations.\n",
    "def scenario_rollouts_from_states(\n",
    "    scenario: scenario_pb2.Scenario,\n",
    "    states: tf.Tensor,\n",
    "    sizes: tf.Tensor,\n",
    "    object_ids: tf.Tensor,\n",
    ") -> sim_agents_submission_pb2.ScenarioRollouts:\n",
    "  # States shape: (num_rollouts, num_objects, num_steps, 4).\n",
    "  # Objects IDs shape: (num_objects,).\n",
    "  joint_scenes = []\n",
    "  for i_rollout in range(states.shape[0]):\n",
    "    joint_scenes.append(\n",
    "        joint_scene_from_states(states[i_rollout], sizes[i_rollout], object_ids)\n",
    "    )\n",
    "  return sim_agents_submission_pb2.ScenarioRollouts(\n",
    "      # Note: remember to include the Scenario ID in the proto message.\n",
    "      joint_scenes=joint_scenes,\n",
    "      scenario_id=scenario.scenario_id,\n",
    "  )\n",
    "\n",
    "\n",
    "scenario_rollouts = scenario_rollouts_from_states(\n",
    "    scenario, simulated_states, simulated_sizes, logged_trajectories.object_id\n",
    ")\n",
    "# As before, we can validate the message we just generated.\n",
    "submission_specs.validate_scenario_rollouts(\n",
    "    scenario_rollouts, scenario, challenge_type\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "9EesEPDXdPZh"
   },
   "source": [
    "## Evaluation\n",
    "\n",
    "Once we have created the submission for a single Scenario, we can evaluate the\n",
    "scenarios we have generated.\n",
    "\n",
    "The evaluation of scenario generation tries to capture distributional realism, i.e. how well our simulations capture the distribution of human behavior from the real world. A key difference to the existing Behavior Prediction task, is that we are focusing our comparison on quantities (**features**) that try to capture the behavior of humans.\n",
    "\n",
    "More specifically, for this challenge we will look at the following features:\n",
    "\n",
    "- Kinematic features: speed / accelerations of objects, both linear and angular.\n",
    "\n",
    "- Interactive features: features capturing relationships between objects, like\n",
    "collisions, distances to other objects and time to collision (TTC).\n",
    "\n",
    "- Map-based\n",
    "features: features capturing how objects move with respect to the road itself,\n",
    "e.g. going offroad for a car.\n",
    "\n",
    "While we require all those objects to be generated, we are going to evaluate\n",
    "only a subset of them, namely the `tracks_to_predict` inside the Scenario. This\n",
    "criteria was put in place to ensure less noisy measures, as these objects will\n",
    "have consistently long observations from the real world, which we need to\n",
    "properly evaluate our agents.\n",
    "\n",
    "Note that, while all the other generated agents are not *directly* evaluated, they are still part of the simulation. This means that all the interactive features will be computed considering those generated agents, and the *evaluated* scenario agents need to be reactive to these objects.\n",
    "\n",
    "Now let's compute the features to understand better the evaluation in practice.\n",
    "Everything is included inside `metric_features.py`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "lpqTJ6oNoWoE"
   },
   "outputs": [],
   "source": [
    "# Compute the features for a single JointScene.\n",
    "single_scene_features = metric_features.compute_metric_features(\n",
    "    scenario, joint_scene, challenge_type\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "jopGed89EmGf"
   },
   "outputs": [],
   "source": [
    "# These features will be computed only for the `tracks_to_predict` objects.\n",
    "print(\n",
    "    'Evaluated objects:'\n",
    "    f' {submission_specs.get_evaluation_sim_agent_ids(scenario, challenge_type)}'\n",
    ")\n",
    "# This will also match single_scene_features.object_ids\n",
    "print(f'Evaluated objects in features: {single_scene_features.object_id}')\n",
    "\n",
    "# Features contain a validity flag, which for simulated rollouts must be always\n",
    "# True, because we are requiring the generated agents to be always valid when\n",
    "# replaced.\n",
    "print(f'Are all agents valid: {tf.reduce_all(single_scene_features.valid)}')\n",
    "\n",
    "# ============ FEATURES ============\n",
    "# Average displacement feature. This corresponds to ADE in the BP challenges.\n",
    "# Here it is used just for demonstration (it's not actually included in the\n",
    "# final score).\n",
    "# Shape: (1, n_objects).\n",
    "print(\n",
    "    f'ADE: {tf.reduce_mean(single_scene_features.average_displacement_error)}'\n",
    ")\n",
    "\n",
    "# Kinematic features.\n",
    "print('\\n============ KINEMATIC FEATURES ============')\n",
    "fig, axes = plt.subplots(1, 4, figsize=(16, 4))\n",
    "for i_object in range(len(single_scene_features.object_id)):\n",
    "  _object_id = single_scene_features.object_id[i_object].numpy()\n",
    "  axes[0].plot(\n",
    "      single_scene_features.linear_speed[0, i_object, :], label=str(_object_id)\n",
    "  )\n",
    "  axes[1].plot(\n",
    "      single_scene_features.linear_acceleration[0, i_object, :],\n",
    "      label=str(_object_id),\n",
    "  )\n",
    "  axes[2].plot(\n",
    "      single_scene_features.angular_speed[0, i_object, :], label=str(_object_id)\n",
    "  )\n",
    "  axes[3].plot(\n",
    "      single_scene_features.angular_acceleration[0, i_object, :],\n",
    "      label=str(_object_id),\n",
    "  )\n",
    "\n",
    "\n",
    "TITLES = [\n",
    "    'linear_speed',\n",
    "    'linear_acceleration',\n",
    "    'angular_speed',\n",
    "    'angular_acceleration',\n",
    "]\n",
    "for ax, title in zip(axes, TITLES):\n",
    "  ax.legend()\n",
    "  ax.set_title(title)\n",
    "plt.show()\n",
    "\n",
    "# Interactive features.\n",
    "print('\\n============ INTERACTIVE FEATURES ============')\n",
    "print(f'Colliding objects: {single_scene_features.collision_per_step[0]}')\n",
    "fig, axes = plt.subplots(1, 2, figsize=(8, 4))\n",
    "for i_object in range(len(single_scene_features.object_id)):\n",
    "  _object_id = single_scene_features.object_id[i_object].numpy()\n",
    "  axes[0].plot(\n",
    "      single_scene_features.distance_to_nearest_object[0, i_object, :],\n",
    "      label=str(_object_id),\n",
    "  )\n",
    "  axes[1].plot(\n",
    "      single_scene_features.time_to_collision[0, i_object, :],\n",
    "      label=str(_object_id),\n",
    "  )\n",
    "\n",
    "TITLES = ['distance to nearest object', 'time to collision']\n",
    "for ax, title in zip(axes, TITLES):\n",
    "  ax.legend()\n",
    "  ax.set_title(title)\n",
    "plt.show()\n",
    "\n",
    "# Map-based features.\n",
    "print('\\n============ MAP-BASED FEATURES ============')\n",
    "print(f'Offroad objects: {single_scene_features.offroad_per_step[0]}')\n",
    "fig, axes = plt.subplots(1, 1, figsize=(4, 4))\n",
    "for i_object in range(len(single_scene_features.object_id)):\n",
    "  _object_id = single_scene_features.object_id[i_object].numpy()\n",
    "  axes.plot(\n",
    "      single_scene_features.distance_to_road_edge[0, i_object, :],\n",
    "      label=str(_object_id),\n",
    "  )\n",
    "axes.legend()\n",
    "axes.set_title('distance to road edge')\n",
    "\n",
    "plt.show()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "oYzJ0HevRbI8"
   },
   "source": [
    "These features are computed for each of the submitted `JointScenes`. So, for a\n",
    "given `ScenarioRollouts` we actually get a distribution of these features over\n",
    "the parallel rollouts.\n",
    "\n",
    "The final metric measures the likelihood of what actually happened in real\n",
    "life under the distribution of what *we predicted might have happened* (in\n",
    "simulation). For more details, see the challenge\n",
    "documentation.\n",
    "\n",
    "The final metrics can be called directly from `metrics.py`, as shown below.\n",
    "\n",
    "Some of the details of how these metrics are computed and aggregated can be\n",
    "found in `SimAgentMetricsConfig`. The following code demonstrates how to load\n",
    "the config used for the challenge and how to score your own submission."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "1nbLEo0aSKhK"
   },
   "outputs": [],
   "source": [
    "# Load the test configuration.\n",
    "config = metrics.load_metrics_config(challenge_type)\n",
    "\n",
    "scenario_metrics = metrics.compute_scenario_metrics_for_bundle(\n",
    "    config, scenario, scenario_rollouts, challenge_type\n",
    ")\n",
    "print(scenario_metrics)"
   ]
  },
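  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To build intuition for what a likelihood score means, here is a toy,\n",
    "standalone sketch: it estimates the distribution of a feature over the\n",
    "parallel rollouts with a histogram, then evaluates the logged value against\n",
    "it. This is purely illustrative and is not the challenge's actual estimator\n",
    "(all numbers below are made up):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Toy illustration: how likely is the logged value of a feature under the\n",
    "# distribution of simulated values? Here we use a simple histogram estimate.\n",
    "rng = np.random.default_rng(0)\n",
    "simulated_speeds = rng.normal(loc=10.0, scale=1.0, size=32)  # 32 rollouts.\n",
    "logged_speed = 10.3  # What actually happened in real life.\n",
    "\n",
    "density, edges = np.histogram(simulated_speeds, bins=8, density=True)\n",
    "bin_index = np.clip(np.digitize(logged_speed, edges) - 1, 0, len(density) - 1)\n",
    "print(f'Estimated density of the logged value: {density[bin_index]:.3f}')\n"
   ]
  },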
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "8cwPMiJjS-TT"
   },
   "source": [
    "As you can see, there is a score in the range [0,1] for each of the features\n",
    "listed above. The new field to highlight is `metametric`: this is a linear\n",
    "combination of the per-feature scores, and it's the final metric used to score\n",
    "and rank submissions."
   ]
  },
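  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a purely illustrative sketch of that aggregation (the scores and weights\n",
    "below are made up; the real weights live in `SimAgentMetricsConfig`), the\n",
    "`metametric` amounts to a normalized weighted sum of the per-feature\n",
    "likelihood scores:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Hypothetical per-feature likelihood scores in [0, 1] (illustrative values,\n",
    "# not taken from the challenge config).\n",
    "feature_scores = {\n",
    "    'linear_speed': 0.8,\n",
    "    'linear_acceleration': 0.7,\n",
    "    'distance_to_nearest_object': 0.9,\n",
    "    'distance_to_road_edge': 0.6,\n",
    "}\n",
    "# Hypothetical per-feature weights (the real ones come from the config).\n",
    "feature_weights = {\n",
    "    'linear_speed': 1.0,\n",
    "    'linear_acceleration': 1.0,\n",
    "    'distance_to_nearest_object': 2.0,\n",
    "    'distance_to_road_edge': 2.0,\n",
    "}\n",
    "# Weighted linear combination, normalized by the total weight.\n",
    "total_weight = sum(feature_weights.values())\n",
    "toy_metametric = sum(\n",
    "    feature_weights[name] * score for name, score in feature_scores.items()\n",
    ") / total_weight\n",
    "print(f'toy metametric: {toy_metametric:.3f}')  # 0.750 with these values.\n"
   ]
  },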
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "4hwm-Mmh18Le"
   },
   "source": [
    "# Generate a submission\n",
    "\n",
    "This last section will show how to package the rollouts into a valid submission.\n",
    "\n",
    "We previously showed how to generate a `ScenarioRollouts` message, the\n",
    "per-scenario container of simulations. Now we need to package multiple\n",
    "`ScenarioRollouts` into a `SimAgentsChallengeSubmission`, which also contains\n",
    "metadata about the submission (e.g. author and method name). This message then\n",
    "needs to be packaged into a binproto file.\n",
    "\n",
    "We expect submissions to be fairly large, which means that packaging all the\n",
    "`ScenarioRollouts` into a single binproto file would exceed the 2GB limit\n",
    "imposed by protocol buffers. Instead, we suggest creating one binproto file\n",
    "per shard of the dataset, as shown below.\n",
    "\n",
    "The number of shards can be arbitrary, but the file naming needs to follow\n",
    "the structure `filename.binproto-00001-of-00150`, validated by the regular\n",
    "expression `.*\\.binproto(-\\d{5}-of-\\d{5})?`.\n",
    "\n",
    "Once all the binproto files have been created, we can compress them into a\n",
    "single tar.gz archive, ready for submission. Follow the instructions on the\n",
    "challenge web page to understand how to submit this tar.gz file to our servers\n",
    "for evaluation."
   ]
  },
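  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Before exporting anything, the naming convention can be sanity-checked with a\n",
    "short standalone snippet (standard library only; the example filenames are\n",
    "made up):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import re\n",
    "\n",
    "# The filename pattern required for submission shards.\n",
    "SHARD_NAME_PATTERN = re.compile(r'.*\\.binproto(-\\d{5}-of-\\d{5})?')\n",
    "\n",
    "examples = [\n",
    "    'submission.binproto',                 # Valid: unsharded submission.\n",
    "    'submission.binproto-00001-of-00150',  # Valid: sharded submission.\n",
    "    'submission.binproto-1-of-150',        # Invalid: indices must be 5 digits.\n",
    "]\n",
    "for name in examples:\n",
    "  status = 'valid' if SHARD_NAME_PATTERN.fullmatch(name) else 'invalid'\n",
    "  print(f'{name}: {status}')\n"
   ]
  },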
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "YDljjF_mLifF"
   },
   "outputs": [],
   "source": [
    "# Where results are going to be saved.\n",
    "OUTPUT_ROOT_DIRECTORY = '/tmp/waymo_scenario_gen/'\n",
    "os.makedirs(OUTPUT_ROOT_DIRECTORY, exist_ok=True)\n",
    "output_filenames = []\n",
    "\n",
    "# Iterate over shards. This could be parallelized in any custom way, as the\n",
    "# number of output shards is not required to match the initial dataset's.\n",
    "for shard_filename in tqdm.tqdm(filenames):\n",
    "  # A shard filename has the structure: `validation.tfrecord-00000-of-00150`.\n",
    "  # We want to maintain the same shard naming here, for simplicity, so we can\n",
    "  # extract the suffix.\n",
    "  shard_suffix = shard_filename.numpy().decode('utf8')[\n",
    "      -len('-00000-of-00150') :\n",
    "  ]\n",
    "\n",
    "  # Now we can iterate over the Scenarios in the shard. To make this faster as\n",
    "  # part of the tutorial, we will only process 2 Scenarios per shard. Obviously,\n",
    "  # to create a valid submission, all the scenarios need to be present.\n",
    "  shard_dataset = tf.data.TFRecordDataset([shard_filename]).take(2)\n",
    "  shard_iterator = shard_dataset.as_numpy_iterator()\n",
    "\n",
    "  scenario_rollouts = []\n",
    "  for scenario_bytes in shard_iterator:\n",
    "    scenario = scenario_pb2.Scenario.FromString(scenario_bytes)\n",
    "    logged_trajectories, simulated_states, simulated_sizes = (\n",
    "        generate_with_random_policy(scenario, print_verbose_comments=False)\n",
    "    )\n",
    "    sr = scenario_rollouts_from_states(\n",
    "        scenario,\n",
    "        simulated_states,\n",
    "        simulated_sizes,\n",
    "        logged_trajectories.object_id,\n",
    "    )\n",
    "    submission_specs.validate_scenario_rollouts(sr, scenario, challenge_type)\n",
    "    scenario_rollouts.append(sr)\n",
    "\n",
    "  # Now that we have 2 `ScenarioRollouts` for this shard, we can package them\n",
    "  # into a `SimAgentsChallengeSubmission`. Remember to populate the metadata\n",
    "  # for each shard.\n",
    "  shard_submission = sim_agents_submission_pb2.SimAgentsChallengeSubmission(\n",
    "      scenario_rollouts=scenario_rollouts,\n",
    "      submission_type=sim_agents_submission_pb2.SimAgentsChallengeSubmission.SIM_AGENTS_SUBMISSION,\n",
    "      account_name='your_account@test.com',\n",
    "      unique_method_name='scenario_gen_tutorial',\n",
    "      authors=['test'],\n",
    "      affiliation='waymo',\n",
    "      description='Submission from the Scenario Gen tutorial',\n",
    "      method_link='https://waymo.com/open/',\n",
    "      # New REQUIRED fields.\n",
    "      uses_lidar_data=False,\n",
    "      uses_camera_data=False,\n",
    "      uses_public_model_pretraining=False,\n",
    "      num_model_parameters='24',\n",
    "      acknowledge_complies_with_closed_loop_requirement=True,\n",
    "  )\n",
    "\n",
    "  # Now we can export this message to a binproto, saved to local storage.\n",
    "  output_filename = f'submission.binproto{shard_suffix}'\n",
    "  with open(os.path.join(OUTPUT_ROOT_DIRECTORY, output_filename), 'wb') as f:\n",
    "    f.write(shard_submission.SerializeToString())\n",
    "  output_filenames.append(output_filename)\n",
    "\n",
    "# Once we have created all the shards, we can package them directly into a\n",
    "# tar.gz archive, ready for submission.\n",
    "with tarfile.open(\n",
    "    os.path.join(OUTPUT_ROOT_DIRECTORY, 'submission.tar.gz'), 'w:gz'\n",
    ") as tar:\n",
    "  for output_filename in output_filenames:\n",
    "    tar.add(\n",
    "        os.path.join(OUTPUT_ROOT_DIRECTORY, output_filename),\n",
    "        arcname=output_filename,\n",
    "    )"
   ]
  },
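  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As an optional final check (this assumes the export cell above has already\n",
    "been run, so `submission.tar.gz` exists under `OUTPUT_ROOT_DIRECTORY`), we\n",
    "can re-open the archive and verify that every member follows the required\n",
    "naming pattern:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Optional sanity check: re-open the archive and verify every member name\n",
    "# follows the required submission naming convention.\n",
    "import re\n",
    "\n",
    "shard_name_pattern = re.compile(r'.*\\.binproto(-\\d{5}-of-\\d{5})?')\n",
    "with tarfile.open(\n",
    "    os.path.join(OUTPUT_ROOT_DIRECTORY, 'submission.tar.gz'), 'r:gz'\n",
    ") as tar:\n",
    "  for member in tar.getmembers():\n",
    "    # Every member name must match the required pattern.\n",
    "    assert shard_name_pattern.fullmatch(member.name), member.name\n",
    "    print(f'{member.name}: {member.size} bytes')\n"
   ]
  },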
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "yNK6rrc0puDZ"
   },
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "colab": {
   "private_outputs": true,
   "toc_visible": true
  },
  "kernelspec": {
   "display_name": "Python 3",
   "name": "python3"
  },
  "language_info": {
   "name": "python"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 0
}
