{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Batch\n",
    "\n",
    "The `Batch` class serves as a fundamental data structure within Tianshou, designed to efficiently store and manipulate hierarchical named tensors. This tutorial provides comprehensive guidance on understanding the conceptual foundations and operational behavior of `Batch`, enabling users to fully leverage Tianshou's capabilities.\n",
    "\n",
    "The tutorial is organized into three sections: first, we establish the concept of hierarchical named tensors; second, we introduce basic `Batch` operations; and third, we explore advanced topics.\n",
    "\n",
    "## Hierarchical Named Tensors\n",
    "\n",
    "Hierarchical named tensors refer to a collection of tensors whose identifiers form a structured hierarchy. Consider a set of four tensors `[t1, t2, t3, t4]` with corresponding names `[name1, name2, name3, name4]`, where `name1` and `name2` reside within namespace `name0`. In this configuration, the fully qualified name of tensor `t1` becomes `name0.name1`, demonstrating how hierarchy manifests through tensor naming conventions.\n",
    "\n",
    "The structure of hierarchical named tensors can be represented using a tree data structure. This representation includes a virtual root node representing the entire object, with internal nodes serving as keys (names) and leaf nodes containing values (scalars or tensors).\n",
    "\n",
    "<div align=center>\n",
    "<img src=\"../_static/images/batch_tree.png\" style=\"width:50%\" title=\"batch tree\">\n",
    "</div>\n",
    "\n",
    "The necessity for hierarchical named tensors arises from the inherent heterogeneity of reinforcement learning problems. The RL abstraction itself is elegantly simple:\n",
    "\n",
    "```python\n",
    "state, reward, done = env.step(action)\n",
    "```\n",
    "\n",
    "The `reward` and `done` components are typically scalar values. However, both `state` and `action` exhibit significant variation across different environments. For instance, a `state` may be represented as a simple vector, a tensor, or a combination of camera and sensory inputs. In the latter case, hierarchical named tensors provide a natural storage mechanism. This hierarchical structure extends beyond `state` and `action` to encompass all transition components (`state`, `action`, `reward`, `done`) within a unified hierarchical framework.\n",
    "\n",
    "While storing hierarchical named tensors is straightforward using nested dictionary structures:\n",
    "\n",
    "```python\n",
    "{\n",
    "    'done': done,\n",
    "    'reward': reward,\n",
    "    'state': {\n",
    "        'camera': camera,\n",
    "        'sensory': sensory\n",
    "    },\n",
    "    'action': {\n",
    "        'direct': direct,\n",
    "        'point_3d': point_3d,\n",
    "        'force': force,\n",
    "    }\n",
    "}\n",
    "```\n",
    "\n",
    "The challenge lies in **manipulating** these structures efficiently—for example, when adding new transition tuples to a replay buffer while handling their heterogeneity. The `Batch` class addresses this challenge by providing streamlined methods to create, store, and manipulate hierarchical named tensors.\n",
    "\n",
    "`Batch` can be conceptualized as a NumPy-enhanced Python dictionary. It resembles PyTorch's `tensordict`, though the two differ in their type structure.\n",
    "\n",
    "<div align=center>\n",
    "<img src=\"../_static/images/concepts_arch.png\" title=\"data flow\">\n",
    "Data flow\n",
    "</div>"
   ]
  },
  {
   "cell_type": "code",
   "metadata": {
    "tags": [
     "remove-output",
     "hide-cell"
    ]
   },
   "source": [
    "import pickle\n",
    "\n",
    "import numpy as np\n",
    "import torch\n",
    "\n",
    "from tianshou.data import Batch"
   ],
   "outputs": [],
   "execution_count": null
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Basic Usage\n",
    "\n",
    "This section covers fundamental `Batch` operations, including the contents of `Batch` objects, construction methods, and manipulation techniques.\n",
    "\n",
    "### Content Specification\n",
    "\n",
    "The content of `Batch` objects is defined by the following rules:\n",
    "\n",
    "1. A `Batch` object may be empty (`Batch()`) or contain at least one key-value pair. Empty `Batch` objects can be utilized for key reservation (detailed in the Advanced Topics section).\n",
    "\n",
    "2. Keys must be strings, serving as identifiers for their corresponding values.\n",
    "\n",
    "3. Values may be scalars, tensors, or `Batch` objects. This recursive definition enables the construction of hierarchical batch structures.\n",
    "\n",
    "4. Tensors constitute the primary value type. Tensors are n-dimensional arrays of uniform data type. Two tensor types are supported: [PyTorch](https://pytorch.org/) tensor type `torch.Tensor` and [NumPy](https://numpy.org/) tensor type `np.ndarray`.\n",
    "\n",
    "5. Scalars are also valid values: a single boolean value, number, or object. These include Python scalars (`False`, `1`, `2.3`, `None`, `'hello'`) and NumPy scalars (`np.bool_(True)`, `np.int32(1)`, `np.float64(2.3)`). Note that scalars are distinct from `Batch`/dict/tensor types.\n",
    "\n",
    "**Note:** `Batch` objects cannot directly store `dict` objects, because `Batch` itself uses a dictionary internally to store data. During construction, `dict` objects are automatically converted to `Batch` objects.\n",
    "\n",
    "Supported tensor data types include boolean and numeric types (any integer or floating-point precision supported by NumPy or PyTorch). NumPy's support for object arrays enables storage of non-numeric data within `Batch`: data that are neither boolean nor numeric (e.g., strings, sets) can be stored in an `np.ndarray` with the `object` data type, allowing `Batch` to accommodate arbitrary Python objects."
   ]
  },
  {
   "cell_type": "code",
   "metadata": {},
   "source": [
    "data = Batch(a=4, b=[5, 5], c=\"2312312\", d=(\"a\", -2, -3))\n",
    "print(data)\n",
    "print(data.b)"
   ],
   "outputs": [],
   "execution_count": null
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A `Batch` object stores all input data as key-value pairs and automatically converts values to NumPy arrays when applicable."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Construction Methods\n",
    "\n",
    "Two primary construction methods are available for `Batch` objects: construction from a dictionary, or using keyword arguments. The following examples demonstrate these approaches."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Dictionary-Based Construction"
   ]
  },
  {
   "cell_type": "code",
   "metadata": {},
   "source": [
    "# Direct dictionary passing (potentially nested) is supported\n",
    "data = Batch({\"a\": 4, \"b\": [5, 5], \"c\": \"2312312\"})\n",
    "# Lists are automatically converted to NumPy arrays\n",
    "print(data.b)\n",
    "data.b = np.array([3, 4, 5])\n",
    "print(data)"
   ],
   "outputs": [],
   "execution_count": null
  },
  {
   "cell_type": "code",
   "metadata": {},
   "source": [
    "# Lists of dictionary objects (potentially nested) are automatically stacked\n",
    "data = Batch([{\"a\": 0.0, \"b\": \"hello\"}, {\"a\": 1.0, \"b\": \"world\"}])\n",
    "print(data)"
   ],
   "outputs": [],
   "execution_count": null
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Keyword Argument Construction"
   ]
  },
  {
   "cell_type": "code",
   "metadata": {},
   "source": [
    "# Construction using keyword arguments\n",
    "data = Batch(a=[4, 4], b=[5, 5], c=[None, None])\n",
    "print(data)"
   ],
   "outputs": [],
   "execution_count": null
  },
  {
   "cell_type": "code",
   "metadata": {},
   "source": [
    "# Combining dictionary and keyword arguments\n",
    "data = Batch(\n",
    "    {\"a\": [4, 4], \"b\": [5, 5]}, c=[None, None]\n",
    ")  # First argument is a dictionary; 'c' is a keyword argument\n",
    "print(data)"
   ],
   "outputs": [],
   "execution_count": null
  },
  {
   "cell_type": "code",
   "metadata": {},
   "source": [
    "arr = np.zeros((3, 4))\n",
    "# By default, Batch maintains references to data; explicit copying is supported via the copy parameter\n",
    "data = Batch(arr=arr, copy=True)  # data.arr is now a copy of 'arr'"
   ],
   "outputs": [],
   "execution_count": null
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Nested Batch Construction"
   ]
  },
  {
   "cell_type": "code",
   "metadata": {},
   "source": [
    "# Nested dictionaries are converted to nested Batch objects\n",
    "data = {\n",
    "    \"action\": np.array([1.0, 2.0, 3.0]),\n",
    "    \"reward\": 3.66,\n",
    "    \"obs\": {\n",
    "        \"rgb_obs\": np.zeros((3, 3)),\n",
    "        \"flatten_obs\": np.ones(5),\n",
    "    },\n",
    "}\n",
    "\n",
    "batch = Batch(data, extra=\"extra_string\")\n",
    "print(batch)\n",
    "# batch.obs is also a Batch instance\n",
    "print(type(batch.obs))\n",
    "print(batch.obs.rgb_obs)"
   ],
   "outputs": [],
   "execution_count": null
  },
  {
   "cell_type": "code",
   "metadata": {},
   "source": [
    "# Lists of dictionaries/Batches are automatically concatenated/stacked\n",
    "# This feature facilitates data collection from parallelized environments\n",
    "batch = Batch([data] * 3)\n",
    "print(batch)\n",
    "print(batch.obs.rgb_obs.shape)"
   ],
   "outputs": [],
   "execution_count": null
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Data Manipulation\n",
    "\n",
    "Internal data can be accessed using either `b.key` or `b[key]` notation, where `b.key` retrieves the subtree rooted at `key`. When the result is a non-empty subtree, key references can be chained (e.g., `b.key.key1.key2.key3`). Upon reaching a leaf node, the stored data (scalars or tensors) is returned."
   ]
  },
  {
   "cell_type": "code",
   "metadata": {},
   "source": [
    "data = Batch(a=4, b=[5, 5])\n",
    "print(data.b)\n",
    "# Attribute access (obj.key) is equivalent to dictionary access (obj[\"key\"])\n",
    "print(data[\"a\"])"
   ],
   "outputs": [],
   "execution_count": null
  },
  {
   "cell_type": "code",
   "metadata": {},
   "source": [
    "# Dictionary-style iteration over items is supported\n",
    "for key, value in data.items():\n",
    "    print(f\"{key}: {value}\")"
   ],
   "outputs": [],
   "execution_count": null
  },
  {
   "cell_type": "code",
   "metadata": {},
   "source": [
    "# Methods keys() and values() behave analogously to their dict counterparts\n",
    "for key in data.keys():\n",
    "    print(f\"{key}\")"
   ],
   "outputs": [],
   "execution_count": null
  },
  {
   "cell_type": "code",
   "metadata": {},
   "source": [
    "# The update() method operates analogously to dict.update()\n",
    "# Equivalent to: data.c = 1; data.d = 2; data.e = 3;\n",
    "data.update(c=1, d=2, e=3)\n",
    "print(data)"
   ],
   "outputs": [],
   "execution_count": null
  },
  {
   "cell_type": "code",
   "metadata": {},
   "source": [
    "# Adding and deleting key-value pairs\n",
    "batch1 = Batch({\"a\": [4, 4], \"b\": (5, 5)})\n",
    "print(batch1)\n",
    "\n",
    "batch1.c = Batch(c1=np.arange(3), c2=False)\n",
    "del batch1.a\n",
    "print(batch1)\n",
    "\n",
    "# Accessing values by key\n",
    "assert batch1[\"c\"] is batch1.c\n",
    "print(\"c\" in batch1)"
   ],
   "outputs": [],
   "execution_count": null
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**Important Note:** While `for x in data` iterates over keys when `data` is a `dict` object, for `Batch` objects this syntax iterates over `data[0], data[1], ..., data[-1]`."
   ]
  },
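  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a minimal sketch of this difference (assuming the iteration semantics described above):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "from tianshou.data import Batch\n",
    "\n",
    "d = {'x': np.array([1, 2]), 'y': np.array([3, 4])}\n",
    "for key in d:  # dict iteration yields keys: 'x', 'y'\n",
    "    print(key)\n",
    "\n",
    "data = Batch(d)\n",
    "for item in data:  # Batch iteration yields data[0], data[1], ...\n",
    "    print(item)\n",
    "```"
   ]
  },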
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Length, Shape, Indexing, and Slicing\n",
    "\n",
    "`Batch` implements a subset of NumPy ndarray APIs, supporting advanced slicing operations (e.g., `batch[:, i]`) provided the slice is valid. NumPy's broadcasting mechanism is also supported."
   ]
  },
  {
   "cell_type": "code",
   "metadata": {},
   "source": [
    "# Initializing Batch with tensors\n",
    "data = Batch(a=np.array([[0.0, 2.0], [1.0, 3.0]]), b=[[5.0, -5.0], [1.0, -2.0]])\n",
    "# When all values share the same length/shape, the Batch adopts that length/shape\n",
    "print(len(data))\n",
    "print(data.shape)"
   ],
   "outputs": [],
   "execution_count": null
  },
  {
   "cell_type": "code",
   "metadata": {},
   "source": [
    "# Accessing the first element of all stored tensors while preserving Batch structure\n",
    "print(data[0])"
   ],
   "outputs": [],
   "execution_count": null
  },
  {
   "cell_type": "code",
   "metadata": {},
   "source": [
    "# Iteration over data[0], data[1], ..., data[-1]\n",
    "for sample in data:\n",
    "    print(sample.a)"
   ],
   "outputs": [],
   "execution_count": null
  },
  {
   "cell_type": "code",
   "metadata": {},
   "source": [
    "# Advanced slicing with arithmetic operations and broadcasting\n",
    "data[:, 1] += 1\n",
    "print(data)"
   ],
   "outputs": [],
   "execution_count": null
  },
  {
   "cell_type": "code",
   "metadata": {},
   "source": [
    "# Direct application of NumPy functions to Batch objects\n",
    "print(np.mean(data))"
   ],
   "outputs": [],
   "execution_count": null
  },
  {
   "cell_type": "code",
   "metadata": {},
   "source": [
    "# Conversion to list is supported\n",
    "list(data)"
   ],
   "outputs": [],
   "execution_count": null
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Environment Stepping Example"
   ]
  },
  {
   "cell_type": "code",
   "metadata": {},
   "source": [
    "# Example: Data collected from four parallel environments\n",
    "step_outputs = [\n",
    "    {\n",
    "        \"act\": np.random.randint(10),\n",
    "        \"rew\": 0.0,\n",
    "        \"obs\": np.ones((3, 3)),\n",
    "        \"info\": {\"done\": np.random.choice(2), \"failed\": False},\n",
    "        \"terminated\": False,\n",
    "        \"truncated\": False,\n",
    "    }\n",
    "    for _ in range(4)\n",
    "]\n",
    "batch = Batch(step_outputs)\n",
    "print(batch)\n",
    "print(batch.shape)"
   ],
   "outputs": [],
   "execution_count": null
  },
  {
   "cell_type": "code",
   "metadata": {},
   "source": [
    "# Advanced indexing for selecting data from specific environments\n",
    "print(batch[0])\n",
    "print(batch[[0, 3]])"
   ],
   "outputs": [],
   "execution_count": null
  },
  {
   "cell_type": "code",
   "metadata": {},
   "source": [
    "# Slicing operations are supported\n",
    "print(batch[-2:])"
   ],
   "outputs": [],
   "execution_count": null
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Stack, Concatenate, and Split Operations\n",
    "\n",
    "Tianshou provides intuitive methods for stacking and concatenating multiple `Batch` instances, as well as splitting instances into multiple batches. Currently, we focus on aggregation (stack/concatenate) of homogeneous (structurally identical) batches."
   ]
  },
  {
   "cell_type": "code",
   "metadata": {},
   "source": [
    "data_1 = Batch(a=np.array([0.0, 2.0]), b=5)\n",
    "data_2 = Batch(a=np.array([1.0, 3.0]), b=-5)\n",
    "data = Batch.stack((data_1, data_2))\n",
    "print(data)"
   ],
   "outputs": [],
   "execution_count": null
  },
  {
   "cell_type": "code",
   "metadata": {},
   "source": [
    "# Split operation with optional shuffling\n",
    "data_split = list(data.split(1, shuffle=False))\n",
    "print(data_split)"
   ],
   "outputs": [],
   "execution_count": null
  },
  {
   "cell_type": "code",
   "metadata": {},
   "source": [
    "data_cat = Batch.cat(data_split)\n",
    "print(data_cat)"
   ],
   "outputs": [],
   "execution_count": null
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Additional Concatenation and Stacking Examples"
   ]
  },
  {
   "cell_type": "code",
   "metadata": {},
   "source": [
    "# Concatenating batches with compatible keys\n",
    "b1 = Batch(a=[{\"b\": np.float64(1.0), \"d\": Batch(e=np.array(3.0))}])\n",
    "b2 = Batch(a=[{\"b\": np.float64(4.0), \"d\": {\"e\": np.array(6.0)}}])\n",
    "b12_cat_out = Batch.cat([b1, b2])\n",
    "print(b1)\n",
    "print(b2)\n",
    "print(b12_cat_out)"
   ],
   "outputs": [],
   "execution_count": null
  },
  {
   "cell_type": "code",
   "metadata": {},
   "source": [
    "# Stacking batches with compatible keys along specified axis\n",
    "b3 = Batch(a=np.zeros((3, 2)), b=np.ones((2, 3)), c=Batch(d=[[1], [2]]))\n",
    "b4 = Batch(a=np.ones((3, 2)), b=np.ones((2, 3)), c=Batch(d=[[0], [3]]))\n",
    "b34_stack = Batch.stack((b3, b4), axis=1)\n",
    "print(b3)\n",
    "print(b4)\n",
    "print(b34_stack)"
   ],
   "outputs": [],
   "execution_count": null
  },
  {
   "cell_type": "code",
   "metadata": {},
   "source": [
    "# Splitting batch into unit-sized batches with optional shuffling\n",
    "print(type(b34_stack.split(1)))\n",
    "print(list(b34_stack.split(1, shuffle=True)))"
   ],
   "outputs": [],
   "execution_count": null
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Data Type Conversion\n",
    "\n",
    "While `Batch` supports both NumPy arrays and PyTorch Tensors with identical usage patterns, seamless conversion between these types is provided."
   ]
  },
  {
   "cell_type": "code",
   "metadata": {},
   "source": [
    "batch1 = Batch(a=np.arange(2), b=torch.zeros((2, 2)))\n",
    "batch2 = Batch(a=np.arange(2), b=torch.ones((2, 2)))\n",
    "batch_cat = Batch.cat([batch1, batch2, batch1])\n",
    "print(batch_cat)"
   ],
   "outputs": [],
   "execution_count": null
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Data type conversion is straightforward when uniform data types are desired."
   ]
  },
  {
   "cell_type": "code",
   "metadata": {},
   "source": [
    "data = Batch(a=np.zeros((3, 4)))\n",
    "data.to_torch_(dtype=torch.float32, device=\"cpu\")\n",
    "print(data.a)\n",
    "# Conversion to NumPy is also supported via to_numpy_()\n",
    "data.to_numpy_()\n",
    "print(data.a)"
   ],
   "outputs": [],
   "execution_count": null
  },
  {
   "cell_type": "code",
   "metadata": {},
   "source": [
    "batch_cat.to_numpy_()\n",
    "print(batch_cat)\n",
    "batch_cat.to_torch_()\n",
    "print(batch_cat)"
   ],
   "outputs": [],
   "execution_count": null
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Serialization\n",
    "\n",
    "`Batch` objects are serializable and compatible with Python's `pickle` module, enabling persistent storage and restoration. This capability is particularly important for distributed environment sampling."
   ]
  },
  {
   "cell_type": "code",
   "metadata": {},
   "source": [
    "batch = Batch(obs=Batch(a=0.0, c=torch.Tensor([1.0, 2.0])), np=np.zeros([3, 4]))\n",
    "batch_pk = pickle.loads(pickle.dumps(batch))\n",
    "print(batch_pk)"
   ],
   "outputs": [],
   "execution_count": null
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Advanced Topics\n",
    "\n",
    "This section addresses advanced `Batch` concepts, including key reservation mechanisms, detailed length and shape semantics, and aggregation of heterogeneous batches.\n",
    "\n",
    "### Key Reservation\n",
    "\n",
    "In many scenarios, the key structure is known in advance while value shapes remain undetermined until runtime (e.g., after environment execution). Tianshou supports key reservation through placeholder values.\n",
    "\n",
    "<div style=\"text-align: center; padding: 1rem;\">\n",
    "<img src=\"../_static/images/batch_reserve.png\" style=\"width: 50%; padding-bottom: 1rem;\"><br>\n",
    "Structure of a batch with reserved keys\n",
    "</div>\n",
    "\n",
    "Key reservation is implemented using empty `Batch()` objects as placeholder values."
   ]
  },
  {
   "cell_type": "code",
   "metadata": {},
   "source": [
    "a = Batch(b=Batch())  # 'b' is a reserved key\n",
    "print(a)\n",
    "\n",
    "# Hierarchical key reservation\n",
    "a = Batch(b=Batch(c=Batch()), d=Batch())  # 'c' and 'd' are reserved keys\n",
    "print(a)\n",
    "\n",
    "a = Batch(key1=np.array([1, 2]), key2=np.array([3, 4]), key3=Batch(key4=Batch(), key5=Batch()))\n",
    "print(a)"
   ],
   "outputs": [],
   "execution_count": null
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The structure of `Batch` objects with reserved keys can be visualized using tree notation, where reserved keys represent internal nodes lacking attached leaf nodes.\n",
    "\n",
    "**Important:** Reserved keys indicate that values will eventually be assigned. These values may be scalars, tensors, or `Batch` objects. Understanding this concept is essential for working with heterogeneous batches.\n",
    "\n",
    "The introduction of reserved keys necessitates verification methods."
   ]
  },
  {
   "cell_type": "code",
   "metadata": {},
   "source": [
    "# Examples of checking whether a Batch is empty\n",
    "print(len(Batch().get_keys()) == 0)\n",
    "print(len(Batch(a=Batch(), b=Batch(c=Batch())).get_keys()) == 0)\n",
    "print(len(Batch(a=Batch(), b=Batch(c=Batch()))) == 0)\n",
    "print(len(Batch(d=1).get_keys()) == 0)\n",
    "print(len(Batch(a=np.float64(1.0)).get_keys()) == 0)"
   ],
   "outputs": [],
   "execution_count": null
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To verify emptiness, use `len(obj.get_keys()) == 0` for direct emptiness (a simple `Batch()`) and `len(obj) == 0` for recursive emptiness (a `Batch` without scalar or tensor leaf nodes).\n",
    "\n",
    "**Note:** The `Batch.empty` attribute differs from emptiness checking. `Batch.empty` and its in-place variant `Batch.empty_` are used to reset values to zeros or `None`. Consult the API documentation for additional details."
   ]
  },
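  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A small sketch of the reset behavior (assuming `empty_` resets numeric leaves to zeros in place, as described above):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "from tianshou.data import Batch\n",
    "\n",
    "data = Batch(a=np.arange(3.0))\n",
    "data.empty_()  # reset values in place: numbers become 0\n",
    "print(data.a)  # expected: an array of zeros\n",
    "```"
   ]
  },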
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Length and Shape Semantics\n",
    "\n",
    "The primary use case for `Batch` is storing batched data collections. The term \"Batch\" originates from deep learning terminology, denoting mini-batches sampled from datasets. Typically, a \"Batch\" represents a collection of tensors sharing a common first dimension, with batch size corresponding to the `Batch` object's length.\n",
    "\n",
    "When all leaf nodes in a `Batch` object are tensors but possess different lengths, storage within `Batch` remains possible. However, the semantics of `len(obj)` become ambiguous. Currently, Tianshou returns the minimum tensor length, though we strongly recommend avoiding `len(obj)` operations on `Batch` objects containing tensors of varying lengths."
   ]
  },
  {
   "cell_type": "code",
   "metadata": {},
   "source": [
    "# Length and shape examples for Batch objects\n",
    "data = Batch(a=[5.0, 4.0], b=np.zeros((2, 3, 4)))\n",
    "print(data.shape)\n",
    "print(len(data))\n",
    "print(data[0].shape)\n",
    "try:\n",
    "    len(data[0])\n",
    "except TypeError as e:\n",
    "    print(f\"TypeError: {e}\")"
   ],
   "outputs": [],
   "execution_count": null
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**Important:** Following scientific computing conventions, scalars possess no length. If any scalar leaf node exists in a `Batch` object, invoking `len(obj)` raises an exception.\n",
    "\n",
    "Similarly, reserved keys have undetermined values and therefore no defined length (or equivalently, **arbitrary** length). When tensors and reserved keys coexist, the latter are ignored in `len(obj)` calculations, returning the minimum tensor length. When no tensors exist in the `Batch` object, Tianshou raises an exception.\n",
    "\n",
    "The `obj.shape` attribute exhibits similar behavior to `len(obj)`:\n",
    "\n",
    "1. When all leaf nodes are tensors with identical shapes, that shape is returned.\n",
    "\n",
    "2. When all leaf nodes are tensors with differing shapes, the minimum length per dimension is returned.\n",
    "\n",
    "3. When any scalar value exists, `obj.shape` returns `[]`.\n",
    "\n",
    "4. Reserved keys have undetermined shape, treated as `[]`."
   ]
  },
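  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A brief sketch of rules 1 and 3 (assuming the shape semantics described above):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "from tianshou.data import Batch\n",
    "\n",
    "# Rule 1: identical tensor shapes are reported directly\n",
    "print(Batch(a=np.zeros((2, 3)), b=np.zeros((2, 3))).shape)\n",
    "\n",
    "# Rule 3: any scalar leaf collapses the shape to []\n",
    "print(Batch(a=np.zeros((2, 3)), s=1).shape)\n",
    "```"
   ]
  },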
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Aggregation of Heterogeneous Batches\n",
    "\n",
    "This section examines aggregation operations (stack/concatenate) on heterogeneous `Batch` objects, focusing on structural heterogeneity. Aggregation operations ultimately invoke NumPy/PyTorch operators (`np.stack`, `np.concatenate`, `torch.stack`, `torch.cat`). Value heterogeneity that violates these operators' requirements (e.g., stacking `np.ndarray` with `torch.Tensor`, or stacking tensors with incompatible shapes) results in exceptions.\n",
    "\n",
    "<div style=\"text-align: center; padding: 1rem;\">\n",
    "<img src=\"../_static/images/aggregation.png\" style=\"width: 100%; padding-bottom: 0rem;\"><br>\n",
    "</div>\n",
    "\n",
    "The behavior is intuitive: keys not shared across all batches are padded with zeros (or `None` for the `object` data type) in batches lacking these keys."
   ]
  },
  {
   "cell_type": "code",
   "metadata": {},
   "source": [
    "# Stack example: batch a lacks key 'b', batch b lacks key 'a'\n",
    "a = Batch(a=np.zeros([4, 4]), common=Batch(c=np.zeros([4, 5])))\n",
    "b = Batch(b=np.zeros([4, 6]), common=Batch(c=np.zeros([4, 5])))\n",
    "c = Batch.stack([a, b])\n",
    "print(c.a.shape)\n",
    "print(c.b.shape)\n",
    "print(c.common.c.shape)"
   ],
   "outputs": [],
   "execution_count": null
  },
  {
   "cell_type": "code",
   "metadata": {},
   "source": [
    "# Automatic padding with None or 0 using appropriate shapes\n",
    "data_1 = Batch(a=np.array([0.0, 2.0]))\n",
    "data_2 = Batch(a=np.array([1.0, 3.0]), b=\"done\")\n",
    "data = Batch.stack((data_1, data_2))\n",
    "print(data)"
   ],
   "outputs": [],
   "execution_count": null
  },
  {
   "cell_type": "code",
   "metadata": {},
   "source": [
    "# Concatenation example: batch a lacks key 'b', batch b lacks key 'a'\n",
    "a = Batch(a=np.zeros([3, 4]), common=Batch(c=np.zeros([3, 5])))\n",
    "b = Batch(b=np.zeros([4, 3]), common=Batch(c=np.zeros([4, 5])))\n",
    "# Note: Recent changes have modified concatenation behavior for heterogeneous batches\n",
    "# The following operation is no longer supported:\n",
    "# c = Batch.cat([a, b])\n",
    "# print(c.a.shape)\n",
    "# print(c.b.shape)\n",
    "# print(c.common.c.shape)"
   ],
   "outputs": [],
   "execution_count": null
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "However, certain cases of extreme heterogeneity prevent aggregation:"
   ]
  },
  {
   "cell_type": "code",
   "metadata": {},
   "source": [
    "# Example of incompatible batches that cannot be aggregated\n",
    "try:\n",
    "    a = Batch(a=np.zeros([4, 4]))\n",
    "    b = Batch(a=Batch(b=Batch()))\n",
    "    c = Batch.stack([a, b])\n",
    "except Exception as e:\n",
    "    print(f\"Exception: {e}\")"
   ],
   "outputs": [],
   "execution_count": null
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "How can we determine if batches can be aggregated? Reconsider the purpose of reserved keys. The distinction between `a1=Batch(b=Batch())` and `a2=Batch()` is that `a1.b` returns `Batch()` while `a2.b` raises an exception. **Reserved keys enable attribute reference for future value assignment.**\n",
    "\n",
    "A key chain `k=[key1, key2, ..., keyn]` applies to `b` if the expression `b.key1.key2.{...}.keyn` is valid, with the result being `b[k]`.\n",
    "\n",
    "For a set of `Batch` objects S, aggregation is possible if there exists a `Batch` object `b` satisfying:\n",
    "\n",
    "1. **Key chain applicability:** For any object `bi` in S and any key chain `k`, if `bi[k]` is valid, then `b[k]` must be valid.\n",
    "\n",
    "2. **Type consistency:** If `bi[k]` is not `Batch()` (the final key in the chain is not reserved), then the type of `b[k]` must match `bi[k]` (both must be scalar/tensor/non-empty Batch values).\n",
    "\n",
    "The `Batch` object `b` satisfying these rules with minimal keys determines the aggregation structure. Values are defined as follows: for any applicable key chain `k`, `b[k]` represents the stack/concatenation of `[bi[k] for bi in S]` (with appropriate zero or `None` padding when `k` does not apply to `bi`). When all `bi[k]` are `Batch()`, the aggregation result is also an empty `Batch()`."
   ]
  },
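  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For instance, when a key chain leads to `Batch()` in every input, the aggregated value stays empty (a sketch of the final rule above):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "from tianshou.data import Batch\n",
    "\n",
    "b1 = Batch(a=np.zeros((3, 4)), b=Batch())  # 'b' is a reserved key\n",
    "b2 = Batch(a=np.ones((3, 4)), b=Batch())\n",
    "out = Batch.stack([b1, b2])\n",
    "print(out.a.shape)  # stacked along a new first axis\n",
    "print(out.b)  # remains an empty Batch()\n",
    "```"
   ]
  },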
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Additional Considerations\n",
    "\n",
    "1. Environment observations typically utilize NumPy ndarrays, while policies require `torch.Tensor` for prediction and learning. Tianshou provides helper functions for in-place conversion between NumPy arrays and Torch tensors.\n",
    "\n",
    "2. `obj.stack_([a, b])` is equivalent to `Batch.stack([obj, a, b])`, and `obj.cat_([a, b])` is equivalent to `Batch.cat([obj, a, b])`. For frequently required two-batch concatenation, `obj.cat_(a)` serves as an alias for `obj.cat_([a])`.\n",
    "\n",
    "3. `Batch.cat` and `Batch.cat_` currently do not support the `axis` argument available in `np.concatenate` and `torch.cat`.\n",
    "\n",
    "4. `Batch.stack` and `Batch.stack_` support the `axis` argument, enabling stacking along dimensions beyond the first. However, when keys are not shared across all batches, `stack` with `axis != 0` is undefined and currently raises an exception."
   ]
  }
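  ,
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Point 2 can be illustrated with a short sketch (assuming the in-place aliases behave as described above):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "from tianshou.data import Batch\n",
    "\n",
    "obj = Batch(a=np.zeros((1, 2)))\n",
    "other = Batch(a=np.ones((1, 2)))\n",
    "obj.cat_(other)  # in-place equivalent of Batch.cat([obj, other])\n",
    "print(obj.a.shape)  # concatenated along the first axis\n",
    "```"
   ]
  }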
 ],
 "metadata": {
  "colab": {
   "provenance": []
  },
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.11.4"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
