{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Buffer: Experience Replay in Tianshou\n",
    "\n",
    "The replay buffer is a fundamental component in reinforcement learning, particularly for off-policy algorithms. Tianshou's buffer implementation extends beyond simple data storage to provide sophisticated trajectory tracking, efficient sampling, and seamless integration with the RL training pipeline.\n",
    "\n",
    "This tutorial provides comprehensive coverage of Tianshou's buffer system, from basic concepts to advanced features and integration patterns."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import pickle\n",
    "import tempfile\n",
    "\n",
    "import numpy as np\n",
    "\n",
    "from tianshou.data import Batch, PrioritizedReplayBuffer, ReplayBuffer, VectorReplayBuffer"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 1. Introduction: Why Buffers in Reinforcement Learning?\n",
    "\n",
    "### The Role of Experience Replay\n",
    "\n",
    "Experience replay is a critical technique in modern reinforcement learning that addresses three fundamental challenges:\n",
    "\n",
    "1. **Breaking Temporal Correlation**: Sequential experiences from an agent are highly correlated. Training directly on these sequences can lead to unstable learning. By storing experiences and sampling randomly, we break these correlations.\n",
    "\n",
    "2. **Sample Efficiency**: In RL, collecting data through environment interaction is often expensive. Experience replay allows us to reuse each experience multiple times for training, dramatically improving sample efficiency.\n",
    "\n",
    "3. **Mini-batch Training**: Modern deep learning requires mini-batch gradient descent. Buffers enable efficient batching of experiences for neural network training.\n",
    "\n",
    "### Why Not Alternatives?\n",
    "\n",
    "**Plain Python Lists**\n",
    "- No efficient random sampling\n",
    "- No automatic circular queue behavior\n",
    "- No trajectory boundary tracking\n",
    "- Poor memory management for large datasets\n",
    "\n",
    "**Simple Batch Storage**\n",
    "- No automatic overwriting when full\n",
    "- No episode metadata (returns, lengths)\n",
    "- No methods for boundary navigation (prev/next)\n",
    "- No specialized sampling strategies\n",
    "\n",
    "### Buffer = Batch + Trajectory Management + Sampling\n",
    "\n",
    "Tianshou's buffers build on the `Batch` class to provide:\n",
    "- **Circular queue storage**: Automatic overwriting of oldest data\n",
    "- **Trajectory tracking**: Episode boundaries, returns, and lengths\n",
    "- **Efficient sampling**: Random access with various strategies\n",
    "- **Integration utilities**: Seamless connection to Collector and Policy\n",
    "\n",
    "### Use Cases\n",
    "\n",
    "- **Off-policy algorithms**: DQN, SAC, TD3, DDPG require experience replay\n",
    "- **On-policy with replay**: Some PPO implementations reuse buffer data\n",
    "- **Offline RL**: Loading and using pre-collected datasets\n",
    "- **Multi-environment training**: VectorReplayBuffer for parallel collection"
   ]
  },
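  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The workflow implied above (create, fill, sample) can be sketched in a few lines. The transition values below are arbitrary placeholders:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Minimal buffer workflow sketch: create, fill, sample (arbitrary values)\n",
    "mini_buf = ReplayBuffer(size=10)\n",
    "for i in range(5):\n",
    "    mini_buf.add(\n",
    "        Batch(\n",
    "            obs=i,\n",
    "            act=0,\n",
    "            rew=1.0,\n",
    "            terminated=False,\n",
    "            truncated=False,\n",
    "            obs_next=i + 1,\n",
    "            info={},\n",
    "        )\n",
    "    )\n",
    "batch, indices = mini_buf.sample(batch_size=2)\n",
    "print(f\"Stored {len(mini_buf)} transitions, sampled {len(batch)}\")"
   ]
  },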
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 2. Buffer Types and Hierarchy\n",
    "\n",
    "Tianshou provides several buffer implementations, each designed for specific use cases. Understanding this hierarchy is crucial for choosing the right buffer.\n",
    "\n",
    "### Buffer Hierarchy\n",
    "\n",
    "```mermaid\n",
    "graph TD\n",
    "    RB[ReplayBuffer<br/>Single environment<br/>Circular queue] --> RBM[ReplayBufferManager<br/>Manages multiple buffers<br/>Contiguous memory]\n",
    "    RBM --> VRB[VectorReplayBuffer<br/>Parallel environments<br/>Maintains temporal order]\n",
    "    \n",
    "    RB --> PRB[PrioritizedReplayBuffer<br/>TD-error based sampling<br/>Importance weights]\n",
    "    PRB --> PVRB[PrioritizedVectorReplayBuffer<br/>Prioritized + Parallel]\n",
    "    \n",
    "    RB --> CRB[CachedReplayBuffer<br/>Primary + auxiliary caches<br/>Imitation learning]\n",
    "    \n",
    "    RB --> HERB[HERReplayBuffer<br/>Hindsight Experience Replay<br/>Goal-conditioned RL]\n",
    "    HERB --> HVRB[HERVectorReplayBuffer<br/>HER + Parallel]\n",
    "    \n",
    "    style RB fill:#e1f5ff\n",
    "    style RBM fill:#fff4e1\n",
    "    style VRB fill:#ffe1f5\n",
    "    style PRB fill:#e8f5e1\n",
    "    style CRB fill:#f5e1e1\n",
    "    style HERB fill:#e1e1f5\n",
    "```\n",
    "\n",
    "### When to Use Which Buffer\n",
    "\n",
    "**ReplayBuffer**: Single environment scenarios\n",
    "- Simple setup and testing\n",
    "- Debugging algorithms\n",
    "- Low-parallelism training\n",
    "\n",
    "**VectorReplayBuffer**: Multiple parallel environments (most common)\n",
    "- Standard production use case\n",
    "- Efficient parallel data collection\n",
    "- Maintains per-environment episode boundaries\n",
    "\n",
    "**PrioritizedReplayBuffer**: DQN variants with prioritization\n",
    "- Rainbow DQN\n",
    "- Algorithms requiring importance sampling\n",
    "- When some transitions are more valuable than others\n",
    "\n",
    "**CachedReplayBuffer**: Separate primary and auxiliary caches\n",
    "- Imitation learning (expert + agent data)\n",
    "- GAIL and similar algorithms\n",
    "- When you need different sampling strategies for different data sources\n",
    "\n",
    "**HERReplayBuffer**: Goal-conditioned reinforcement learning\n",
    "- Sparse reward environments\n",
    "- Robotics tasks with explicit goals\n",
    "- Relabeling failed experiences with achieved goals"
   ]
  },
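  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The classes imported at the top share a common construction pattern. As a sketch (sizes and hyperparameters here are arbitrary; `alpha` and `beta` follow the conventions of the prioritized experience replay paper):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Constructing the main buffer variants (arbitrary sizes)\n",
    "single_buf = ReplayBuffer(size=1000)  # one environment\n",
    "vector_buf = VectorReplayBuffer(total_size=1000, buffer_num=4)  # 4 parallel envs\n",
    "# alpha: prioritization strength, beta: importance-sampling correction\n",
    "prio_buf = PrioritizedReplayBuffer(size=1000, alpha=0.6, beta=0.4)\n",
    "print(f\"Capacities: {single_buf.maxsize}, {vector_buf.maxsize}, {prio_buf.maxsize}\")"
   ]
  },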
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 3. Basic Operations\n",
    "\n",
    "### 3.1 Construction and Configuration\n",
    "\n",
    "The ReplayBuffer constructor accepts several important parameters that control its behavior:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Create a buffer with all configuration options\n",
    "buf = ReplayBuffer(\n",
    "    size=20,  # Maximum capacity (transitions)\n",
    "    stack_num=1,  # Frame stacking for RNNs (default: 1, no stacking)\n",
    "    ignore_obs_next=False,  # Save memory by not storing obs_next\n",
    "    save_only_last_obs=False,  # For temporal stacking (Atari-style)\n",
    "    sample_avail=False,  # Sample only valid indices for frame stacking\n",
    "    random_seed=42,  # Reproducible sampling\n",
    ")\n",
    "\n",
    "print(f\"Buffer created: {buf}\")\n",
    "print(f\"Max size: {buf.maxsize}\")\n",
    "print(f\"Current length: {len(buf)}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**Parameter Explanations**:\n",
    "\n",
    "- `size`: Maximum number of transitions the buffer can hold. When full, oldest data is overwritten.\n",
    "- `stack_num`: Number of consecutive frames to stack. Used for RNN inputs or frame-based policies (Atari).\n",
    "- `ignore_obs_next`: If True, obs_next is not stored, saving memory. The buffer reconstructs it from the next obs when needed.\n",
    "- `save_only_last_obs`: For temporal stacking. Only saves the last observation in a stack.\n",
    "- `sample_avail`: When True with stack_num > 1, only samples indices where a complete stack is available.\n",
    "- `random_seed`: Seeds the random number generator for reproducible sampling."
   ]
  },
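  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To see `ignore_obs_next` in action, the sketch below (arbitrary values) stores transitions without `obs_next` and shows that indexing still reconstructs it from the following observation:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# With ignore_obs_next=True, obs_next is not stored separately;\n",
    "# on access it is reconstructed from the obs at the next index\n",
    "lean_buf = ReplayBuffer(size=10, ignore_obs_next=True)\n",
    "for i in range(3):\n",
    "    lean_buf.add(\n",
    "        Batch(\n",
    "            obs=i,\n",
    "            act=0,\n",
    "            rew=0.0,\n",
    "            terminated=False,\n",
    "            truncated=False,\n",
    "            obs_next=i + 1,  # accepted by add(), but not stored\n",
    "            info={},\n",
    "        )\n",
    "    )\n",
    "print(f\"Reconstructed obs_next at index 0: {lean_buf[0].obs_next}\")"
   ]
  },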
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 3.2 Reserved Keys and the Done Flag System\n",
    "\n",
    "ReplayBuffer uses nine reserved keys that integrate with Gymnasium conventions. Understanding the done flag system is critical."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# The nine reserved keys\n",
    "print(\"Reserved keys:\")\n",
    "print(ReplayBuffer._reserved_keys)\n",
    "print(\"\\nKeys required for add():\")\n",
    "print(ReplayBuffer._required_keys_for_add)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**Important: Understanding done, terminated, and truncated**\n",
    "\n",
    "Gymnasium (the successor to OpenAI Gym) introduced a crucial distinction:\n",
    "\n",
    "- `terminated`: Episode ended naturally (agent reached goal or failed)\n",
    "  - Examples: CartPole fell over, agent reached goal state\n",
    "  - Should be used for bootstrapping calculations\n",
    "\n",
    "- `truncated`: Episode was cut off artificially (time limit, external interruption)\n",
    "  - Examples: Maximum episode length reached, environment reset externally  \n",
    "  - Should NOT be used for bootstrapping (the episode could have continued)\n",
    "\n",
    "- `done`: Computed automatically as `terminated OR truncated`\n",
    "  - Used internally for episode boundary tracking\n",
    "  - You should NEVER manually set this field\n",
    "\n",
    "**Best Practice**: Always use the `info` dictionary for custom metadata rather than adding top-level keys:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# GOOD: Custom metadata in info dictionary\n",
    "good_batch = Batch(\n",
    "    obs=np.array([1.0, 2.0]),\n",
    "    act=0,\n",
    "    rew=1.0,\n",
    "    terminated=False,\n",
    "    truncated=False,\n",
    "    obs_next=np.array([1.5, 2.5]),\n",
    "    info={\"custom_metric\": 0.95, \"step_count\": 10},  # Custom data here\n",
    ")\n",
    "\n",
    "# BAD: Don't add custom top-level keys (may conflict with future buffer features)\n",
    "# bad_batch = Batch(..., custom_metric=0.95)  # Don't do this!\n",
    "\n",
    "print(\"Good batch structure:\")\n",
    "print(good_batch)"
   ]
  },
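  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The distinction matters when computing temporal-difference targets. The plain-NumPy sketch below (hypothetical rewards and Q-values) masks bootstrapping with `terminated`, not with `done`:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# TD(0) target sketch: bootstrapping is masked by `terminated`, NOT `done`,\n",
    "# because a truncated episode could have continued\n",
    "gamma = 0.99\n",
    "rew = np.array([1.0, 1.0, 1.0])\n",
    "q_next = np.array([5.0, 5.0, 5.0])  # hypothetical Q-values for obs_next\n",
    "terminated = np.array([False, True, False])  # natural end: no bootstrap\n",
    "truncated = np.array([False, False, True])  # time limit: still bootstrap\n",
    "\n",
    "td_target = rew + gamma * (1.0 - terminated.astype(np.float64)) * q_next\n",
    "print(td_target)  # the truncated transition still bootstraps"
   ]
  },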
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 3.3 Circular Queue Storage\n",
    "\n",
    "The buffer implements a circular queue: when it reaches maximum capacity, new data overwrites the oldest entries."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Create a small buffer to demonstrate circular behavior\n",
    "demo_buf = ReplayBuffer(size=5)\n",
    "\n",
    "print(\"Adding 3 transitions:\")\n",
    "for i in range(3):\n",
    "    demo_buf.add(\n",
    "        Batch(\n",
    "            obs=i,\n",
    "            act=i,\n",
    "            rew=float(i),\n",
    "            terminated=False,\n",
    "            truncated=False,\n",
    "            obs_next=i + 1,\n",
    "            info={},\n",
    "        )\n",
    "    )\n",
    "print(f\"Length: {len(demo_buf)}, Max: {demo_buf.maxsize}\")\n",
    "print(f\"Observations: {demo_buf.obs[: len(demo_buf)]}\")\n",
    "\n",
    "print(\"\\nAdding 5 more transitions (total 8, exceeds capacity 5):\")\n",
    "for i in range(3, 8):\n",
    "    demo_buf.add(\n",
    "        Batch(\n",
    "            obs=i,\n",
    "            act=i,\n",
    "            rew=float(i),\n",
    "            terminated=False,\n",
    "            truncated=False,\n",
    "            obs_next=i + 1,\n",
    "            info={},\n",
    "        )\n",
    "    )\n",
    "print(f\"Length: {len(demo_buf)}, Max: {demo_buf.maxsize}\")\n",
    "print(f\"Observations: {demo_buf.obs[: len(demo_buf)]}\")\n",
    "print(\"\\nNotice: First 3 transitions (0,1,2) were overwritten by (3,4,5)\")\n",
    "print(\"Buffer now contains: [3, 4, 5, 6, 7]\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 3.4 Batch-Compatible Operations\n",
    "\n",
    "Since ReplayBuffer extends Batch functionality, it supports standard indexing and slicing:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Indexing and slicing\n",
    "print(\"Last transition:\")\n",
    "print(demo_buf[-1])\n",
    "\n",
    "print(\"\\nLast 3 transitions:\")\n",
    "print(demo_buf[-3:])\n",
    "\n",
    "print(\"\\nSpecific indices [0, 2, 4]:\")\n",
    "print(demo_buf[np.array([0, 2, 4])])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 4. Trajectory Management\n",
    "\n",
    "A key distinguishing feature of ReplayBuffer is its automatic tracking of episode boundaries and metadata.\n",
    "\n",
    "### 4.1 Episode Tracking and Metadata\n",
    "\n",
    "The `add()` method returns four values that provide episode information:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Create a fresh buffer for trajectory demonstration\n",
    "traj_buf = ReplayBuffer(size=20)\n",
    "\n",
    "print(\"Episode 1: 4 steps, terminates naturally\")\n",
    "for i in range(4):\n",
    "    idx, ep_rew, ep_len, ep_start = traj_buf.add(\n",
    "        Batch(\n",
    "            obs=i,\n",
    "            act=i,\n",
    "            rew=float(i + 1),  # Rewards: 1, 2, 3, 4\n",
    "            terminated=i == 3,  # Last step terminates\n",
    "            truncated=False,\n",
    "            obs_next=i + 1,\n",
    "            info={},\n",
    "        )\n",
    "    )\n",
    "    print(f\"  Step {i}: idx={idx}, ep_rew={ep_rew}, ep_len={ep_len}, ep_start={ep_start}\")\n",
    "\n",
    "print(\"\\nNotice: Episode return (10.0) and length (4) only appear at the end!\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**Return Values Explained**:\n",
    "\n",
    "1. `idx`: Index where the transition was inserted (np.ndarray of shape (1,))\n",
    "2. `ep_rew`: Episode return, only non-zero when `done=True` (np.ndarray of shape (1,))\n",
    "3. `ep_len`: Episode length, only non-zero when `done=True` (np.ndarray of shape (1,))\n",
    "4. `ep_start`: Index where the episode started (np.ndarray of shape (1,))\n",
    "\n",
    "This automatic computation eliminates manual episode tracking during data collection."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Continue with Episode 2: 5 steps\n",
    "print(\"Episode 2: 5 steps, truncated (time limit)\")\n",
    "for i in range(4, 9):\n",
    "    idx, ep_rew, ep_len, ep_start = traj_buf.add(\n",
    "        Batch(\n",
    "            obs=i,\n",
    "            act=i,\n",
    "            rew=float(i + 1),\n",
    "            terminated=False,\n",
    "            truncated=i == 8,  # Last step truncated\n",
    "            obs_next=i + 1,\n",
    "            info={},\n",
    "        )\n",
    "    )\n",
    "    if i == 8:\n",
    "        print(\n",
    "            f\"  Final step: idx={idx}, ep_rew={ep_rew[0]:.1f}, ep_len={ep_len[0]}, ep_start={ep_start}\"\n",
    "        )\n",
    "\n",
    "# Episode 3: Ongoing (not finished)\n",
    "print(\"\\nEpisode 3: 3 steps, ongoing (not done)\")\n",
    "for i in range(9, 12):\n",
    "    idx, ep_rew, ep_len, ep_start = traj_buf.add(\n",
    "        Batch(\n",
    "            obs=i,\n",
    "            act=i,\n",
    "            rew=float(i + 1),\n",
    "            terminated=False,\n",
    "            truncated=False,  # Episode continues\n",
    "            obs_next=i + 1,\n",
    "            info={},\n",
    "        )\n",
    "    )\n",
    "    if i == 11:\n",
    "        print(\n",
    "            f\"  Latest step: idx={idx}, ep_rew={ep_rew}, ep_len={ep_len} (zeros because not done)\"\n",
    "        )\n",
    "\n",
    "print(f\"\\nBuffer state: {len(traj_buf)} transitions across 2 complete + 1 ongoing episode\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 4.2 Boundary Navigation: prev() and next()\n",
    "\n",
    "The buffer provides methods to navigate within episodes while respecting episode boundaries:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Examine the buffer structure\n",
    "print(\"Buffer contents:\")\n",
    "print(f\"Indices:    {np.arange(len(traj_buf))}\")\n",
    "print(f\"Obs:        {traj_buf.obs[: len(traj_buf)]}\")\n",
    "print(f\"Terminated: {traj_buf.terminated[: len(traj_buf)]}\")\n",
    "print(f\"Truncated:  {traj_buf.truncated[: len(traj_buf)]}\")\n",
    "print(f\"Done:       {traj_buf.done[: len(traj_buf)]}\")\n",
    "print(\"\\nEpisode boundaries: indices 3 (terminated) and 8 (truncated)\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# prev() returns the previous index within the same episode\n",
    "# It STOPS at episode boundaries\n",
    "test_indices = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11])\n",
    "prev_indices = traj_buf.prev(test_indices)\n",
    "\n",
    "print(\"prev() behavior:\")\n",
    "print(f\"Index:     {test_indices}\")\n",
    "print(f\"Prev:      {prev_indices}\")\n",
    "print(\"\\nObservations:\")\n",
    "print(\"- Index 0 stays at 0 (start of episode 1)\")\n",
    "print(\"- Index 4 stays at 4 (start of episode 2, can't go back to episode 1)\")\n",
    "print(\"- Index 9 stays at 9 (start of episode 3, can't go back to episode 2)\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# next() returns the next index within the same episode\n",
    "# It STOPS at episode boundaries\n",
    "next_indices = traj_buf.next(test_indices)\n",
    "\n",
    "print(\"next() behavior:\")\n",
    "print(f\"Index:     {test_indices}\")\n",
    "print(f\"Next:      {next_indices}\")\n",
    "print(\"\\nObservations:\")\n",
    "print(\"- Index 3 stays at 3 (end of episode 1, terminated)\")\n",
    "print(\"- Index 8 stays at 8 (end of episode 2, truncated)\")\n",
    "print(\"- Indices 9-11 advance normally (episode 3 ongoing)\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**Use Cases for prev() and next()**:\n",
    "\n",
    "These methods are essential for computing algorithmic quantities:\n",
    "- **N-step returns**: Use prev() to look back N steps within an episode\n",
    "- **GAE (Generalized Advantage Estimation)**: Navigate backwards through episodes\n",
    "- **Episode extraction**: Find episode start/end indices\n",
    "- **Temporal difference targets**: Ensure you don't bootstrap across episode boundaries"
   ]
  },
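  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a concrete sketch, the hypothetical helper below (not part of the Tianshou API) uses `next()` to accumulate an n-step return that stops at episode boundaries; `traj_buf` is the running example from above:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Hypothetical helper: an n-step return that cannot leak across episodes,\n",
    "# because next() stops at done flags\n",
    "def n_step_return(buf: ReplayBuffer, start: int, n: int, gamma: float) -> float:\n",
    "    ret, idx, discount = 0.0, start, 1.0\n",
    "    for _ in range(n):\n",
    "        ret += discount * float(buf.rew[idx])\n",
    "        nxt = int(buf.next(np.array([idx]))[0])\n",
    "        if nxt == idx:  # episode boundary reached: stop accumulating\n",
    "            break\n",
    "        idx, discount = nxt, discount * gamma\n",
    "    return ret\n",
    "\n",
    "\n",
    "# Episode 1 of traj_buf has rewards 1, 2, 3, 4 and terminates at index 3\n",
    "print(n_step_return(traj_buf, start=0, n=3, gamma=1.0))  # 1 + 2 + 3 = 6.0\n",
    "print(n_step_return(traj_buf, start=2, n=5, gamma=1.0))  # stops at the boundary: 3 + 4 = 7.0"
   ]
  },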
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 4.3 Identifying Unfinished Episodes\n",
    "\n",
    "The `unfinished_index()` method returns indices of ongoing episodes:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "unfinished = traj_buf.unfinished_index()\n",
    "print(f\"Unfinished episode indices: {unfinished}\")\n",
    "print(f\"Latest step of ongoing episode: obs={traj_buf.obs[unfinished[0]]}\")\n",
    "\n",
    "# After finishing episode 3\n",
    "traj_buf.add(\n",
    "    Batch(\n",
    "        obs=12,\n",
    "        act=12,\n",
    "        rew=13.0,\n",
    "        terminated=True,\n",
    "        truncated=False,\n",
    "        obs_next=13,\n",
    "        info={},\n",
    "    )\n",
    ")\n",
    "\n",
    "unfinished_after = traj_buf.unfinished_index()\n",
    "print(\"\\nAfter finishing episode 3:\")\n",
    "print(f\"Unfinished episodes: {unfinished_after} (empty array)\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 5. Sampling Strategies\n",
    "\n",
    "Efficient sampling is critical for RL training. The buffer provides several sampling methods and strategies.\n",
    "\n",
    "### 5.1 Basic Sampling"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Create a buffer with some data\n",
    "sample_buf = ReplayBuffer(size=100)\n",
    "for i in range(50):\n",
    "    sample_buf.add(\n",
    "        Batch(\n",
    "            obs=i,\n",
    "            act=i % 4,\n",
    "            rew=np.random.random(),\n",
    "            terminated=(i + 1) % 10 == 0,\n",
    "            truncated=False,\n",
    "            obs_next=i + 1,\n",
    "            info={},\n",
    "        )\n",
    "    )\n",
    "\n",
    "# Sample with batch_size\n",
    "batch, indices = sample_buf.sample(batch_size=8)\n",
    "print(f\"Sampled batch size: {len(batch)}\")\n",
    "print(f\"Sampled indices: {indices}\")\n",
    "print(f\"Sampled observations: {batch.obs}\")\n",
    "\n",
    "# batch_size=None: return all data in random order\n",
    "all_data, all_indices = sample_buf.sample(batch_size=None)\n",
    "print(f\"\\nSample all (batch_size=None): {len(all_data)} transitions\")\n",
    "\n",
    "# batch_size=0: return all data in buffer order\n",
    "ordered_data, ordered_indices = sample_buf.sample(batch_size=0)\n",
    "print(f\"Get all in order (batch_size=0): {len(ordered_data)} transitions\")\n",
    "print(f\"Indices in order: {ordered_indices[:10]}...\")  # Show first 10"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**Sampling Behavior Summary**:\n",
    "\n",
    "- `batch_size > 0`: Random sample of specified size\n",
    "- `batch_size = None`: All data in random order  \n",
    "- `batch_size = 0`: All data in insertion order\n",
    "- `batch_size < 0`: Empty array (edge case handling)"
   ]
  },
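  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "In an off-policy training loop, a fresh random minibatch is typically drawn for every gradient step. A schematic sketch using `sample_buf` from above (the policy update itself is omitted):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Schematic off-policy update loop: one random minibatch per gradient step\n",
    "num_gradient_steps = 5\n",
    "for step in range(num_gradient_steps):\n",
    "    minibatch, mb_indices = sample_buf.sample(batch_size=8)\n",
    "    # a real loop would call something like policy.update(minibatch) here\n",
    "    print(f\"step {step}: sampled indices {mb_indices}\")"
   ]
  },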
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 5.2 Frame Stacking\n",
    "\n",
    "The `stack_num` parameter enables automatic frame stacking, useful for RNN inputs or Atari-style environments where temporal context matters:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Create buffer with frame stacking\n",
    "stack_buf = ReplayBuffer(size=20, stack_num=4)\n",
    "\n",
    "# Add observations: 0, 1, 2, ..., 9\n",
    "for i in range(10):\n",
    "    stack_buf.add(\n",
    "        Batch(\n",
    "            obs=np.array([i]),  # Single frame\n",
    "            act=0,\n",
    "            rew=1.0,\n",
    "            terminated=i == 9,\n",
    "            truncated=False,\n",
    "            obs_next=np.array([i + 1]),\n",
    "            info={},\n",
    "        )\n",
    "    )\n",
    "\n",
    "# Get stacked frames for index 6\n",
    "# Should return [3, 4, 5, 6] (4 consecutive frames ending at 6)\n",
    "stacked = stack_buf.get(index=6, key=\"obs\")\n",
    "print(\"Frame stacking demo:\")\n",
    "print(\"Requested index: 6\")\n",
    "print(f\"Stacked frames shape: {stacked.shape}\")\n",
    "print(f\"Stacked frames: {stacked.flatten()}\")\n",
    "print(\"\\nExplanation: stack_num=4, so index 6 returns [obs[3], obs[4], obs[5], obs[6]]\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Demonstrate episode boundary handling with frame stacking\n",
    "boundary_buf = ReplayBuffer(size=20, stack_num=4)\n",
    "\n",
    "# Episode 1: indices 0-4\n",
    "for i in range(5):\n",
    "    boundary_buf.add(\n",
    "        Batch(\n",
    "            obs=np.array([i]),\n",
    "            act=0,\n",
    "            rew=1.0,\n",
    "            terminated=i == 4,\n",
    "            truncated=False,\n",
    "            obs_next=np.array([i + 1]),\n",
    "            info={},\n",
    "        )\n",
    "    )\n",
    "\n",
    "# Episode 2: indices 5-9\n",
    "for i in range(5, 10):\n",
    "    boundary_buf.add(\n",
    "        Batch(\n",
    "            obs=np.array([i]),\n",
    "            act=0,\n",
    "            rew=1.0,\n",
    "            terminated=i == 9,\n",
    "            truncated=False,\n",
    "            obs_next=np.array([i + 1]),\n",
    "            info={},\n",
    "        )\n",
    "    )\n",
    "\n",
    "# Try to get stacked frames at episode boundary\n",
    "boundary_stack = boundary_buf.get(index=6, key=\"obs\")  # Early in episode 2\n",
    "print(\"\\nFrame stacking at episode boundary:\")\n",
    "print(f\"Index 6 stacked frames: {boundary_stack.flatten()}\")\n",
    "print(\"Notice: Frames don't cross episode boundary (5,5,5,6 not 3,4,5,6)\")\n",
    "print(\"The buffer uses prev() internally, which respects episode boundaries\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**Frame Stacking Use Cases**:\n",
    "\n",
    "- **RNN/LSTM inputs**: Provide temporal context to recurrent networks\n",
    "- **Atari games**: Stack 4 frames to capture motion (as in DQN paper)\n",
    "- **Velocity estimation**: Multiple frames allow computing derivatives\n",
    "- **Partially observable environments**: Build up state estimates\n",
    "\n",
    "**Important Notes**:\n",
    "- Frame stacking respects episode boundaries (won't stack across episodes)\n",
    "- Set `sample_avail=True` to only sample indices where full stacks are available\n",
    "- `save_only_last_obs=True` saves memory in Atari-style setups"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 6. VectorReplayBuffer: Parallel Environment Support\n",
    "\n",
    "VectorReplayBuffer is essential for modern RL training with parallel environments. It maintains separate subbuffers for each environment while providing a unified interface.\n",
    "\n",
    "### 6.1 Motivation and Architecture\n",
    "\n",
    "When training with multiple parallel environments (e.g., 8 environments running simultaneously), we need:\n",
    "- **Per-environment episode tracking**: Each environment has its own episode boundaries\n",
    "- **Temporal ordering**: Preserve the sequence of events within each environment\n",
    "- **Unified sampling**: Sample uniformly across all environments for training\n",
    "\n",
    "```mermaid\n",
    "graph LR\n",
    "    E1[Env 1] --> B1[Subbuffer 1<br/>2500 capacity]\n",
    "    E2[Env 2] --> B2[Subbuffer 2<br/>2500 capacity]\n",
    "    E3[Env 3] --> B3[Subbuffer 3<br/>2500 capacity]\n",
    "    E4[Env 4] --> B4[Subbuffer 4<br/>2500 capacity]\n",
    "    \n",
    "    B1 --> VRB[VectorReplayBuffer<br/>Total: 10000<br/>Unified Sampling]\n",
    "    B2 --> VRB\n",
    "    B3 --> VRB\n",
    "    B4 --> VRB\n",
    "    \n",
    "    VRB --> Policy[Policy Training]\n",
    "    \n",
    "    style E1 fill:#e1f5ff\n",
    "    style E2 fill:#e1f5ff\n",
    "    style E3 fill:#e1f5ff\n",
    "    style E4 fill:#e1f5ff\n",
    "    style B1 fill:#fff4e1\n",
    "    style B2 fill:#fff4e1\n",
    "    style B3 fill:#fff4e1\n",
    "    style B4 fill:#fff4e1\n",
    "    style VRB fill:#ffe1f5\n",
    "    style Policy fill:#e8f5e1\n",
    "```"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Create VectorReplayBuffer for 4 parallel environments\n",
    "vec_buf = VectorReplayBuffer(\n",
    "    total_size=100,  # Total capacity across all subbuffers\n",
    "    buffer_num=4,  # Number of parallel environments\n",
    ")\n",
    "\n",
    "print(\"VectorReplayBuffer created:\")\n",
    "print(f\"Total size: {vec_buf.maxsize}\")\n",
    "print(f\"Number of subbuffers: {vec_buf.buffer_num}\")\n",
    "print(f\"Size per subbuffer: {vec_buf.maxsize // vec_buf.buffer_num}\")\n",
    "print(f\"Subbuffer edges: {vec_buf.subbuffer_edges}\")\n",
    "print(\"\\nSubbuffer edges define the boundary indices: [0, 25, 50, 75, 100]\")\n",
    "print(\"Subbuffer 0: indices 0-24, Subbuffer 1: indices 25-49, etc.\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 6.2 The buffer_ids Parameter\n",
    "\n",
    "This is one of the most confusing aspects for new users. The `buffer_ids` parameter specifies which subbuffer each transition belongs to."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Simulate data from 4 parallel environments\n",
    "# Each environment produces one transition\n",
    "parallel_batch = Batch(\n",
    "    obs=np.array([[0.1, 0.2], [1.1, 1.2], [2.1, 2.2], [3.1, 3.2]]),  # 4 observations\n",
    "    act=np.array([0, 1, 0, 1]),  # 4 actions\n",
    "    rew=np.array([1.0, 2.0, 3.0, 4.0]),  # 4 rewards\n",
    "    terminated=np.array([False, False, False, False]),\n",
    "    truncated=np.array([False, False, False, False]),\n",
    "    obs_next=np.array([[0.2, 0.3], [1.2, 1.3], [2.2, 2.3], [3.2, 3.3]]),\n",
    "    info=np.array([{}, {}, {}, {}], dtype=object),\n",
    ")\n",
    "\n",
    "print(\"Parallel batch shape:\", parallel_batch.obs.shape)\n",
    "print(\"This represents 4 transitions, one from each environment\")\n",
    "\n",
    "# Add with buffer_ids specifying which subbuffer each transition goes to\n",
    "indices, ep_rews, ep_lens, ep_starts = vec_buf.add(\n",
    "    parallel_batch,\n",
    "    buffer_ids=[0, 1, 2, 3],  # Transition 0→Subbuf 0, 1→Subbuf 1, etc.\n",
    ")\n",
    "\n",
    "print(f\"\\nAdded to indices: {indices}\")\n",
    "print(\"Notice: Indices are in different subbuffers:\")\n",
    "print(f\"  Index {indices[0]} in subbuffer 0 (range 0-24)\")\n",
    "print(f\"  Index {indices[1]} in subbuffer 1 (range 25-49)\")\n",
    "print(f\"  Index {indices[2]} in subbuffer 2 (range 50-74)\")\n",
    "print(f\"  Index {indices[3]} in subbuffer 3 (range 75-99)\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Add more data to demonstrate buffer_ids\n",
    "# Environments don't always produce data in order 0,1,2,3\n",
    "# For example, if only environments 1 and 3 are ready:\n",
    "partial_batch = Batch(\n",
    "    obs=np.array([[1.2, 1.3], [3.2, 3.3]]),  # Only 2 observations\n",
    "    act=np.array([0, 1]),\n",
    "    rew=np.array([2.5, 4.5]),\n",
    "    terminated=np.array([False, False]),\n",
    "    truncated=np.array([False, False]),\n",
    "    obs_next=np.array([[1.3, 1.4], [3.3, 3.4]]),\n",
    "    info=np.array([{}, {}], dtype=object),\n",
    ")\n",
    "\n",
    "# Only environments 1 and 3 produced data\n",
    "indices2, _, _, _ = vec_buf.add(\n",
    "    partial_batch,\n",
    "    buffer_ids=[1, 3],  # Only these two subbuffers receive data\n",
    ")\n",
    "\n",
    "print(\"Added partial batch (only envs 1 and 3):\")\n",
    "print(f\"Indices: {indices2}\")\n",
    "print(f\"Subbuffer 1 received data at index {indices2[0]}\")\n",
    "print(f\"Subbuffer 3 received data at index {indices2[1]}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**Important: buffer_ids Requirements**:\n",
    "\n",
    "For `VectorReplayBuffer`:\n",
    "- `buffer_ids` length must match batch size\n",
    "- Values must be in range [0, buffer_num)\n",
    "- Can be partial (not all environments at once)\n",
    "\n",
    "For regular `ReplayBuffer`:\n",
    "- If `buffer_ids` is not None, it must be [0]\n",
    "- Batch must have shape (1, data_length)\n",
    "- This is for API compatibility with VectorReplayBuffer"
   ]
  },
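  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A short sketch of the compatibility rule for the plain `ReplayBuffer` (values arbitrary): with `buffer_ids=[0]`, the batch must carry a leading dimension of 1:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# API compatibility: a plain ReplayBuffer accepts buffer_ids=[0]\n",
    "# as long as the batch has a leading dimension of 1\n",
    "compat_buf = ReplayBuffer(size=10)\n",
    "one_step = Batch(\n",
    "    obs=np.array([[0.1, 0.2]]),  # shape (1, 2)\n",
    "    act=np.array([0]),\n",
    "    rew=np.array([1.0]),\n",
    "    terminated=np.array([False]),\n",
    "    truncated=np.array([False]),\n",
    "    obs_next=np.array([[0.2, 0.3]]),\n",
    "    info=np.array([{}], dtype=object),\n",
    ")\n",
    "idx, _, _, _ = compat_buf.add(one_step, buffer_ids=[0])\n",
    "print(f\"Inserted at index {idx}, buffer length {len(compat_buf)}\")"
   ]
  },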
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 6.3 Subbuffer Edges and Episode Handling\n",
    "\n",
    "Subbuffer edges prevent episodes from spanning across subbuffers, ensuring data from different environments doesn't get mixed:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# The subbuffer_edges property defines boundaries\n",
    "print(f\"Subbuffer edges: {vec_buf.subbuffer_edges}\")\n",
    "print(\"\\nThis creates 4 subbuffers:\")\n",
    "for i in range(vec_buf.buffer_num):\n",
    "    start = vec_buf.subbuffer_edges[i]\n",
    "    end = vec_buf.subbuffer_edges[i + 1]\n",
    "    print(f\"Subbuffer {i}: indices [{start}, {end})\")\n",
    "\n",
    "# Episodes cannot cross these boundaries\n",
    "# prev() and next() respect subbuffer edges just like episode boundaries\n",
    "test_idx = np.array([24, 25, 49, 50])  # At subbuffer edges\n",
    "prev_result = vec_buf.prev(test_idx)\n",
    "next_result = vec_buf.next(test_idx)\n",
    "\n",
    "print(\"\\nBoundary navigation test:\")\n",
    "print(f\"Indices:  {test_idx}\")\n",
    "print(f\"prev():   {prev_result}\")\n",
    "print(f\"next():   {next_result}\")\n",
    "print(\"\\nNotice: prev/next don't cross subbuffer boundaries\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 6.4 Sampling from VectorReplayBuffer\n",
    "\n",
    "Sampling is uniform over all stored transitions, so each subbuffer contributes in proportion to its current fill level:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Add more data to have enough for sampling\n",
    "for _step in range(10):\n",
    "    batch = Batch(\n",
    "        obs=np.random.randn(4, 2),\n",
    "        act=np.random.randint(0, 2, size=4),\n",
    "        rew=np.random.random(4),\n",
    "        terminated=np.zeros(4, dtype=bool),\n",
    "        truncated=np.zeros(4, dtype=bool),\n",
    "        obs_next=np.random.randn(4, 2),\n",
    "        info=np.array([{}] * 4, dtype=object),\n",
    "    )\n",
    "    vec_buf.add(batch, buffer_ids=[0, 1, 2, 3])\n",
    "\n",
    "# Sample batch\n",
    "sampled, indices = vec_buf.sample(batch_size=16)\n",
    "print(f\"Sampled {len(sampled)} transitions\")\n",
    "print(f\"Sample indices (from different subbuffers): {indices}\")\n",
    "print(\"\\nNotice indices span across all subbuffer ranges\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 7. Specialized Buffer Variants\n",
    "\n",
    "### 7.1 PrioritizedReplayBuffer\n",
    "\n",
    "Implements prioritized experience replay where transitions are sampled based on their TD-error magnitudes:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": "# Create prioritized buffer\nprio_buf = PrioritizedReplayBuffer(\n    size=100,\n    alpha=0.6,  # Prioritization exponent (0=uniform, 1=fully prioritized)\n    beta=0.4,  # Importance sampling correction (annealed to 1)\n)\n\n# Add some transitions\nfor i in range(20):\n    prio_buf.add(\n        Batch(\n            obs=np.array([i]),\n            act=i % 4,\n            rew=np.random.random(),\n            terminated=False,\n            truncated=False,\n            obs_next=np.array([i + 1]),\n            info={},\n        )\n    )\n\n# Sample returns batch and indices\n# Importance weights are INSIDE the batch as batch.weight\nbatch, indices = prio_buf.sample(batch_size=8)\nprint(f\"Sampled batch size: {len(batch)}\")\nprint(f\"Indices: {indices}\")\nprint(f\"Importance weights (batch.weight): {batch.weight}\")\nprint(\"\\nWeights are stored in batch.weight and compensate for biased sampling\")"
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# After computing TD-errors from the sampled batch, update priorities\n",
    "# In practice, these would be actual TD-errors: |Q(s,a) - (r + γ*max Q(s',a'))|\n",
    "fake_td_errors = np.random.random(len(indices)) * 10  # Simulated TD-errors\n",
    "\n",
    "# Update priorities (higher TD-error = higher priority)\n",
    "prio_buf.update_weight(indices, fake_td_errors)\n",
    "\n",
    "print(\"Updated priorities based on TD-errors\")\n",
    "print(\"Transitions with higher TD-errors will be sampled more frequently\")\n",
    "\n",
    "# Demonstrate beta annealing\n",
    "new_beta = 0.6\n",
    "prio_buf.set_beta(new_beta)  # Increase beta over training\n",
    "print(f\"\\nAnnealed beta to: {new_beta}\")\n",
    "print(\"Beta typically starts at 0.4 and anneals to 1.0 over training\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**PrioritizedReplayBuffer Use Cases**:\n",
    "- Rainbow DQN and variants\n",
    "- Any algorithm where some transitions are more \"surprising\" and valuable\n",
    "- Environments with rare but important events\n",
    "\n",
    "**Key Parameters**:\n",
    "- `alpha`: Controls how much prioritization affects sampling (0=uniform, 1=fully proportional to priority)\n",
    "- `beta`: Importance sampling correction to remain unbiased (anneal from ~0.4 to 1.0)"
   ]
  },
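  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# The alpha/beta mechanics above, sketched with plain NumPy.\n",
    "# Illustrative math only, not Tianshou's internal sum-tree implementation:\n",
    "# P(i) = p_i^alpha / sum_j p_j^alpha, w_i = (N * P(i))^(-beta), normalized.\n",
    "priorities = np.array([1.0, 2.0, 4.0, 8.0])\n",
    "alpha, beta = 0.6, 0.4\n",
    "probs = priorities**alpha / np.sum(priorities**alpha)\n",
    "weights = (len(priorities) * probs) ** (-beta)\n",
    "weights = weights / weights.max()\n",
    "print(f\"Sampling probabilities: {probs}\")\n",
    "print(f\"Importance weights:     {weights}\")"
   ]
  },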
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 7.2 Other Specialized Buffers\n",
    "\n",
    "**CachedReplayBuffer**: Maintains a primary buffer plus auxiliary caches\n",
    "- Use case: Imitation learning where you want separate expert and agent buffers\n",
    "- Example: GAIL (Generative Adversarial Imitation Learning)\n",
    "- Allows different sampling ratios from different sources\n",
    "\n",
    "**HERReplayBuffer**: Hindsight Experience Replay for goal-conditioned tasks\n",
    "- Use case: Sparse reward robotics tasks\n",
    "- Relabels failed episodes with achieved goals as if they were intended\n",
    "- Dramatically improves learning in goal-reaching tasks\n",
    "- See the HER documentation for detailed examples\n",
    "\n",
    "For detailed usage of these specialized buffers, refer to the Tianshou API documentation and algorithm-specific tutorials."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 8. Serialization and Persistence\n",
    "\n",
    "Buffers support multiple serialization formats for saving and loading data.\n",
    "\n",
    "### 8.1 Pickle Serialization\n",
    "\n",
    "The simplest method, preserving all buffer state including trajectory metadata:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Create and populate a buffer\n",
    "save_buf = ReplayBuffer(size=50)\n",
    "for i in range(30):\n",
    "    save_buf.add(\n",
    "        Batch(\n",
    "            obs=np.array([i, i + 1]),\n",
    "            act=i % 4,\n",
    "            rew=float(i),\n",
    "            terminated=(i + 1) % 10 == 0,\n",
    "            truncated=False,\n",
    "            obs_next=np.array([i + 1, i + 2]),\n",
    "            info={\"step\": i},\n",
    "        )\n",
    "    )\n",
    "\n",
    "print(f\"Original buffer: {len(save_buf)} transitions\")\n",
    "\n",
    "# Serialize with pickle\n",
    "pickled_data = pickle.dumps(save_buf)\n",
    "print(f\"Serialized size: {len(pickled_data)} bytes\")\n",
    "\n",
    "# Deserialize\n",
    "loaded_buf = pickle.loads(pickled_data)\n",
    "print(f\"Loaded buffer: {len(loaded_buf)} transitions\")\n",
    "print(f\"Data preserved: obs[0] = {loaded_buf.obs[0]}\")\n",
    "print(f\"Metadata preserved: info[0] = {loaded_buf.info[0]}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 8.2 HDF5 Serialization\n",
    "\n",
    "HDF5 is recommended for large datasets and cross-platform compatibility:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Save to HDF5\n",
    "with tempfile.NamedTemporaryFile(suffix=\".hdf5\", delete=False) as tmp:\n",
    "    hdf5_path = tmp.name\n",
    "\n",
    "save_buf.save_hdf5(hdf5_path, compression=\"gzip\")\n",
    "print(f\"Saved to HDF5: {hdf5_path}\")\n",
    "\n",
    "# Load from HDF5\n",
    "loaded_hdf5_buf = ReplayBuffer.load_hdf5(hdf5_path)\n",
    "print(f\"Loaded from HDF5: {len(loaded_hdf5_buf)} transitions\")\n",
    "print(f\"Data matches: {np.array_equal(save_buf.obs, loaded_hdf5_buf.obs)}\")\n",
    "\n",
    "# Clean up\n",
    "import os\n",
    "\n",
    "os.unlink(hdf5_path)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**When to Use HDF5**:\n",
    "- Large datasets (> 1GB)\n",
    "- Offline RL with pre-collected data\n",
    "- Sharing data across platforms\n",
    "- Need for compression\n",
    "- Integration with external tools (many scientific tools read HDF5)\n",
    "\n",
    "**When to Use Pickle**:\n",
    "- Quick saves during development\n",
    "- Small buffers\n",
    "- Python-only workflow\n",
    "- Simpler serialization needs"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 8.3 Loading from Raw Data with from_data()\n",
    "\n",
    "For offline RL, you can create a buffer from raw arrays:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Simulate pre-collected offline dataset\n",
    "import h5py\n",
    "\n",
    "# Create temporary HDF5 file with raw data\n",
    "with tempfile.NamedTemporaryFile(suffix=\".hdf5\", delete=False) as tmp:\n",
    "    offline_path = tmp.name\n",
    "\n",
    "with h5py.File(offline_path, \"w\") as f:\n",
    "    # Create datasets\n",
    "    n = 100\n",
    "    f.create_dataset(\"obs\", data=np.random.randn(n, 4))\n",
    "    f.create_dataset(\"act\", data=np.random.randint(0, 2, n))\n",
    "    f.create_dataset(\"rew\", data=np.random.randn(n))\n",
    "    terminated = np.random.random(n) < 0.1\n",
    "    f.create_dataset(\"terminated\", data=terminated)\n",
    "    f.create_dataset(\"truncated\", data=np.zeros(n, dtype=bool))\n",
    "    f.create_dataset(\"done\", data=terminated)  # done = terminated OR truncated\n",
    "    f.create_dataset(\"obs_next\", data=np.random.randn(n, 4))\n",
    "\n",
    "# Load into buffer\n",
    "with h5py.File(offline_path, \"r\") as f:\n",
    "    offline_buf = ReplayBuffer.from_data(\n",
    "        obs=f[\"obs\"],\n",
    "        act=f[\"act\"],\n",
    "        rew=f[\"rew\"],\n",
    "        terminated=f[\"terminated\"],\n",
    "        truncated=f[\"truncated\"],\n",
    "        done=f[\"done\"],\n",
    "        obs_next=f[\"obs_next\"],\n",
    "    )\n",
    "\n",
    "print(f\"Loaded offline dataset: {len(offline_buf)} transitions\")\n",
    "print(f\"Observation shape: {offline_buf.obs.shape}\")\n",
    "\n",
    "# Clean up\n",
    "os.unlink(offline_path)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "This is the standard approach for offline RL where you have pre-collected datasets from other sources."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 9. Integration with the RL Pipeline\n",
    "\n",
    "Understanding how buffers integrate with other Tianshou components is essential for effective usage.\n",
    "\n",
    "### 9.1 Data Flow in RL Training\n",
    "\n",
    "```mermaid\n",
    "graph LR\n",
    "    ENV[Vectorized<br/>Environments] -->|observations| COL[Collector]\n",
    "    POL[Policy] -->|actions| COL\n",
    "    COL -->|transitions| BUF[Buffer]\n",
    "    BUF -->|sampled batches| POL\n",
    "    POL -->|forward pass| ALG[Algorithm]\n",
    "    ALG -->|loss & gradients| POL\n",
    "    \n",
    "    style ENV fill:#e1f5ff\n",
    "    style COL fill:#fff4e1\n",
    "    style BUF fill:#ffe1f5\n",
    "    style POL fill:#e8f5e1\n",
    "    style ALG fill:#f5e1e1\n",
    "```\n",
    "\n",
    "### 9.2 Typical Training Loop Pattern\n",
    "\n",
    "Here's how buffers are typically used in a training loop:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Pseudocode for typical RL training loop\n",
    "# (This is illustrative; actual implementation would use Trainer)\n",
    "\n",
    "\n",
    "def training_loop_pseudocode():\n",
    "    \"\"\"\n",
    "    Illustrative training loop showing buffer integration.\n",
    "\n",
    "    In practice, use Tianshou's Trainer class which handles this.\n",
    "    \"\"\"\n",
    "    # Setup (illustration only)\n",
    "    # env = make_vectorized_env(num_envs=8)\n",
    "    # policy = make_policy()\n",
    "    # buffer = VectorReplayBuffer(total_size=100000, buffer_num=8)\n",
    "    # collector = Collector(policy, env, buffer)\n",
    "\n",
    "    # Training loop\n",
    "    # for epoch in range(num_epochs):\n",
    "    #     # 1. Collect data from environments\n",
    "    #     collect_result = collector.collect(n_step=1000)\n",
    "    #     # Collector automatically adds transitions to buffer with correct buffer_ids\n",
    "    #\n",
    "    #     # 2. Train on multiple batches\n",
    "    #     for _ in range(update_per_collect):\n",
    "    #         # Sample batch from buffer\n",
    "    #         batch, indices = buffer.sample(batch_size=256)\n",
    "    #\n",
    "    #         # Compute loss and update policy\n",
    "    #         loss = policy.learn(batch)\n",
    "    #\n",
    "    #         # For prioritized buffers, update priorities\n",
    "    #         # if isinstance(buffer, PrioritizedReplayBuffer):\n",
    "    #         #     buffer.update_weight(indices, td_errors)\n",
    "\n",
    "    print(\"This pseudocode illustrates the buffer's role:\")\n",
    "    print(\"1. Collector fills buffer from environment interaction\")\n",
    "    print(\"2. Buffer provides random samples for training\")\n",
    "    print(\"3. Policy learns from sampled batches\")\n",
    "    print(\"\\nIn practice, use Tianshou's Trainer for this workflow\")\n",
    "\n",
    "\n",
    "training_loop_pseudocode()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 9.3 Collector Integration\n",
    "\n",
    "The Collector class handles the complexity of:\n",
    "- Calling policy to get actions\n",
    "- Stepping environments\n",
    "- Adding transitions to buffer with correct buffer_ids\n",
    "- Tracking episode statistics\n",
    "\n",
    "When you create a Collector, you pass it a buffer, and it automatically:\n",
    "- Uses VectorReplayBuffer for vectorized environments\n",
    "- Sets buffer_ids based on which environments are ready\n",
    "- Handles episode resets and boundary tracking\n",
    "\n",
    "See the Collector tutorial for detailed examples of this integration."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 10. Advanced Topics and Edge Cases\n",
    "\n",
    "### 10.1 Buffer Overflow and Episode Boundaries\n",
    "\n",
    "What happens when the buffer fills up mid-episode?"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Small buffer to demonstrate overflow\n",
    "overflow_buf = ReplayBuffer(size=8)\n",
    "\n",
    "# Add a long episode (12 steps, buffer size is only 8)\n",
    "print(\"Adding 12-step episode to buffer with size 8:\")\n",
    "for i in range(12):\n",
    "    idx, ep_rew, ep_len, ep_start = overflow_buf.add(\n",
    "        Batch(\n",
    "            obs=i,\n",
    "            act=0,\n",
    "            rew=1.0,\n",
    "            terminated=i == 11,\n",
    "            truncated=False,\n",
    "            obs_next=i + 1,\n",
    "            info={},\n",
    "        )\n",
    "    )\n",
    "    if i in [7, 11]:\n",
    "        print(f\"  Step {i}: idx={idx}, buffer_len={len(overflow_buf)}\")\n",
    "\n",
    "print(\"\\nFinal buffer contents (most recent 8 steps, in ring-buffer index order):\")\n",
    "print(f\"Observations: {overflow_buf.obs[: len(overflow_buf)]}\")\n",
    "print(f\"Episode return: {ep_rew[0]} (sum of all 12 steps, tracked correctly!)\")\n",
    "print(\"\\nNote: Buffer overwrote old data but episode statistics are still correct\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**Important**: Episode returns and lengths are tracked internally and remain correct even when the episode spans buffer overflows. The buffer maintains `_ep_return`, `_ep_len`, and `_ep_start_idx` to track ongoing episodes."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 10.2 Episodes Wrapping Within a Subbuffer\n",
    "\n",
    "In VectorReplayBuffer, an episode never crosses into another subbuffer, but it can wrap around the circular boundary of its own subbuffer:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Create small VectorReplayBuffer to demonstrate edge crossing\n",
    "edge_buf = VectorReplayBuffer(total_size=20, buffer_num=2)  # 10 per subbuffer\n",
    "\n",
    "print(f\"Subbuffer edges: {edge_buf.subbuffer_edges}\")\n",
    "print(\"Subbuffer 0: indices 0-9, Subbuffer 1: indices 10-19\\n\")\n",
    "\n",
    "# Fill subbuffer 0 with 12 steps (wraps around since capacity is 10)\n",
    "for i in range(12):\n",
    "    batch = Batch(\n",
    "        obs=np.array([[i]]),\n",
    "        act=np.array([0]),\n",
    "        rew=np.array([1.0]),\n",
    "        terminated=np.array([i == 11]),\n",
    "        truncated=np.array([False]),\n",
    "        obs_next=np.array([[i + 1]]),\n",
    "        info=np.array([{}], dtype=object),\n",
    "    )\n",
    "    idx, _, _, _ = edge_buf.add(batch, buffer_ids=[0])\n",
    "    if i >= 10:\n",
    "        print(f\"Step {i} added at index {idx[0]} (wrapped around in subbuffer 0)\")\n",
    "\n",
    "# get_buffer_indices handles this correctly\n",
    "episode_indices = edge_buf.get_buffer_indices(start=8, stop=2)  # Crosses edge\n",
    "print(f\"\\nEpisode spanning edge (from 8 to 1): {episode_indices}\")\n",
    "print(\"Correctly retrieves [8, 9, 0, 1] within subbuffer 0\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 10.3 ignore_obs_next Memory Optimization\n",
    "\n",
    "For memory-constrained scenarios, you can avoid storing obs_next:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Buffer that doesn't store obs_next\n",
    "memory_buf = ReplayBuffer(size=10, ignore_obs_next=True)\n",
    "\n",
    "# Add transitions (obs_next is ignored)\n",
    "for i in range(5):\n",
    "    memory_buf.add(\n",
    "        Batch(\n",
    "            obs=np.array([i, i + 1]),\n",
    "            act=i,\n",
    "            rew=1.0,\n",
    "            terminated=False,\n",
    "            truncated=False,\n",
    "            obs_next=np.array([i + 1, i + 2]),  # Provided but not stored\n",
    "            info={},\n",
    "        )\n",
    "    )\n",
    "\n",
    "# When sampling, obs_next is reconstructed from next obs\n",
    "sample, _ = memory_buf.sample(batch_size=1)\n",
    "print(f\"Sampled obs: {sample.obs}\")\n",
    "print(f\"Sampled obs_next: {sample.obs_next}\")\n",
    "print(\"\\nobs_next was reconstructed, not stored directly\")\n",
    "print(\"This saves memory at the cost of slightly more complex retrieval\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "This is particularly useful for Atari environments with large observation spaces (84x84x4 frames)."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 11. Surprising Behaviors and Gotchas\n",
    "\n",
    "### 11.1 Most Common Mistake: buffer_ids Confusion\n",
    "\n",
    "The buffer_ids parameter is the most common source of errors:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# COMMON ERROR 1: Forgetting buffer_ids with VectorReplayBuffer\n",
    "vec_demo = VectorReplayBuffer(total_size=100, buffer_num=4)\n",
    "\n",
    "parallel_data = Batch(\n",
    "    obs=np.random.randn(4, 2),\n",
    "    act=np.array([0, 1, 0, 1]),\n",
    "    rew=np.array([1.0, 2.0, 3.0, 4.0]),\n",
    "    terminated=np.array([False, False, False, False]),\n",
    "    truncated=np.array([False, False, False, False]),\n",
    "    obs_next=np.random.randn(4, 2),\n",
    "    info=np.array([{}, {}, {}, {}], dtype=object),\n",
    ")\n",
    "\n",
    "# RISKY: Omitting buffer_ids (defaults to [0, 1, 2, 3], which happens to be correct here)\n",
    "# With partial data from only some environments, the default would be wrong\n",
    "vec_demo.add(parallel_data)  # Works only because the batch covers all 4 envs\n",
    "\n",
    "# CORRECT: Always explicit\n",
    "vec_demo.add(parallel_data, buffer_ids=[0, 1, 2, 3])\n",
    "print(\"Always specify buffer_ids explicitly for clarity\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# COMMON ERROR 2: Shape mismatch with buffer_ids\n",
    "try:\n",
    "    # Trying to add 2 transitions but specifying 4 buffer_ids\n",
    "    wrong_batch = Batch(\n",
    "        obs=np.random.randn(2, 2),  # Only 2 transitions!\n",
    "        act=np.array([0, 1]),\n",
    "        rew=np.array([1.0, 2.0]),\n",
    "        terminated=np.array([False, False]),\n",
    "        truncated=np.array([False, False]),\n",
    "        obs_next=np.random.randn(2, 2),\n",
    "        info=np.array([{}, {}], dtype=object),\n",
    "    )\n",
    "    vec_demo.add(wrong_batch, buffer_ids=[0, 1, 2, 3])  # MISMATCH!\n",
    "except (IndexError, ValueError) as e:\n",
    "    print(f\"Error caught: {type(e).__name__}\")\n",
    "    print(\"Lesson: buffer_ids length must match batch size\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 11.2 Done Flag Confusion\n",
    "\n",
    "Never manually set the `done` flag:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# WRONG: Manually setting done\n",
    "wrong_batch = Batch(\n",
    "    obs=1,\n",
    "    act=0,\n",
    "    rew=1.0,\n",
    "    terminated=True,\n",
    "    truncated=False,\n",
    "    # done=True,  # DON'T DO THIS! It will be overwritten anyway\n",
    "    obs_next=2,\n",
    "    info={},\n",
    ")\n",
    "\n",
    "# CORRECT: Only set terminated and truncated\n",
    "# done is automatically computed as (terminated OR truncated)\n",
    "correct_batch = Batch(\n",
    "    obs=1,\n",
    "    act=0,\n",
    "    rew=1.0,\n",
    "    terminated=True,  # Episode ended naturally\n",
    "    truncated=False,  # Not cut off\n",
    "    obs_next=2,\n",
    "    info={},\n",
    ")\n",
    "\n",
    "demo = ReplayBuffer(size=10)\n",
    "demo.add(correct_batch)\n",
    "print(f\"Terminated: {demo.terminated[0]}\")\n",
    "print(f\"Truncated: {demo.truncated[0]}\")\n",
    "print(f\"Done (auto-computed): {demo.done[0]}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 11.3 Sampling from Empty or Near-Empty Buffers"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Edge case: Sampling more than available\n",
    "small_buf = ReplayBuffer(size=100)\n",
    "for i in range(5):  # Only 5 transitions\n",
    "    small_buf.add(\n",
    "        Batch(obs=i, act=0, rew=1.0, terminated=False, truncated=False, obs_next=i + 1, info={})\n",
    "    )\n",
    "\n",
    "# Request 20 but only 5 available - samples with replacement\n",
    "batch, indices = small_buf.sample(batch_size=20)\n",
    "print(f\"Requested 20, buffer has {len(small_buf)}, got {len(batch)}\")\n",
    "print(f\"Indices: {indices}\")\n",
    "print(\"Notice: Some indices repeat (sampling with replacement)\")\n",
    "\n",
    "# Defensive pattern: Check buffer size\n",
    "if len(small_buf) >= 128:\n",
    "    batch, _ = small_buf.sample(128)\n",
    "else:\n",
    "    print(f\"Buffer has {len(small_buf)} < 128, waiting for more data\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 11.4 Frame Stacking Valid Indices\n",
    "\n",
    "With stack_num > 1, not all indices are valid for sampling:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# With frame stacking, early indices can't form complete stacks\n",
    "stack_demo = ReplayBuffer(size=20, stack_num=4, sample_avail=True)\n",
    "\n",
    "for i in range(10):\n",
    "    stack_demo.add(\n",
    "        Batch(\n",
    "            obs=np.array([i]),\n",
    "            act=0,\n",
    "            rew=1.0,\n",
    "            terminated=i == 9,\n",
    "            truncated=False,\n",
    "            obs_next=np.array([i + 1]),\n",
    "            info={},\n",
    "        )\n",
    "    )\n",
    "\n",
    "# With sample_avail=True, only valid indices are sampled\n",
    "sampled, indices = stack_demo.sample(batch_size=5)\n",
    "print(f\"Sampled indices with stack_num=4, sample_avail=True: {indices}\")\n",
    "print(\"All indices >= 3 (can form complete 4-frame stacks)\")\n",
    "\n",
    "# Without sample_avail, any index can be sampled (may have incomplete stacks)\n",
    "stack_demo2 = ReplayBuffer(size=20, stack_num=4, sample_avail=False)\n",
    "for i in range(10):\n",
    "    stack_demo2.add(\n",
    "        Batch(\n",
    "            obs=np.array([i]),\n",
    "            act=0,\n",
    "            rew=1.0,\n",
    "            terminated=False,\n",
    "            truncated=False,\n",
    "            obs_next=np.array([i + 1]),\n",
    "            info={},\n",
    "        )\n",
    "    )\n",
    "\n",
    "sampled2, indices2 = stack_demo2.sample(batch_size=5)\n",
    "print(f\"\\nSampled indices with sample_avail=False: {indices2}\")\n",
    "print(\"May include indices < 3 (incomplete stacks repeated from boundary)\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 12. Best Practices\n",
    "\n",
    "### 12.1 Choosing the Right Buffer\n",
    "\n",
    "**Decision Tree**:\n",
    "\n",
    "1. Are you using parallel environments?\n",
    "   - Yes → Use `VectorReplayBuffer`\n",
    "   - No → Continue to 2\n",
    "\n",
    "2. Do you need prioritized experience replay?\n",
    "   - Yes → Use `PrioritizedReplayBuffer` or `PrioritizedVectorReplayBuffer`\n",
    "   - No → Continue to 3\n",
    "\n",
    "3. Is it goal-conditioned RL with sparse rewards?\n",
    "   - Yes → Use `HERReplayBuffer` or `HERVectorReplayBuffer`\n",
    "   - No → Continue to 4\n",
    "\n",
    "4. Do you need separate expert and agent buffers?\n",
    "   - Yes → Use `CachedReplayBuffer`\n",
    "   - No → Use `ReplayBuffer` (single env) or `VectorReplayBuffer` (standard choice)\n",
    "\n",
    "**Most Common Setup**: `VectorReplayBuffer` for production training"
   ]
  },
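  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# The decision tree above as constructor calls for the most common choices\n",
    "# (all three classes were imported at the top of this notebook):\n",
    "single_env_buf = ReplayBuffer(size=10_000)\n",
    "parallel_buf = VectorReplayBuffer(total_size=100_000, buffer_num=8)\n",
    "prioritized_buf = PrioritizedReplayBuffer(size=100_000, alpha=0.6, beta=0.4)\n",
    "for buf in (single_env_buf, parallel_buf, prioritized_buf):\n",
    "    print(f\"{type(buf).__name__}: maxsize={buf.maxsize}\")"
   ]
  },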
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 12.2 Buffer Sizing Guidelines\n",
    "\n",
    "**Rule of Thumb by Domain**:\n",
    "\n",
    "- **Atari games**: 1,000,000 transitions (1e6)\n",
    "- **Continuous control (MuJoCo)**: 100,000-1,000,000 (1e5-1e6)\n",
    "- **Robotics**: 100,000-500,000 (1e5-5e5)\n",
    "- **Simple environments (CartPole)**: 10,000-50,000 (1e4-5e4)\n",
    "\n",
    "**Factors to Consider**:\n",
    "- Available RAM (each transition ~observation_size * 2 + metadata)\n",
    "- Training time vs sample efficiency tradeoff\n",
    "- Algorithm requirements (some need larger buffers)\n",
    "\n",
    "**Memory Estimation**:\n",
    "```python\n",
    "# For environments with observation shape (84, 84, 4) (Atari, uint8 pixels):\n",
    "# Each transition: 2 * 84 * 84 * 4 bytes (obs + obs_next) + ~100 bytes overhead\n",
    "# = ~56KB per transition\n",
    "# 1M transitions = ~56GB (use ignore_obs_next to halve this!)\n",
    "```"
   ]
  },
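  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# A small (hypothetical) helper that makes the estimate above concrete.\n",
    "# Assumes uint8 observations and that both obs and obs_next are stored.\n",
    "def estimate_buffer_memory_gb(obs_shape, n_transitions, bytes_per_elem=1, overhead_bytes=100):\n",
    "    obs_bytes = bytes_per_elem\n",
    "    for dim in obs_shape:\n",
    "        obs_bytes *= dim\n",
    "    per_transition = 2 * obs_bytes + overhead_bytes  # obs + obs_next + metadata\n",
    "    return per_transition * n_transitions / 1e9\n",
    "\n",
    "\n",
    "print(f\"Atari, 1M transitions: ~{estimate_buffer_memory_gb((84, 84, 4), 1_000_000):.1f} GB\")\n",
    "print(f\"With ignore_obs_next:  roughly half of that\")"
   ]
  },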
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 12.3 Configuration Best Practices\n",
    "\n",
    "**When to use stack_num > 1**:\n",
    "- RNN/LSTM policies need temporal context\n",
    "- Frame-based policies (Atari with 4-frame stacking)\n",
    "- Velocity estimation from positions\n",
    "\n",
    "**When to use ignore_obs_next=True**:\n",
    "- Memory-constrained environments\n",
    "- Atari (large observation spaces)\n",
    "- When obs_next can be reconstructed from next obs\n",
    "\n",
    "**When to use save_only_last_obs=True**:\n",
    "- Atari with temporal stacking in environment wrapper\n",
    "- When observations already contain frame history\n",
    "\n",
    "**When to use sample_avail=True**:\n",
    "- Always use with stack_num > 1 for correctness\n",
    "- Ensures samples have complete frame stacks\n",
    "- Small performance cost but worth it for data quality"
   ]
  },
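  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# The recommendations above combined into one Atari-style constructor call;\n",
    "# every keyword argument here was introduced earlier in this tutorial.\n",
    "atari_style_buf = ReplayBuffer(\n",
    "    size=100_000,\n",
    "    stack_num=4,  # 4-frame stacks for temporal context\n",
    "    ignore_obs_next=True,  # Reconstruct obs_next instead of storing it\n",
    "    save_only_last_obs=True,  # Env wrapper already stacks frames\n",
    "    sample_avail=True,  # Only sample indices with complete stacks\n",
    ")\n",
    "print(f\"Configured options: {atari_style_buf.options}\")"
   ]
  },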
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 12.4 Integration Patterns\n",
    "\n",
    "**Pattern 1: Standard Off-Policy Setup**\n",
    "```python\n",
    "# env = make_vectorized_env(num_envs=8)\n",
    "# buffer = VectorReplayBuffer(total_size=100000, buffer_num=8)\n",
    "# policy = SACPolicy(...)\n",
    "# collector = Collector(policy, env, buffer)\n",
    "# \n",
    "# # Collect and train\n",
    "# collector.collect(n_step=1000)\n",
    "# for _ in range(10):\n",
    "#     batch, indices = buffer.sample(256)\n",
    "#     policy.learn(batch)\n",
    "```\n",
    "\n",
    "**Pattern 2: Pre-fill Buffer Before Training**\n",
    "```python\n",
    "# # Collect random exploration data\n",
    "# collector.collect(n_step=10000)  # Fill buffer\n",
    "# \n",
    "# # Then start training\n",
    "# while not converged:\n",
    "#     collector.collect(n_step=100)\n",
    "#     for _ in range(10):\n",
    "#         batch = buffer.sample(256)\n",
    "#         policy.learn(batch)\n",
    "```\n",
    "\n",
    "**Pattern 3: Offline RL**\n",
    "```python\n",
    "# # Load pre-collected dataset\n",
    "# buffer = ReplayBuffer.load_hdf5(\"expert_data.hdf5\")\n",
    "# \n",
    "# # Train without further collection\n",
    "# for epoch in range(num_epochs):\n",
    "#     for _ in range(updates_per_epoch):\n",
    "#         batch = buffer.sample(256)\n",
    "#         policy.learn(batch)\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 12.5 Performance Tips\n",
    "\n",
    "**Tip 1: Pre-allocate buffer size appropriately**\n",
    "- Don't make buffer too large (wastes memory)\n",
    "- Don't make it too small (loses important old experiences)\n",
    "- Start with domain defaults and adjust based on performance\n",
    "\n",
    "**Tip 2: Use HDF5 for large offline datasets**\n",
    "- Compression saves disk space\n",
    "- Faster loading than pickle for large files\n",
    "- Better for sharing across systems\n",
    "\n",
    "**Tip 3: Batch sampling efficiently**\n",
    "- Sample once and use multiple times if possible\n",
    "- Don't sample more than you need\n",
    "- For multi-GPU training, sample once and split\n",
    "\n",
    "**Tip 4: Monitor buffer usage**\n",
    "```python\n",
    "# print(f\"Buffer usage: {len(buffer)}/{buffer.maxsize}\")\n",
    "# if len(buffer) < batch_size:\n",
    "#     print(\"Warning: Sampling with replacement!\")\n",
    "```\n",
    "\n",
    "**Tip 5: Consider ignore_obs_next for large observation spaces**\n",
    "- Can halve memory usage\n",
    "- Small computational overhead on sampling\n",
    "- Especially valuable for image-based RL"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": "## 13. Quick Reference\n\n### Method Summary\n\n| Method | Purpose | Returns | Notes |\n|--------|---------|---------|-------|\n| `add(batch, buffer_ids)` | Add transition(s) | `(idx, ep_rew, ep_len, ep_start)` | ep_rew/ep_len only non-zero when done=True |\n| `sample(size)` | Random sample | `(batch, indices)` | size=None for all (random), 0 for all (ordered) |\n| `prev(idx)` | Previous in episode | `indices` | Stops at episode boundaries |\n| `next(idx)` | Next in episode | `indices` | Stops at episode boundaries |\n| `get(idx, key, stack_num)` | Get with stacking | `data` | Returns stacked frames if stack_num > 1 |\n| `get_buffer_indices(start, stop)` | Episode range | `indices` | Handles edge-crossing episodes |\n| `unfinished_index()` | Ongoing episodes | `indices` | Returns last step of unfinished episodes |\n| `save_hdf5(path)` | Save to HDF5 | - | Recommended for large datasets |\n| `load_hdf5(path)` | Load from HDF5 | `buffer` | Class method |\n| `from_data(...)` | Create from arrays | `buffer` | For offline RL datasets |\n| `reset()` | Clear buffer | - | Optionally keep episode statistics |\n| `sample_indices(size)` | Get indices only | `indices` | For custom sampling logic |\n\n### Common Patterns Cheatsheet\n\n**Single Environment**:\n```python\nbuffer = ReplayBuffer(size=10000)\nbuffer.add(Batch(obs=..., act=..., rew=..., terminated=..., truncated=..., obs_next=..., info={}))\nbatch, indices = buffer.sample(batch_size=256)\n```\n\n**Parallel Environments**:\n```python\nbuffer = VectorReplayBuffer(total_size=100000, buffer_num=8)\nbuffer.add(parallel_batch, buffer_ids=[0,1,2,3,4,5,6,7])\nbatch, indices = buffer.sample(batch_size=256)\n```\n\n**Frame Stacking**:\n```python\nbuffer = ReplayBuffer(size=100000, stack_num=4, sample_avail=True)\nstacked_obs = buffer.get(index=50, key=\"obs\")  # Returns 4 stacked frames\n```\n\n**Prioritized Replay**:\n```python\nbuffer = PrioritizedReplayBuffer(size=100000, alpha=0.6, beta=0.4)\nbatch, indices = buffer.sample(batch_size=256)\nweights = batch.weight  # Importance weights are inside the batch\n# ... compute TD errors ...\nbuffer.update_weight(indices, td_errors)\n```\n\n**Offline RL**:\n```python\nbuffer = ReplayBuffer.load_hdf5(\"dataset.hdf5\")\n# Or:\nwith h5py.File(\"dataset.hdf5\", \"r\") as f:\n    buffer = ReplayBuffer.from_data(obs=f[\"obs\"], act=f[\"act\"], ...)\n```\n\n**Episode Retrieval**:\n```python\n# Find episode boundaries, then:\nepisode_indices = buffer.get_buffer_indices(start=ep_start_idx, stop=ep_end_idx+1)\nepisode = buffer[episode_indices]\n```"
  },
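  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To make the boundary-stopping behavior of `prev`/`next` from the table above concrete, here is a small NumPy sketch (not Tianshou's actual implementation, though it mimics the same index arithmetic) of stepping within an episode:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "def prev_index(index, done, size):\n",
    "    # Step back one transition, but never cross into the previous episode:\n",
    "    # if the candidate predecessor ended an episode (done=True), stay put.\n",
    "    index = np.asarray(index)\n",
    "    candidate = (index - 1) % size\n",
    "    return np.where(done[candidate], index, candidate)\n",
    "\n",
    "def next_index(index, done, size):\n",
    "    # Step forward one transition, but stop at the episode's last step.\n",
    "    index = np.asarray(index)\n",
    "    return np.where(done[index], index, (index + 1) % size)\n",
    "\n",
    "# Two 3-step episodes in a buffer of size 6 (done at indices 2 and 5):\n",
    "done = np.array([False, False, True, False, False, True])\n",
    "print(prev_index(np.arange(6), done, 6))  # [0 0 1 3 3 4]\n",
    "print(next_index(np.arange(6), done, 6))  # [1 2 2 4 5 5]\n",
    "```\n",
    "\n",
    "Note that episode starts map to themselves under `prev_index` and episode ends map to themselves under `next_index`; this fixed-point behavior is what lets repeated `prev`/`next` calls walk to an episode boundary without overshooting it."
   ]
  },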
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Summary and Next Steps\n",
    "\n",
    "This tutorial covered Tianshou's buffer system comprehensively:\n",
    "\n",
    "1. **Buffer fundamentals**: Why buffers are essential for RL\n",
    "2. **Buffer hierarchy**: Understanding different buffer types\n",
    "3. **Basic operations**: Construction, configuration, and data management\n",
    "4. **Trajectory management**: Episode tracking and boundary navigation\n",
    "5. **Sampling strategies**: Basic sampling and frame stacking\n",
    "6. **VectorReplayBuffer**: Critical for parallel environments\n",
    "7. **Specialized buffers**: Prioritized, cached, and HER variants\n",
    "8. **Serialization**: Pickle and HDF5 persistence\n",
    "9. **Integration**: How buffers fit in the RL pipeline\n",
    "10. **Advanced topics**: Edge cases and overflow handling\n",
    "11. **Gotchas**: Common mistakes and how to avoid them\n",
    "12. **Best practices**: Configuration, sizing, and performance\n",
    "13. **Quick reference**: Method summary and common patterns\n",
    "\n",
    "### Next Steps\n",
    "\n",
    "- **Collector Deep Dive**: Learn how Collector fills buffers from environments\n",
    "- **Policy Tutorial**: Understand how policies sample from buffers for training\n",
    "- **Algorithm Examples**: See buffer usage in specific algorithms (DQN, SAC, PPO)\n",
    "- **API Reference**: Full details at [Buffer API documentation](https://tianshou.org/en/stable/api/tianshou.data.html)\n",
    "\n",
    "### Further Resources\n",
    "\n",
    "- [Tianshou GitHub](https://github.com/thu-ml/tianshou) for source code and examples\n",
    "- [Gymnasium Documentation](https://gymnasium.farama.org/) for environment conventions\n",
    "- Research papers on experience replay and prioritized sampling"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.11.0"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
