{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Environments\n",
    "\n",
    "In reinforcement learning, agents interact with environments to improve their performance through trial and error. This tutorial explores how Tianshou handles environments, from basic single-environment setups to advanced vectorized and parallel configurations.\n",
    "\n",
    "<div style=\"text-align: center; padding: 1rem;\">\n",
    "<img src=\"../_static/images/rl-loop.jpg\" style=\"width: 60%; padding-bottom: 1rem;\"><br>\n",
    "The agent-environment interaction loop\n",
    "</div>\n",
    "\n",
    "Tianshou maintains full compatibility with the [Gymnasium](https://gymnasium.farama.org/) API (formerly OpenAI Gym), making it easy to use any Gymnasium-compatible environment."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## The Bottleneck Problem\n",
    "\n",
    "In a standard Gymnasium environment, each interaction follows a sequential pattern:\n",
    "\n",
    "1. Agent selects an action\n",
    "2. Environment processes the action and returns observation and reward\n",
    "3. Repeat\n",
    "\n",
    "This sequential process can become a significant bottleneck in deep reinforcement learning experiments, especially when:\n",
    "- The environment simulation is computationally intensive\n",
    "- Network training is fast but data collection is slow\n",
    "- You have multiple CPU cores available but aren't using them\n",
    "\n",
    "Tianshou addresses this bottleneck through **vectorized environments**, which allow parallel sampling across multiple CPU cores."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Vectorized Environments\n",
    "\n",
    "Vectorized environments enable you to run multiple environment instances in parallel, dramatically accelerating data collection. Let's see this in action."
   ]
  },
  {
   "cell_type": "code",
   "metadata": {},
   "source": [
    "import time\n",
    "\n",
    "import gymnasium as gym\n",
    "import numpy as np\n",
    "\n",
    "from tianshou.env import DummyVectorEnv, SubprocVectorEnv"
   ],
   "outputs": [],
   "execution_count": null
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Performance Comparison\n",
    "\n",
    "Let's compare the sampling speed with different numbers of parallel environments:"
   ]
  },
  {
   "cell_type": "code",
   "metadata": {},
   "source": [
    "num_cpus = [1, 2, 5]\n",
    "\n",
    "for num_cpu in num_cpus:\n",
    "    # Create vectorized environment with multiple processes\n",
    "    env = SubprocVectorEnv([lambda: gym.make(\"CartPole-v1\") for _ in range(num_cpu)])\n",
    "    env.reset()\n",
    "\n",
    "    sampled_steps = 0\n",
    "    time_start = time.time()\n",
    "\n",
    "    # Sample 1000 steps\n",
    "    while sampled_steps < 1000:\n",
    "        act = np.random.choice(2, size=num_cpu)\n",
    "        obs, rew, terminated, truncated, info = env.step(act)\n",
    "\n",
    "        # Reset environments whose episodes ended (terminated or truncated)\n",
    "        done = np.logical_or(terminated, truncated)\n",
    "        if np.any(done):\n",
    "            env.reset(np.where(done)[0])\n",
    "\n",
    "        sampled_steps += num_cpu\n",
    "\n",
    "    time_used = time.time() - time_start\n",
    "    print(f\"Sampled 1000 steps in {time_used:.3f}s using {num_cpu} CPU(s)\")\n",
    "    print(f\"  → Speed: {1000 / time_used:.1f} steps/second\")"
   ],
   "outputs": [],
   "execution_count": null
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Understanding the Results\n",
    "\n",
    "You might notice that the speedup isn't perfectly linear with the number of CPUs. Several factors contribute to this:\n",
    "\n",
    "1. **Straggler Effect**: In synchronous mode, all environments must complete before the next batch begins. Slower environments hold back faster ones.\n",
    "2. **Communication Overhead**: Inter-process communication has costs, especially for fast environments.\n",
    "3. **Environment Complexity**: For simple environments like CartPole, the overhead may outweigh the benefits.\n",
    "\n",
    "> **Important**: `SubprocVectorEnv` should only be used when environment execution is slow. For simple, fast environments like CartPole, `DummyVectorEnv` (or even raw Gymnasium environments) can be more efficient because they avoid both the straggler effect and inter-process communication overhead."
   ]
  },
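  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To make this concrete, the cell below reruns the same benchmark with `DummyVectorEnv` (a sketch using the same random policy and step budget): for a fast environment like CartPole, the plain for-loop version often matches or beats the multi-process one."
   ]
  },
  {
   "cell_type": "code",
   "metadata": {},
   "source": [
    "import time\n",
    "\n",
    "import gymnasium as gym\n",
    "import numpy as np\n",
    "\n",
    "from tianshou.env import DummyVectorEnv\n",
    "\n",
    "num_envs = 4\n",
    "env = DummyVectorEnv([lambda: gym.make(\"CartPole-v1\") for _ in range(num_envs)])\n",
    "env.reset()\n",
    "\n",
    "sampled_steps = 0\n",
    "time_start = time.time()\n",
    "\n",
    "while sampled_steps < 1000:\n",
    "    act = np.random.choice(2, size=num_envs)\n",
    "    obs, rew, terminated, truncated, info = env.step(act)\n",
    "\n",
    "    # Reset environments whose episodes have ended\n",
    "    done = np.logical_or(terminated, truncated)\n",
    "    if np.any(done):\n",
    "        env.reset(np.where(done)[0])\n",
    "\n",
    "    sampled_steps += num_envs\n",
    "\n",
    "time_used = time.time() - time_start\n",
    "print(f\"DummyVectorEnv: 1000 steps in {time_used:.3f}s ({1000 / time_used:.1f} steps/s)\")"
   ],
   "outputs": [],
   "execution_count": null
  },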
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Types of Vectorized Environments\n",
    "\n",
    "Tianshou provides several vectorized environment implementations, each optimized for different scenarios:\n",
    "\n",
    "### 1. DummyVectorEnv\n",
    "**Pseudo-parallel simulation using a for-loop**\n",
    "- Best for: Simple/fast environments, debugging\n",
    "- Pros: No overhead, deterministic execution\n",
    "- Cons: No actual parallelization\n",
    "\n",
    "### 2. SubprocVectorEnv\n",
    "**Multiple processes for true parallel simulation**\n",
    "- Best for: Most parallel simulation scenarios\n",
    "- Pros: True parallelization, good balance\n",
    "- Cons: Inter-process communication overhead\n",
    "\n",
    "### 3. ShmemVectorEnv\n",
    "**Shared memory optimization of SubprocVectorEnv**\n",
    "- Best for: Environments with large observations (e.g., images)\n",
    "- Pros: Reduced memory footprint, faster for large states\n",
    "- Cons: More complex implementation\n",
    "\n",
    "### 4. RayVectorEnv\n",
    "**Ray-based distributed simulation**\n",
    "- Best for: Cluster computing with multiple machines\n",
    "- Pros: Scales to multiple machines\n",
    "- Cons: Requires Ray installation and setup\n",
    "\n",
    "All these classes share the same API through their base class `BaseVectorEnv`, making it easy to switch between them."
   ]
  },
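  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Because all implementations share the `BaseVectorEnv` interface, the choice can be reduced to a single line. A small sketch (the `use_subproc` flag is purely illustrative):"
   ]
  },
  {
   "cell_type": "code",
   "metadata": {},
   "source": [
    "import gymnasium as gym\n",
    "\n",
    "from tianshou.env import DummyVectorEnv, SubprocVectorEnv\n",
    "\n",
    "use_subproc = False  # illustrative flag: flip to True for true parallelism\n",
    "vector_env_class = SubprocVectorEnv if use_subproc else DummyVectorEnv\n",
    "\n",
    "# The rest of the code is identical for every implementation\n",
    "venv = vector_env_class([lambda: gym.make(\"CartPole-v1\") for _ in range(4)])\n",
    "obs, info = venv.reset()\n",
    "print(f\"{vector_env_class.__name__}: reset() returned obs of shape {obs.shape}\")\n",
    "venv.close()"
   ],
   "outputs": [],
   "execution_count": null
  },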
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Basic Usage\n",
    "\n",
    "### Creating a Vectorized Environment"
   ]
  },
  {
   "cell_type": "code",
   "metadata": {},
   "source": [
    "# Standard Gymnasium environment\n",
    "gym_env = gym.make(\"CartPole-v1\")\n",
    "\n",
    "\n",
    "# Tianshou vectorized environment\n",
    "def create_cartpole_env() -> gym.Env:\n",
    "    return gym.make(\"CartPole-v1\")\n",
    "\n",
    "\n",
    "# Create 5 parallel environments\n",
    "vector_env = DummyVectorEnv([create_cartpole_env for _ in range(5)])\n",
    "\n",
    "print(f\"Created vectorized environment with {vector_env.env_num} environments\")"
   ],
   "outputs": [],
   "execution_count": null
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Environment Interaction\n",
    "\n",
    "The key difference from standard Gymnasium is that actions, observations, and rewards are all vectorized:"
   ]
  },
  {
   "cell_type": "code",
   "metadata": {},
   "source": [
    "# Standard Gymnasium: reset() returns a single observation\n",
    "print(\"Standard Gymnasium reset:\")\n",
    "single_obs, info = gym_env.reset()\n",
    "print(f\"  Shape: {single_obs.shape}\")\n",
    "print(f\"  Value: {single_obs}\")\n",
    "\n",
    "print(\"\\n\" + \"=\" * 50 + \"\\n\")\n",
    "\n",
    "# Vectorized environment: reset() returns stacked observations\n",
    "print(\"Vectorized environment reset:\")\n",
    "vector_obs, info = vector_env.reset()\n",
    "print(f\"  Shape: {vector_obs.shape}\")\n",
    "print(f\"  Value:\\n{vector_obs}\")"
   ],
   "outputs": [],
   "execution_count": null
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Taking Vectorized Steps"
   ]
  },
  {
   "cell_type": "code",
   "metadata": {},
   "source": [
    "# Take random actions in all environments\n",
    "actions = np.random.choice(2, size=vector_env.env_num)\n",
    "obs, rew, terminated, truncated, info = vector_env.step(actions)\n",
    "\n",
    "print(f\"Actions taken: {actions}\")\n",
    "print(f\"Rewards received: {rew}\")\n",
    "print(f\"Terminated flags: {terminated}\")\n",
    "print(f\"Info: {info}\")"
   ],
   "outputs": [],
   "execution_count": null
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Selective Environment Execution\n",
    "\n",
    "You can interact with specific environments using the `id` parameter:"
   ]
  },
  {
   "cell_type": "code",
   "metadata": {},
   "source": [
    "# Execute only environments 0, 1, and 3\n",
    "selected_actions = np.random.choice(2, size=3)\n",
    "obs, rew, terminated, truncated, info = vector_env.step(selected_actions, id=[0, 1, 3])\n",
    "\n",
    "print(\"Executed actions in environments [0, 1, 3]\")\n",
    "print(f\"Received {len(rew)} results\")"
   ],
   "outputs": [],
   "execution_count": null
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Parallel Sampling: Synchronous vs Asynchronous\n",
    "\n",
    "### Synchronous Mode (Default)\n",
    "\n",
    "By default, vectorized environments operate synchronously: a step completes only after **all** environments finish their step. This works well when all environments take roughly the same time per step.\n",
    "\n",
    "### Asynchronous Mode\n",
    "\n",
    "When environment step times vary significantly (e.g., 90% of steps take 1s, but 10% take 10s), asynchronous mode can help. It allows faster environments to continue without waiting for slower ones.\n",
    "\n",
    "<div style=\"text-align: center; padding: 1rem;\">\n",
    "<img src=\"../_static/images/async.png\" style=\"width: 70%; padding-bottom: 1rem;\"><br>\n",
    "Comparison of synchronous and asynchronous vectorized environments<br>\n",
    "(Steps with the same color are processed together)\n",
    "</div>"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Enabling Asynchronous Mode\n",
    "\n",
    "Use the `wait_num` or `timeout` parameters (or both):"
   ]
  },
  {
   "cell_type": "code",
   "metadata": {},
   "source": [
    "from functools import partial\n",
    "\n",
    "\n",
    "# Create environments with varying step times\n",
    "class SlowEnv(gym.Env):\n",
    "    \"\"\"Environment with variable step duration.\"\"\"\n",
    "\n",
    "    def __init__(self, sleep_time):\n",
    "        super().__init__()\n",
    "        self.sleep_time = sleep_time\n",
    "        self.observation_space = gym.spaces.Box(low=0, high=1, shape=(4,))\n",
    "        self.action_space = gym.spaces.Discrete(2)\n",
    "\n",
    "    def reset(self, seed=None, options=None):\n",
    "        super().reset(seed=seed)  # seeds self.np_random\n",
    "        return self.np_random.random(4), {}\n",
    "\n",
    "    def step(self, action):\n",
    "        time.sleep(self.sleep_time)  # Simulate slow computation\n",
    "        return self.np_random.random(4), 0.0, False, False, {}\n",
    "\n",
    "\n",
    "# Create async vectorized environment\n",
    "env_fns = [partial(SlowEnv, sleep_time=0.01 * i) for i in [1, 2, 3, 4]]\n",
    "async_env = SubprocVectorEnv(env_fns, wait_num=3, timeout=0.1)\n",
    "\n",
    "print(\"Asynchronous environment created\")\n",
    "print(\"  wait_num=3: Returns after 3 environments complete\")\n",
    "print(\"  timeout=0.1: Or after 0.1 seconds, whichever comes first\")"
   ],
   "outputs": [],
   "execution_count": null
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### How Async Parameters Work\n",
    "\n",
    "- **`wait_num`**: Minimum number of environments to wait for (e.g., `wait_num=3` means each step returns results from at least 3 environments)\n",
    "- **`timeout`**: Maximum time to wait in seconds (acts as a dynamic `wait_num`—returns whatever is ready after timeout)\n",
    "- If no environment finishes within the timeout, the system waits until at least one completes\n",
    "\n",
    "> **Warning**: Asynchronous collectors can cause exceptions when used as `test_collector` in trainers. Always use synchronous mode for test collectors."
   ]
  },
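  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick sketch of asynchronous stepping (using CartPole only for brevity; in practice async mode pays off when step times vary a lot): the first `step()` dispatches actions to all environments and returns as soon as `wait_num` of them have finished, so a single call may yield fewer results than there are environments."
   ]
  },
  {
   "cell_type": "code",
   "metadata": {},
   "source": [
    "import gymnasium as gym\n",
    "import numpy as np\n",
    "\n",
    "from tianshou.env import SubprocVectorEnv\n",
    "\n",
    "async_cartpole = SubprocVectorEnv(\n",
    "    [lambda: gym.make(\"CartPole-v1\") for _ in range(4)],\n",
    "    wait_num=3,  # each step() returns once at least 3 environments finish\n",
    ")\n",
    "async_cartpole.reset()\n",
    "\n",
    "# May return results for only 3 of the 4 environments\n",
    "obs, rew, terminated, truncated, info = async_cartpole.step(np.random.choice(2, size=4))\n",
    "print(f\"Received {len(rew)} result(s) from one step() call\")\n",
    "async_cartpole.close()"
   ],
   "outputs": [],
   "execution_count": null
  },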
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## EnvPool Integration\n",
    "\n",
    "[EnvPool](https://github.com/sail-sg/envpool/) is a C++-based vectorized environment library that provides significant performance improvements over Python-based solutions for many of the standard environments. Tianshou fully supports EnvPool with minimal code changes.\n",
    "\n",
    "### Why EnvPool?\n",
    "\n",
    "- **Performance**: 10x-100x faster than standard vectorized environments for supported environments\n",
    "- **Memory Efficient**: Optimized memory usage through shared buffers\n",
    "- **Drop-in Replacement**: Nearly identical API to Tianshou's vectorized environments\n",
    "\n",
    "### Supported Environments\n",
    "\n",
    "EnvPool currently supports:\n",
    "- Atari games\n",
    "- MuJoCo physics simulations\n",
    "- VizDoom 3D environments\n",
    "- Classic control environments\n",
    "- Toy text environments"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Using EnvPool\n",
    "\n",
    "First, install EnvPool:\n",
    "\n",
    "```bash\n",
    "pip install envpool\n",
    "```\n",
    "\n",
    "Then use it directly with Tianshou:\n",
    "\n",
    "```python\n",
    "import envpool\n",
    "\n",
    "# Create EnvPool vectorized environment\n",
    "envs = envpool.make_gymnasium(\"CartPole-v1\", num_envs=10)\n",
    "\n",
    "print(f\"Created EnvPool environment with {envs.spec.config.num_envs} environments\")\n",
    "print(\"Ready to use with Tianshou collectors!\")\n",
    "\n",
    "# Use directly with Tianshou\n",
    "collector = Collector(algorithm, envs, buffer)\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### EnvPool Examples\n",
    "\n",
    "For complete examples of using EnvPool with Tianshou:\n",
    "- [Atari with EnvPool](https://github.com/thu-ml/tianshou/tree/master/examples/atari#envpool)\n",
    "- [MuJoCo with EnvPool](https://github.com/thu-ml/tianshou/tree/master/examples/mujoco#envpool)\n",
    "- [VizDoom with EnvPool](https://github.com/thu-ml/tianshou/tree/master/examples/vizdoom#envpool)\n",
    "- [More EnvPool Examples](https://github.com/sail-sg/envpool/tree/master/examples/tianshou_examples)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Custom Environments and State Representations\n",
    "\n",
    "Tianshou works seamlessly with custom environments as long as they follow the Gymnasium API. Let's explore how to handle different state representations."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Required Gymnasium API\n",
    "\n",
    "Your custom environment must implement:\n",
    "\n",
    "```python\n",
    "class MyEnv(gym.Env):\n",
    "    def reset(self, seed=None, options=None) -> Tuple[observation, info]:\n",
    "        \"\"\"Reset environment to initial state.\"\"\"\n",
    "        pass\n",
    "    \n",
    "    def step(self, action) -> Tuple[observation, reward, terminated, truncated, info]:\n",
    "        \"\"\"Execute one step in the environment.\"\"\"\n",
    "        pass\n",
    "    \n",
    "    def seed(self, seed: int) -> List[int]:\n",
    "        \"\"\"Set random seed (legacy; Gymnasium now seeds via reset(seed=...)).\"\"\"\n",
    "        pass\n",
    "    \n",
    "    def render(self) -> Any:\n",
    "        \"\"\"Render the environment (the mode is set via render_mode in __init__).\"\"\"\n",
    "        pass\n",
    "    \n",
    "    def close(self) -> None:\n",
    "        \"\"\"Clean up resources.\"\"\"\n",
    "        pass\n",
    "    \n",
    "    # Required spaces\n",
    "    observation_space: gym.Space\n",
    "    action_space: gym.Space\n",
    "```\n",
    "\n",
    "> **Important**: Make sure seeding is handled correctly. In Gymnasium, `reset(seed=...)` is the primary seeding mechanism: call `super().reset(seed=seed)` and draw randomness from `self.np_random`. If your environment also uses other generators (the global `np.random`, Python's `random`, ...), seed those too:\n",
    "> ```python\n",
    "> def reset(self, seed=None, options=None):\n",
    ">     super().reset(seed=seed)  # seeds self.np_random\n",
    ">     if seed is not None:\n",
    ">         np.random.seed(seed)  # only if your env uses the global generator\n",
    ">     ...\n",
    "> ```\n",
    "> Without proper seeding, parallel environments may produce identical outputs!"
   ]
  },
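  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick check that seeding behaves as intended, here is a minimal sketch (the `NoisyEnv` class is made up for illustration): two copies reset with the same seed should produce identical observations, while different seeds should diverge."
   ]
  },
  {
   "cell_type": "code",
   "metadata": {},
   "source": [
    "import gymnasium as gym\n",
    "import numpy as np\n",
    "\n",
    "\n",
    "class NoisyEnv(gym.Env):\n",
    "    \"\"\"Toy environment whose observations are drawn from self.np_random.\"\"\"\n",
    "\n",
    "    def __init__(self):\n",
    "        super().__init__()\n",
    "        self.observation_space = gym.spaces.Box(low=0, high=1, shape=(4,))\n",
    "        self.action_space = gym.spaces.Discrete(2)\n",
    "\n",
    "    def reset(self, seed=None, options=None):\n",
    "        super().reset(seed=seed)  # seeds self.np_random\n",
    "        return self.np_random.random(4), {}\n",
    "\n",
    "    def step(self, action):\n",
    "        return self.np_random.random(4), 0.0, False, False, {}\n",
    "\n",
    "\n",
    "env_a, env_b = NoisyEnv(), NoisyEnv()\n",
    "obs_a, _ = env_a.reset(seed=0)\n",
    "obs_b, _ = env_b.reset(seed=0)\n",
    "print(np.allclose(obs_a, obs_b))  # True: same seed, same stream\n",
    "\n",
    "obs_b, _ = env_b.reset(seed=1)\n",
    "print(np.allclose(obs_a, obs_b))  # False: different seeds diverge"
   ],
   "outputs": [],
   "execution_count": null
  },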
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Dictionary Observations\n",
    "\n",
    "Many environments return observations as dictionaries rather than simple arrays. Tianshou's `Batch` class handles this elegantly.\n",
    "\n",
    "Example with the FetchReach environment:"
   ]
  },
  {
   "cell_type": "code",
   "metadata": {},
   "source": [
    "from tianshou.data import Batch, ReplayBuffer\n",
    "\n",
    "# Example: Creating a mock observation similar to FetchReach\n",
    "observation = {\n",
    "    \"observation\": np.array([1.34, 0.75, 0.53, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]),\n",
    "    \"achieved_goal\": np.array([1.34, 0.75, 0.53]),\n",
    "    \"desired_goal\": np.array([1.24, 0.78, 0.63]),\n",
    "}\n",
    "\n",
    "# Store in replay buffer\n",
    "buffer = ReplayBuffer(size=10)\n",
    "buffer.add(Batch(obs=observation, act=0, rew=0.0, terminated=False, truncated=False))\n",
    "\n",
    "print(\"Stored observation structure:\")\n",
    "print(buffer.obs)"
   ],
   "outputs": [],
   "execution_count": null
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Accessing Dictionary Observations\n",
    "\n",
    "When sampling from the buffer, you can access nested dictionary values in multiple ways:"
   ]
  },
  {
   "cell_type": "code",
   "metadata": {},
   "source": [
    "# Sample a batch\n",
    "batch, indices = buffer.sample(batch_size=1)\n",
    "\n",
    "print(\"Batch keys:\", list(batch.keys()))\n",
    "print(\"\\nAccessing nested observation:\")\n",
    "\n",
    "# Recommended way: access through batch first\n",
    "print(\"batch.obs.desired_goal[0]:\", batch.obs.desired_goal[0])\n",
    "\n",
    "# Alternative ways (not recommended)\n",
    "print(\"batch.obs[0].desired_goal:\", batch.obs[0].desired_goal)\n",
    "print(\"batch[0].obs.desired_goal:\", batch[0].obs.desired_goal)"
   ],
   "outputs": [],
   "execution_count": null
  },
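  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Dictionary observations also stack naturally: `Batch.stack` combines observations from several parallel environments into one `Batch`, stacking every nested key along a new leading axis. A small sketch with made-up goal values:"
   ]
  },
  {
   "cell_type": "code",
   "metadata": {},
   "source": [
    "import numpy as np\n",
    "\n",
    "from tianshou.data import Batch\n",
    "\n",
    "# Dictionary observations from two hypothetical parallel environments\n",
    "obs_per_env = [\n",
    "    {\"achieved_goal\": np.zeros(3), \"desired_goal\": np.ones(3)},\n",
    "    {\"achieved_goal\": np.ones(3), \"desired_goal\": np.zeros(3)},\n",
    "]\n",
    "\n",
    "# Every nested key is stacked along the first axis\n",
    "stacked = Batch.stack([Batch(obs) for obs in obs_per_env])\n",
    "print(f\"achieved_goal shape: {stacked.achieved_goal.shape}\")  # (2, 3)\n",
    "print(f\"desired_goal shape:  {stacked.desired_goal.shape}\")  # (2, 3)"
   ],
   "outputs": [],
   "execution_count": null
  },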
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Using Dictionary Observations in Networks\n",
    "\n",
    "When designing networks for environments with dictionary observations:"
   ]
  },
  {
   "cell_type": "code",
   "metadata": {},
   "source": [
    "import torch\n",
    "import torch.nn as nn\n",
    "\n",
    "\n",
    "class CustomNetwork(nn.Module):\n",
    "    \"\"\"Network that processes dictionary observations.\"\"\"\n",
    "\n",
    "    def __init__(self, obs_dim, goal_dim, hidden_dim, action_dim):\n",
    "        super().__init__()\n",
    "\n",
    "        # Separate processing for different observation components\n",
    "        self.obs_encoder = nn.Linear(obs_dim, hidden_dim)\n",
    "        self.goal_encoder = nn.Linear(goal_dim * 2, hidden_dim)  # achieved + desired\n",
    "\n",
    "        # Combined processing\n",
    "        self.fc = nn.Sequential(\n",
    "            nn.Linear(hidden_dim * 2, hidden_dim), nn.ReLU(), nn.Linear(hidden_dim, action_dim)\n",
    "        )\n",
    "\n",
    "    def forward(self, obs_batch, **kwargs):\n",
    "        # Extract components from the batch\n",
    "        observation = obs_batch.observation\n",
    "        achieved_goal = obs_batch.achieved_goal\n",
    "        desired_goal = obs_batch.desired_goal\n",
    "\n",
    "        # Process each component\n",
    "        obs_feat = self.obs_encoder(observation)\n",
    "        goal_feat = self.goal_encoder(torch.cat([achieved_goal, desired_goal], dim=-1))\n",
    "\n",
    "        # Combine and output\n",
    "        combined = torch.cat([obs_feat, goal_feat], dim=-1)\n",
    "        return self.fc(combined)\n",
    "\n",
    "\n",
    "# Example usage\n",
    "net = CustomNetwork(obs_dim=10, goal_dim=3, hidden_dim=64, action_dim=4)\n",
    "print(\"Network created for dictionary observations\")\n",
    "print(\"  Input: observation (10D) + achieved_goal (3D) + desired_goal (3D)\")\n",
    "print(\"  Output: actions (4D)\")"
   ],
   "outputs": [],
   "execution_count": null
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Custom Object States\n",
    "\n",
    "For more complex state representations (e.g., graphs, custom objects), Tianshou stores references in numpy arrays. However, you must ensure deep copies to avoid state aliasing:"
   ]
  },
  {
   "cell_type": "code",
   "metadata": {},
   "source": [
    "import copy\n",
    "\n",
    "import networkx as nx\n",
    "\n",
    "\n",
    "class GraphEnv(gym.Env):\n",
    "    \"\"\"Example environment with graph-based states.\"\"\"\n",
    "\n",
    "    def __init__(self):\n",
    "        super().__init__()\n",
    "        self.graph = nx.Graph()\n",
    "        self.action_space = gym.spaces.Discrete(5)\n",
    "        self.observation_space = gym.spaces.Box(low=0, high=1, shape=(10,))  # for compatibility\n",
    "\n",
    "    def reset(self, seed=None, options=None):\n",
    "        super().reset(seed=seed)\n",
    "        self.graph = nx.erdos_renyi_graph(10, 0.3)\n",
    "        # IMPORTANT: Return deep copy to avoid reference issues\n",
    "        return copy.deepcopy(self.graph), {}\n",
    "\n",
    "    def step(self, action):\n",
    "        # Modify graph based on action\n",
    "        if action < 4 and len(self.graph.nodes) > 0:\n",
    "            nodes = list(self.graph.nodes)\n",
    "            if len(nodes) >= 2:\n",
    "                self.graph.add_edge(nodes[0], nodes[1])\n",
    "\n",
    "        # IMPORTANT: Return deep copy\n",
    "        return copy.deepcopy(self.graph), 0.0, False, False, {}\n",
    "\n",
    "\n",
    "# Test storing graph objects\n",
    "graph_buffer = ReplayBuffer(size=5)\n",
    "env = GraphEnv()\n",
    "obs, _ = env.reset()\n",
    "graph_buffer.add(Batch(obs=obs, act=0, rew=0.0, terminated=False, truncated=False))\n",
    "\n",
    "print(\"Graph objects stored in buffer:\")\n",
    "print(graph_buffer.obs)"
   ],
   "outputs": [],
   "execution_count": null
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "> **Important**: When using custom objects as states:\n",
    "> 1. Always return `copy.deepcopy(state)` in both `reset()` and `step()`\n",
    "> 2. Ensure the object is numpy-compatible: `np.array([your_object])` should not result in an empty array\n",
    "> 3. The object may be stored as a shallow copy in the buffer—deep copying prevents state aliasing"
   ]
  },
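  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The aliasing problem itself has nothing to do with RL machinery; this minimal sketch (with a made-up `MutableState` class) shows why the deep copy matters:"
   ]
  },
  {
   "cell_type": "code",
   "metadata": {},
   "source": [
    "import copy\n",
    "\n",
    "\n",
    "class MutableState:\n",
    "    \"\"\"Stand-in for a mutable environment state such as a graph.\"\"\"\n",
    "\n",
    "    def __init__(self):\n",
    "        self.edges = []\n",
    "\n",
    "\n",
    "state = MutableState()\n",
    "\n",
    "shallow_snapshot = state  # stores only a reference\n",
    "deep_snapshot = copy.deepcopy(state)  # stores an independent copy\n",
    "\n",
    "state.edges.append((0, 1))  # the environment keeps mutating its state\n",
    "\n",
    "print(shallow_snapshot.edges)  # [(0, 1)] -- the \"stored\" state changed too!\n",
    "print(deep_snapshot.edges)  # [] -- the deep copy is unaffected"
   ],
   "outputs": [],
   "execution_count": null
  },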
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Best Practices Summary\n",
    "\n",
    "### Choosing the Right Environment Wrapper\n",
    "\n",
    "| Scenario | Recommended Wrapper | Why |\n",
    "|----------|-------------------|-----|\n",
    "| Simple/fast environments | `DummyVectorEnv` or raw Gym | Minimal overhead |\n",
    "| Most parallel scenarios | `SubprocVectorEnv` | Good balance of speed and simplicity |\n",
    "| Large observations (images) | `ShmemVectorEnv` | Optimized memory usage |\n",
    "| Multi-machine clusters | `RayVectorEnv` | Distributed computing support |\n",
    "| Maximum performance | EnvPool | C++-based, 10x-100x speedup |\n",
    "\n",
    "### Performance Tips\n",
    "\n",
    "1. **Profile First**: Measure whether environment or training is your bottleneck before optimizing\n",
    "2. **Start Simple**: Begin with `DummyVectorEnv` for debugging, then upgrade to parallel versions\n",
    "3. **Use EnvPool**: If your environment is supported, EnvPool offers the best performance\n",
    "4. **Async for Variable Times**: Use asynchronous mode only when environment step times vary significantly\n",
    "5. **Proper Seeding**: Always handle seeding correctly in custom environments (Gymnasium seeds via `reset(seed=...)`)\n",
    "\n",
    "### Common Pitfalls\n",
    "\n",
    "- ❌ Using `SubprocVectorEnv` for fast environments → Use `DummyVectorEnv` instead\n",
    "- ❌ Forgetting to deep-copy custom states → States will be aliased in the buffer\n",
    "- ❌ Not handling seeding properly via `reset(seed=...)` → Parallel environments produce identical results\n",
    "- ❌ Using async collectors for testing → Causes exceptions in trainers\n",
    "- ❌ Assuming linear speedup → Account for communication overhead and straggler effects"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Further Reading\n",
    "\n",
    "- **Tianshou Documentation**: [Environment API Reference](https://tianshou.org/en/master/03_api/env/venvs.html)\n",
    "- **EnvPool**: [Official Documentation](https://envpool.readthedocs.io/)\n",
    "- **Gymnasium**: [Environment Creation Tutorial](https://gymnasium.farama.org/tutorials/gymnasium_basics/environment_creation/)\n",
    "- **Ray**: [Distributed RL with Ray](https://docs.ray.io/en/latest/rllib/index.html)"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.11.4"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
