{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Different types of environments\n",
    "\n",
    "We learned that the environment is the world of the agent and that the agent lives within the environment. We can categorize environments into different types as follows:\n",
    "\n",
    "## Deterministic and stochastic environment\n",
    "\n",
    "__Deterministic environment__ - In a deterministic environment, we can be certain that when an agent performs an action $a$ in state $s$, it always reaches the same state $s'$. For example, consider our grid world environment: if the agent is in state A and performs the action down, it always reaches state D, and so the environment is called a deterministic environment:\n",
    "\n",
    "\n",
    "\n",
    "![title](Images/35.png)\n",
    "\n",
    "__Stochastic environment__ - In a stochastic environment, we cannot say that performing an action $a$ in state $s$ always takes the agent to the same state $s'$, because some randomness is associated with the environment. For example, suppose our grid world environment is a stochastic environment. If the agent is in state A and performs the action down, it does not always reach state D; instead, it reaches state D 70% of the time and state B 30% of the time. That is, performing the action down in state A takes the agent to state D with 70% probability and to state B with 30% probability, as shown below:\n",
    "\n",
    "\n",
    "![title](Images/36.png)\n",
    "\n",
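    "The difference can be sketched in a few lines of Python (the states and transitions below are the hypothetical grid world from the figures): a deterministic transition is a plain lookup, while a stochastic transition samples the next state from a probability distribution:\n",
    "\n",
    "```python\n",
    "import random\n",
    "\n",
    "# Deterministic: each (state, action) pair maps to exactly one next state.\n",
    "deterministic_step = {(\"A\", \"down\"): \"D\"}\n",
    "\n",
    "def stochastic_step(state, action):\n",
    "    # Stochastic: from state A, the action down reaches D with\n",
    "    # probability 0.7 and B with probability 0.3, as in the example above.\n",
    "    if (state, action) == (\"A\", \"down\"):\n",
    "        return random.choices([\"D\", \"B\"], weights=[0.7, 0.3])[0]\n",
    "    raise ValueError(\"undefined transition\")\n",
    "```\n",
    "\n",
    "Calling `deterministic_step[(\"A\", \"down\")]` always returns `\"D\"`, whereas repeated calls to `stochastic_step(\"A\", \"down\")` return `\"D\"` about 70% of the time and `\"B\"` about 30% of the time.\n",
    "\n",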
    "## Discrete and continuous environment\n",
    "\n",
    "__Discrete environment__ - When the action space of the environment is discrete, the environment is called a discrete environment. For instance, the grid world environment has a discrete action space consisting of the actions [up, down, left, right], and so it is a discrete environment.\n",
    "\n",
    "__Continuous environment__ - When the action space of the environment is continuous, the environment is called a continuous environment. For instance, suppose we are training an agent to drive a car; the action space is then continuous, with actions such as the speed at which to drive and the angle by which to turn the steering wheel.\n",
    "\n",
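    "As a minimal sketch, a discrete action space can be represented as a finite set of actions, while a continuous action space assigns a range of values to each action dimension (the driving dimensions and bounds below are illustrative assumptions, not from any specific environment):\n",
    "\n",
    "```python\n",
    "# Discrete action space: a finite set of actions, as in the grid world.\n",
    "discrete_actions = [\"up\", \"down\", \"left\", \"right\"]\n",
    "\n",
    "# Continuous action space: each dimension takes any value in a range\n",
    "# (these dimensions and bounds are illustrative assumptions).\n",
    "continuous_action_space = {\n",
    "    \"speed_kmh\": (0.0, 120.0),\n",
    "    \"steering_deg\": (-45.0, 45.0),\n",
    "}\n",
    "\n",
    "def is_valid_action(action):\n",
    "    # A continuous action is valid if every dimension lies within its range.\n",
    "    return all(lo <= action[k] <= hi\n",
    "               for k, (lo, hi) in continuous_action_space.items())\n",
    "```\n",
    "\n",
    "The discrete agent picks one of four actions, while the continuous agent must choose a real value for every dimension, which is why continuous control typically requires different algorithms than discrete control.\n",
    "\n",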
    "## Episodic and non-episodic environment\n",
    "\n",
    "__Episodic environment__ - In an episodic environment, an agent's current action does not affect its future actions, and thus an episodic environment is also called a non-sequential environment.\n",
    "\n",
    "__Non-episodic environment__ - In a non-episodic environment, an agent's current action affects its future actions, and thus a non-episodic environment is also called a sequential environment. For example, chess is a sequential environment, since the agent's current move affects its future moves in the game.\n",
    "\n",
    "## Single and multi-agent environment\n",
    "\n",
    "__Single-agent environment__ - When our environment consists of only a single agent, it is called a single-agent environment.\n",
    "\n",
    "__Multi-agent environment__ - When our environment consists of multiple agents, it is called a multi-agent environment."
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.6.9"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
