{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Hindsight Experience Replay"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "We have seen how experience replay is used in DQN to avoid correlated experiences.\n",
     "We also learned about prioritized experience replay, which improves on vanilla\n",
     "experience replay by prioritizing each experience according to its TD error. Now we will look at a\n",
     "new technique called hindsight experience replay (HER), proposed by OpenAI researchers for\n",
     "dealing with sparse rewards.\n",
    "\n",
     "Do you remember how you learned to ride a bike? On your\n",
     "first try, you wouldn't have balanced the bike properly. You would have failed several\n",
     "times to balance correctly. But all those failures don't mean you learned\n",
     "nothing. The failures taught you how not to balance a bike. Even though you had\n",
     "not learned to ride a bike (the goal), you learned a different goal, i.e., you learned how\n",
     "not to balance a bike. This is how we humans learn, right? We learn from failures, and this is\n",
     "the idea behind hindsight experience replay.\n",
    "\n",
    "\n",
     "Let us consider the same example given in the paper. Look at the FetchSlide environment\n",
     "shown in the figure below; the goal in this environment is to move the robotic arm and\n",
     "slide a puck across the table to hit the target (the small red circle).\n",
    "\n",
    "\n",
    "Image source: https://blog.openai.com/ingredients-for-robotics-research/"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "![title](images/B09792_13_01.png)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "In the first few trials, the agent definitely could not achieve the goal. So the agent would only\n",
     "receive a reward of -1, which tells it that it was doing something wrong and had not attained the goal."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "![title](images/B09792_13_02.png)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "But this doesn't mean that the agent has not learned anything. The agent has achieved a different\n",
     "goal, i.e., it has learned to move closer to our actual goal. So instead of considering the attempt a\n",
     "failure, we consider it as having achieved a different goal.\n",
     "\n",
     "If we repeat this process over several\n",
     "iterations, the agent will learn to achieve our actual goal. HER can be applied to any off-policy\n",
     "algorithm. In the paper, the performance of DDPG with HER is compared against DDPG without HER,\n",
     "and the results show that DDPG with HER converges more quickly than DDPG without HER.\n",
    "\n",
    "<br>\n",
    "You can see the performance of HER in this video https://youtu.be/Dz_HuzgMxzo."
   ]
  }
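  ,
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The core trick can be sketched in a few lines of Python. This is a minimal, illustrative sketch (not the paper's full implementation): it assumes each transition stores an `achieved_goal` alongside the desired `goal`, uses a hypothetical sparse `compute_reward` function, and applies HER's simplest \"final\" strategy, which relabels every transition with the goal actually achieved at the end of the episode."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "def compute_reward(achieved_goal, goal, tolerance=0.05):\n",
    "    # Sparse reward: 0 if the achieved goal is within tolerance of the\n",
    "    # desired goal, -1 otherwise (as in the FetchSlide-style tasks).\n",
    "    return 0.0 if np.linalg.norm(achieved_goal - goal) < tolerance else -1.0\n",
    "\n",
    "def her_relabel(episode, replay_buffer):\n",
    "    # Goal actually reached at the end of the (possibly failed) episode.\n",
    "    final_achieved = episode[-1][\"achieved_goal\"]\n",
    "    for t in episode:\n",
    "        # Store the transition with the original goal (usually reward -1)...\n",
    "        r = compute_reward(t[\"achieved_goal\"], t[\"goal\"])\n",
    "        replay_buffer.append((t[\"state\"], t[\"action\"], r, t[\"next_state\"], t[\"goal\"]))\n",
    "        # ...and again with the goal replaced by what was actually achieved,\n",
    "        # so the failed episode still yields successful (reward 0) experience\n",
    "        # in hindsight.\n",
    "        r_h = compute_reward(t[\"achieved_goal\"], final_achieved)\n",
    "        replay_buffer.append((t[\"state\"], t[\"action\"], r_h, t[\"next_state\"], final_achieved))"
   ]
  }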
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python [conda env:anaconda]",
   "language": "python",
   "name": "conda-env-anaconda-py"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 2
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython2",
   "version": "2.7.11"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
