{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Project Continuous Control - Report\n",
    "\n",
    "### DDPG Algorithm\n",
    "\n",
    "In this project we use the _DDPG_ algorithm (_Deep Deterministic Policy Gradient_). _DDPG_ is an algorithm that   \n",
    "concurrently learns a Q-function and a policy. It uses off-policy data and the Bellman equation to learn   \n",
    "the Q-function, and uses the Q-function to learn the policy. This dual mechanism is the _actor-critic method_. \n",
    "The DDPG algorithm uses two additional mechanisms: a _Replay Buffer_ and _Soft Updates_. \n",
    "\n",
    "### Goal of DDPG Agent \n",
    "\n",
    "The environment for this project involves controlling a **double-jointed arm** to reach target locations.     \n",
    "A reward of +0.1 is provided for each step that the agent's hand is in the goal location. Thus, the goal of      \n",
    "this agent is to maintain its position at the target location for as many time steps as possible. \n",
    "\n",
    "The observation space (i.e., state space) has 33 dimensions corresponding to position, rotation, velocity,    \n",
    "and angular velocities of the arm. The action space has 4 dimensions corresponding to torque applicable to    \n",
    "two joints. Every entry in the action vector should be a number between -1 and 1.\n",
    "\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Target networks"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The target networks are used for slow tracking of the learned networks. We create a copy of the _actor_ and _critic_ networks:    \n",
    "_actor_\\__target_ (say, with parameter vector _p'_) and _critic_\\__target_ (say, with parameter vector _w'_). The weights of    \n",
    "these _target networks_ are updated by having them slowly track the learned networks:    \n",
    "\n",
    "    p'  <--  p * \\tau + p' * (1 - \\tau)  \n",
    "    w'  <--  w * \\tau + w' * (1 - \\tau)\n",
    "\n",
    "We use a very small value for _\\tau_ (= 0.001). This means that the target values are constrained to change slowly, which greatly improves the stability of learning. This update is performed by the function _soft_\\__update_.   \n",
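    "\n",
    "A minimal sketch of this soft update, assuming plain PyTorch modules (the function and argument names are illustrative):\n",
    "\n",
    "    def soft_update(local_model, target_model, tau):\n",
    "        # Blend each target parameter toward the local one: p' <- tau*p + (1 - tau)*p'\n",
    "        for target_param, local_param in zip(target_model.parameters(), local_model.parameters()):\n",
    "            target_param.data.copy_(tau * local_param.data + (1.0 - tau) * target_param.data)\n",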
    "\n",
    "_\"This may slow learning, since the target network delays the propagation of value estimations.   \n",
    "However, in practice we found this was greatly outweighed by the stability of learning.\"     \n",
    "(\"Continuous control with deep reinforcement learning\", Lillicrap et al., 2015, arXiv:1509.02971)_  \n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### DDPG Neural Networks\n",
    "\n",
    "The DDPG algorithm uses 4 neural networks: _actor_\\__target_, _actor_\\__local_, _critic_\\__target_ and _critic_\\__local_:\n",
    "\n",
    "    actor_local = Actor(state_size, action_size, random_seed).to(device)\n",
    "    actor_target = Actor(state_size, action_size, random_seed).to(device)\n",
    "\n",
    "    critic_local = Critic(state_size, action_size, random_seed).to(device)\n",
    "    critic_target = Critic(state_size, action_size, random_seed).to(device)\n",
    "\n",
    "Classes _Actor_ and _Critic_ are provided in _model.py_. The typical behavior of the _actor_ and the _critic_\n",
    "is as follows:\n",
    "\n",
    "    actor_target(state) -> action\n",
    "    critic_target(state, action) -> Q-value\n",
    "    \n",
    "    actor_local(states) -> actions_pred\n",
    "    -critic_local(states, actions_pred) -> actor_loss"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Actor-Critic dual mechanism\n",
    "\n",
    "For each timestep _t,_ we do the following operations:\n",
    "\n",
    "Let __*S&nbsp;*__ be the current state. It is the input for the _Actor NN_. The output is determined by the policy \n",
    "\n",
    "![](images/policy_pi.png)\n",
    "\n",
    "where \\pi is the policy function, i.e., the distribution of the actions. The _Critic NN_ takes the state __*S&nbsp;*__ as input and outputs      \n",
    "the state-value function __*v(S,w)*__, that is, the _expected total reward_ for the agent starting from state __*S&nbsp;*__. Here, _\\theta_ is    \n",
    "the parameter vector of the _Actor NN_, and _w&nbsp;_ is the parameter vector of the _Critic NN_. The task is to train both networks, i.e.,   \n",
    "to find the optimal values for _\\theta_ and _w&nbsp;_. Using the policy _\\pi_ we get the action _A&nbsp;_, from the environment we get the reward _R&nbsp;_   \n",
    "and the next state __*S'&nbsp;*__. Then we compute the _TD estimate_: \n",
    " \n",
    "![](images/TD_estimate.png)\n",
    "\t\t \n",
    "Next, we use the _Critic_ to calculate the _advantage function_ _A(s, a)_:\n",
    "\n",
    "![](images/calc_advantage.png)\n",
    "\t\t\t\t \n",
    "Here, _\\gamma_ is the _discount factor_. The parameter _\\theta_ is updated by gradient ascent as follows:\n",
    "\n",
    "![](images/update_theta.png)\n",
    "\n",
    "The parameter _w&nbsp;_ is updated as follows:\n",
    "\n",
    "![](images/update_w.png)\n",
    "\t\t\n",
    "Here, \\alpha (resp. \\beta) is the learning rate for the _Actor NN_ (resp. _Critic NN_). Before moving to the next timestep, we update the state _S&nbsp;_ and scale the operator _I&nbsp;_ by the _discount factor_ \\gamma:\n",
    "\n",
    "![](images/next_state.png)\n",
    "\n",
    "At the start of the algorithm, the operator _I_ should be initialized to the identity operator. "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Update critic_local neural network (pseudocode)\n",
    "  \n",
    "     1. Get predicted next-state actions and Q-values from the actor and critic target neural networks.\n",
    "        actions_next = actor_target(next_states)\n",
    "        Q_targets_next = critic_target(next_states, actions_next)\n",
    "\n",
    "     2. Compute Q-targets for the current states (using the Bellman equation)\n",
    "        Q_targets = rewards + (gamma * Q_targets_next * (1 - dones))\n",
    "\n",
    "     3. Compute Q_expected and critic loss\n",
    "        Q_expected = critic_local(states, actions)\n",
    "        critic_loss = MSE_loss(Q_expected, Q_targets)\n",
    "\n",
    "     4. Minimize the critic_loss. The weights of the critic_local network are updated\n",
    "        by gradient descent and backpropagation. \n",
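    "\n",
    "A tiny numeric check of the target in step 2, in plain Python (the values are made up for illustration):\n",
    "\n",
    "    gamma = 0.99\n",
    "    rewards = [0.1, 0.0]\n",
    "    Q_targets_next = [1.0, 2.0]\n",
    "    dones = [0, 1]  # the bootstrap term is dropped on terminal steps\n",
    "    Q_targets = [r + gamma * q * (1 - d)\n",
    "                 for r, q, d in zip(rewards, Q_targets_next, dones)]\n",
    "    # Q_targets is approximately [1.09, 0.0]\n",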
    "        \n",
    "### Update actor_local neural network (pseudocode)\n",
    "\n",
    "    1. Compute actor loss\n",
    "        actions_pred = actor_local(states)\n",
    "        actor_loss = -critic_local(states, actions_pred).mean()\n",
    "\n",
    "    2. Minimize the actor_loss. The weights of the actor_local network are updated\n",
    "       by gradient descent and backpropagation.\n",
    "       \n",
    "See the method _learn()_ in _ddpg_\\__agent.py_.  \n",
    " \n",
    "### Architecture of the actor and critic networks\n",
    "\n",
    "Both the _actor_ and _critic_ classes implement a neural network    \n",
    "with 3 fully-connected layers and 2 rectified nonlinear (ReLU) layers. These networks are implemented   \n",
    "in PyTorch. The same architecture is used in the Udacity _model.py_ code for the Pendulum model with DDPG.   \n",
    "The dimensions of the fully-connected layers are as follows:\n",
    "\n",
    "for the _actor_:   \n",
    "Layer fc1, dimensions: state_size x fc1_units,   \n",
    "Layer fc2, dimensions: fc1_units x fc2_units,    \n",
    "Layer fc3, dimensions: fc2_units x action_size,\n",
    "\n",
    "for the _critic_:   \n",
    "Layer fcs1, dimensions: state_size x fcs1_units,  \n",
    "Layer fc2, dimensions: (fcs1_units + action_size) x fc2_units,   \n",
    "Layer fc3, dimensions: fc2_units x 1. \n",
    "\n",
    "Here, state_size = 33 and action_size = 4. The parameters fc1_units, fc2_units, and fcs1_units are all set to 128. \n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Hyperparameters\n",
    "\n",
    "     BUFFER_SIZE = int(1e6)  # replay buffer size    \n",
    "     BATCH_SIZE = 256        # minibatch size    \n",
    "     GAMMA = 0.99            # discount factor    \n",
    "     TAU = 1e-3              # for soft update of target parameters   \n",
    "     LR_ACTOR = 1e-3         # learning rate of the actor    \n",
    "     LR_CRITIC = 1e-3        # learning rate of the critic   \n",
    "     WEIGHT_DECAY = 0        # L2 weight decay   \n",
    "     EPSILON = 1.0           # epsilon noise parameter   \n",
    "     EPSILON_DECAY = 1e-6    # decay parameter of epsilon    \n",
    "     LEARNING_PERIOD = 20    # learning frequency      \n",
    "     UPDATE_FACTOR   = 10    # how much to learn    \n",
    "     \n",
    "Note that parameters LEARNING_PERIOD and UPDATE_FACTOR are critical for the **convergence** of the algorithm.    \n",
    "The corresponding code is in the function _step()_.    \n",
    "     \n",
    "     if len(self.memory) > BATCH_SIZE and timestep % LEARNING_PERIOD == 0:\n",
    "            for _ in range(UPDATE_FACTOR):\n",
    "                experiences = self.memory.sample()\n",
    "                self.learn(experiences, GAMMA)\n",
    "\n",
    "Thanks to Amita K. from the Udacity Knowledge forum for this great tip!\n",
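    "\n",
    "As a quick sanity check of this schedule in plain Python (the total step count is arbitrary):\n",
    "\n",
    "    LEARNING_PERIOD = 20\n",
    "    UPDATE_FACTOR = 10\n",
    "    updates = 0\n",
    "    for timestep in range(1, 1001):\n",
    "        if timestep % LEARNING_PERIOD == 0:\n",
    "            updates += UPDATE_FACTOR  # 10 learning passes every 20 environment steps\n",
    "    # updates == 500: one learning pass per two environment steps on average\n",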
    "\n",
    "### Training the Agent\n",
    "\n",
    "On my local machine with GPU, the desired average reward was achieved in 195 episodes in 1 hour and 11 minutes.\n",
    "\n",
    "![](score_graph.png) "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Full log over all episodes\n",
    "\n",
    "Episode: 1, Score: 0.71, Max: 2.04, Min: 0.02     \n",
    "Episode: 2, Score: 0.80, Max: 2.12, Min: 0.10      \n",
    "Episode: 3, Score: 0.76, Max: 1.59, Min: 0.00    \n",
    "Episode: 4, Score: 0.72, Max: 1.46, Min: 0.13     \n",
    "Episode: 5, Score: 1.17, Max: 2.48, Min: 0.26     \n",
    "Episode: 6, Score: 0.70, Max: 1.75, Min: 0.14     \n",
    "Episode: 7, Score: 1.12, Max: 2.21, Min: 0.09    \n",
    "Episode: 8, Score: 1.13, Max: 2.09, Min: 0.15     \n",
    "Episode: 9, Score: 1.26, Max: 2.90, Min: 0.00     \n",
    "Episode: 10, Score: 1.09, Max: 2.44, Min: 0.13     \n",
    "*** Episode 10\tAverage Score: 0.95, Time: 00:02:42 ***    \n",
    "Episode: 11, Score: 1.88, Max: 3.97, Min: 0.31      \n",
    "Episode: 12, Score: 1.51, Max: 2.79, Min: 0.18      \n",
    "Episode: 13, Score: 1.51, Max: 3.66, Min: 0.21      \n",
    "Episode: 14, Score: 1.43, Max: 2.62, Min: 0.39      \n",
    "Episode: 15, Score: 1.78, Max: 3.42, Min: 0.75      \n",
    "Episode: 16, Score: 1.93, Max: 3.17, Min: 0.90      \n",
    "Episode: 17, Score: 1.90, Max: 3.69, Min: 0.73    \n",
    "Episode: 18, Score: 1.97, Max: 4.75, Min: 0.45    \n",
    "Episode: 19, Score: 1.95, Max: 4.05, Min: 0.16    \n",
    "Episode: 20, Score: 1.91, Max: 4.26, Min: 0.16    \n",
    "*** Episode 20\tAverage Score: 1.36, Time: 00:05:31 ***    \n",
    "Episode: 21, Score: 1.96, Max: 3.98, Min: 0.52    \n",
    "Episode: 22, Score: 2.28, Max: 3.93, Min: 0.38    \n",
    "Episode: 23, Score: 2.47, Max: 5.23, Min: 0.64    \n",
    "Episode: 24, Score: 2.27, Max: 6.58, Min: 0.53    \n",
    "Episode: 25, Score: 2.36, Max: 3.80, Min: 0.54     \n",
    "Episode: 26, Score: 2.76, Max: 4.68, Min: 1.11    \n",
    "Episode: 27, Score: 3.21, Max: 5.76, Min: 1.26    \n",
    "Episode: 28, Score: 3.72, Max: 5.98, Min: 1.74    \n",
    "Episode: 29, Score: 3.72, Max: 8.61, Min: 1.60    \n",
    "Episode: 30, Score: 3.54, Max: 5.42, Min: 1.83    \n",
    "*** Episode 30\tAverage Score: 1.85, Time: 00:08:33 ***      \n",
    "Episode: 31, Score: 4.06, Max: 6.46, Min: 1.91    \n",
    "Episode: 32, Score: 4.48, Max: 6.67, Min: 2.78     \n",
    "Episode: 33, Score: 4.61, Max: 8.28, Min: 1.54    \n",
    "Episode: 34, Score: 4.35, Max: 9.52, Min: 1.05    \n",
    "Episode: 35, Score: 4.73, Max: 6.91, Min: 3.07    \n",
    "Episode: 36, Score: 5.08, Max: 9.59, Min: 1.78    \n",
    "Episode: 37, Score: 4.67, Max: 8.35, Min: 2.13     \n",
    "Episode: 38, Score: 3.87, Max: 6.79, Min: 0.83    \n",
    "Episode: 39, Score: 5.08, Max: 14.40, Min: 1.86    \n",
    "Episode: 40, Score: 4.47, Max: 6.14, Min: 1.73    \n",
    "*** Episode 40\tAverage Score: 2.52, Time: 00:12:13 ***    \n",
    "Episode: 41, Score: 4.76, Max: 8.42, Min: 2.32    \n",
    "Episode: 42, Score: 5.85, Max: 10.02, Min: 2.01    \n",
    "Episode: 43, Score: 4.86, Max: 8.24, Min: 1.36   \n",
    "Episode: 44, Score: 5.56, Max: 8.69, Min: 4.16  \n",
    "Episode: 45, Score: 5.86, Max: 11.20, Min: 2.66  \n",
    "Episode: 46, Score: 5.50, Max: 8.03, Min: 2.99  \n",
    "Episode: 47, Score: 5.51, Max: 8.64, Min: 3.27  \n",
    "Episode: 48, Score: 6.51, Max: 9.83, Min: 3.18  \n",
    "Episode: 49, Score: 5.97, Max: 7.99, Min: 3.57  \n",
    "Episode: 50, Score: 6.85, Max: 13.71, Min: 3.88  \n",
    "*** Episode 50\tAverage Score: 3.16, Time: 00:15:53 ***  \n",
    "Episode: 51, Score: 6.53, Max: 9.68, Min: 3.14  \n",
    "Episode: 52, Score: 7.99, Max: 16.33, Min: 3.70  \n",
    "Episode: 53, Score: 7.04, Max: 10.94, Min: 3.69  \n",
    "Episode: 54, Score: 7.78, Max: 10.47, Min: 5.41  \n",
    "Episode: 55, Score: 8.55, Max: 12.90, Min: 5.83  \n",
    "Episode: 56, Score: 8.47, Max: 15.28, Min: 4.61  \n",
    "Episode: 57, Score: 8.42, Max: 11.66, Min: 5.74  \n",
    "Episode: 58, Score: 8.30, Max: 12.44, Min: 3.46  \n",
    "Episode: 59, Score: 7.26, Max: 10.25, Min: 1.86  \n",
    "Episode: 60, Score: 7.76, Max: 12.65, Min: 3.42  \n",
    "*** Episode 60\tAverage Score: 3.94, Time: 00:19:40 ***  \n",
    "Episode: 61, Score: 9.27, Max: 12.87, Min: 6.00  \n",
    "Episode: 62, Score: 7.94, Max: 14.42, Min: 5.63  \n",
    "Episode: 63, Score: 7.71, Max: 15.13, Min: 4.35  \n",
    "Episode: 64, Score: 8.10, Max: 13.90, Min: 2.92  \n",
    "Episode: 65, Score: 8.86, Max: 12.54, Min: 5.28  \n",
    "Episode: 66, Score: 9.26, Max: 13.45, Min: 4.51  \n",
    "Episode: 67, Score: 8.43, Max: 15.48, Min: 4.05  \n",
    "Episode: 68, Score: 9.31, Max: 14.45, Min: 4.41  \n",
    "Episode: 69, Score: 9.57, Max: 14.37, Min: 5.75  \n",
    "Episode: 70, Score: 10.49, Max: 17.76, Min: 6.09  \n",
    "*** Episode 70\tAverage Score: 4.65, Time: 00:23:29 ***  \n",
    "Episode: 71, Score: 9.73, Max: 16.96, Min: 2.50  \n",
    "Episode: 72, Score: 9.23, Max: 15.43, Min: 5.28  \n",
    "Episode: 73, Score: 9.34, Max: 12.93, Min: 4.95  \n",
    "Episode: 74, Score: 9.48, Max: 14.23, Min: 2.60  \n",
    "Episode: 75, Score: 10.19, Max: 18.09, Min: 5.57  \n",
    "Episode: 76, Score: 8.83, Max: 16.50, Min: 2.15  \n",
    "Episode: 77, Score: 11.30, Max: 31.48, Min: 5.08  \n",
    "Episode: 78, Score: 11.26, Max: 19.91, Min: 4.80  \n",
    "Episode: 79, Score: 10.47, Max: 15.64, Min: 1.81  \n",
    "Episode: 80, Score: 12.48, Max: 25.26, Min: 6.37  \n",
    "*** Episode 80\tAverage Score: 5.34, Time: 00:27:15 ***   \n",
    "Episode: 81, Score: 9.90, Max: 15.15, Min: 0.00   \n",
    "Episode: 82, Score: 13.04, Max: 37.81, Min: 5.46    \n",
    "Episode: 83, Score: 12.24, Max: 15.81, Min: 7.61  \n",
    "Episode: 84, Score: 11.99, Max: 16.75, Min: 3.53  \n",
    "Episode: 85, Score: 11.20, Max: 16.94, Min: 2.18  \n",
    "Episode: 86, Score: 12.56, Max: 17.36, Min: 7.74  \n",
    "Episode: 87, Score: 12.38, Max: 30.75, Min: 2.89  \n",
    "Episode: 88, Score: 12.60, Max: 22.23, Min: 6.41  \n",
    "Episode: 89, Score: 12.16, Max: 25.83, Min: 2.36  \n",
    "Episode: 90, Score: 12.74, Max: 19.56, Min: 6.18  \n",
    "*** Episode 90\tAverage Score: 6.09, Time: 00:31:03 ***  \n",
    "Episode: 91, Score: 15.16, Max: 22.37, Min: 8.05  \n",
    "Episode: 92, Score: 16.26, Max: 31.84, Min: 8.09  \n",
    "Episode: 93, Score: 15.31, Max: 20.67, Min: 1.89  \n",
    "Episode: 94, Score: 15.67, Max: 22.35, Min: 6.24  \n",
    "Episode: 95, Score: 16.77, Max: 22.11, Min: 8.84  \n",
    "Episode: 96, Score: 14.93, Max: 26.66, Min: 1.62  \n",
    "Episode: 97, Score: 16.58, Max: 22.84, Min: 8.56  \n",
    "Episode: 98, Score: 16.64, Max: 21.95, Min: 10.09  \n",
    "Episode: 99, Score: 19.01, Max: 39.24, Min: 7.93  \n",
    "Episode: 100, Score: 17.08, Max: 24.32, Min: 3.53  \n",
    "*** Episode 100\tAverage Score: 7.12, Time: 00:34:51 ***  \n",
    "Episode: 101, Score: 17.95, Max: 22.91, Min: 11.00  \n",
    "Episode: 102, Score: 16.56, Max: 24.24, Min: 1.67  \n",
    "Episode: 103, Score: 21.07, Max: 38.31, Min: 13.45  \n",
    "Episode: 104, Score: 18.64, Max: 27.59, Min: 4.35  \n",
    "Episode: 105, Score: 20.56, Max: 25.43, Min: 14.37  \n",
    "Episode: 106, Score: 19.86, Max: 26.74, Min: 13.93  \n",
    "Episode: 107, Score: 19.58, Max: 29.77, Min: 12.95  \n",
    "Episode: 108, Score: 20.27, Max: 35.10, Min: 10.09  \n",
    "Episode: 109, Score: 20.81, Max: 29.72, Min: 4.26  \n",
    "Episode: 110, Score: 21.52, Max: 28.49, Min: 11.96  \n",
    "*** Episode 110\tAverage Score: 8.99, Time: 00:38:39 ***   \n",
    "Episode: 111, Score: 20.78, Max: 32.40, Min: 10.49  \n",
    "Episode: 112, Score: 21.36, Max: 29.38, Min: 14.16  \n",
    "Episode: 113, Score: 21.41, Max: 39.38, Min: 10.43  \n",
    "Episode: 114, Score: 23.81, Max: 30.33, Min: 16.39  \n",
    "Episode: 115, Score: 25.42, Max: 38.99, Min: 18.17  \n",
    "Episode: 116, Score: 23.45, Max: 32.54, Min: 14.20  \n",
    "Episode: 117, Score: 24.80, Max: 39.39, Min: 13.74  \n",
    "Episode: 118, Score: 25.33, Max: 39.52, Min: 9.36  \n",
    "Episode: 119, Score: 24.78, Max: 39.53, Min: 12.81  \n",
    "Episode: 120, Score: 26.61, Max: 33.77, Min: 16.30  \n",
    "*** Episode 120\tAverage Score: 11.19, Time: 00:42:28 ***  \n",
    "Episode: 121, Score: 24.61, Max: 31.65, Min: 12.96  \n",
    "Episode: 122, Score: 25.64, Max: 33.35, Min: 14.13  \n",
    "Episode: 123, Score: 25.77, Max: 32.62, Min: 14.79  \n",
    "Episode: 124, Score: 26.46, Max: 39.47, Min: 11.65  \n",
    "Episode: 125, Score: 26.07, Max: 31.84, Min: 17.42  \n",
    "Episode: 126, Score: 25.32, Max: 38.31, Min: 16.42  \n",
    "Episode: 127, Score: 26.99, Max: 33.45, Min: 20.29  \n",
    "Episode: 128, Score: 26.67, Max: 34.61, Min: 18.10  \n",
    "Episode: 129, Score: 26.91, Max: 37.99, Min: 14.14  \n",
    "Episode: 130, Score: 25.93, Max: 38.60, Min: 6.53  \n",
    "*** Episode 130\tAverage Score: 13.51, Time: 00:46:15 ***  \n",
    "Episode: 131, Score: 27.38, Max: 32.30, Min: 21.22  \n",
    "Episode: 132, Score: 27.39, Max: 35.68, Min: 15.50  \n",
    "Episode: 133, Score: 29.01, Max: 37.08, Min: 18.37  \n",
    "Episode: 134, Score: 27.32, Max: 35.65, Min: 18.57  \n",
    "Episode: 135, Score: 28.04, Max: 34.76, Min: 19.73  \n",
    "Episode: 136, Score: 29.73, Max: 33.93, Min: 23.85  \n",
    "Episode: 137, Score: 29.27, Max: 33.62, Min: 18.22  \n",
    "Episode: 138, Score: 32.11, Max: 38.77, Min: 18.49  \n",
    "Episode: 139, Score: 30.12, Max: 34.20, Min: 19.54  \n",
    "Episode: 140, Score: 30.16, Max: 33.26, Min: 24.46  \n",
    "*** Episode 140\tAverage Score: 15.96, Time: 00:50:02 ***  \n",
    "Episode: 141, Score: 30.81, Max: 39.37, Min: 16.92  \n",
    "Episode: 142, Score: 29.94, Max: 37.91, Min: 20.18  \n",
    "Episode: 143, Score: 31.89, Max: 38.72, Min: 24.29  \n",
    "Episode: 144, Score: 32.81, Max: 36.73, Min: 27.04  \n",
    "Episode: 145, Score: 31.48, Max: 36.08, Min: 26.59  \n",
    "Episode: 146, Score: 32.75, Max: 36.67, Min: 27.19  \n",
    "Episode: 147, Score: 31.96, Max: 37.55, Min: 26.55  \n",
    "Episode: 148, Score: 32.50, Max: 38.13, Min: 25.44  \n",
    "Episode: 149, Score: 32.21, Max: 37.37, Min: 23.54  \n",
    "Episode: 150, Score: 31.09, Max: 36.01, Min: 20.85  \n",
    "*** Episode 150\tAverage Score: 18.56, Time: 00:53:50 ***  \n",
    "Episode: 151, Score: 30.04, Max: 35.32, Min: 22.66  \n",
    "Episode: 152, Score: 32.84, Max: 36.18, Min: 25.95  \n",
    "Episode: 153, Score: 32.63, Max: 36.52, Min: 24.48  \n",
    "Episode: 154, Score: 35.48, Max: 39.50, Min: 25.82  \n",
    "Episode: 155, Score: 34.71, Max: 38.77, Min: 29.35  \n",
    "Episode: 156, Score: 34.20, Max: 39.17, Min: 27.48  \n",
    "Episode: 157, Score: 36.20, Max: 39.27, Min: 22.97  \n",
    "Episode: 158, Score: 36.65, Max: 39.30, Min: 33.21  \n",
    "Episode: 159, Score: 35.61, Max: 38.84, Min: 27.36  \n",
    "Episode: 160, Score: 35.07, Max: 39.04, Min: 28.34  \n",
    "*** Episode 160\tAverage Score: 21.22, Time: 00:57:38 ***  \n",
    "Episode: 161, Score: 36.24, Max: 39.57, Min: 29.96  \n",
    "Episode: 162, Score: 35.79, Max: 39.55, Min: 29.40  \n",
    "Episode: 163, Score: 36.97, Max: 39.54, Min: 29.14  \n",
    "Episode: 164, Score: 35.53, Max: 39.65, Min: 21.76  \n",
    "Episode: 165, Score: 35.76, Max: 39.02, Min: 27.52  \n",
    "Episode: 166, Score: 37.04, Max: 39.62, Min: 31.07  \n",
    "Episode: 167, Score: 36.38, Max: 39.30, Min: 25.66  \n",
    "Episode: 168, Score: 36.98, Max: 39.48, Min: 25.20  \n",
    "Episode: 169, Score: 36.67, Max: 39.61, Min: 28.32  \n",
    "Episode: 170, Score: 38.70, Max: 39.66, Min: 35.71  \n",
    "*** Episode 170\tAverage Score: 23.99, Time: 01:01:27 ***  \n",
    "Episode: 171, Score: 35.93, Max: 39.45, Min: 25.32  \n",
    "Episode: 172, Score: 37.77, Max: 39.54, Min: 32.79  \n",
    "Episode: 173, Score: 37.36, Max: 39.40, Min: 33.56  \n",
    "Episode: 174, Score: 36.26, Max: 39.14, Min: 32.26  \n",
    "Episode: 175, Score: 37.81, Max: 39.20, Min: 36.06  \n",
    "Episode: 176, Score: 37.71, Max: 39.45, Min: 32.99  \n",
    "Episode: 177, Score: 38.18, Max: 39.52, Min: 33.94  \n",
    "Episode: 178, Score: 36.87, Max: 39.55, Min: 34.29  \n",
    "Episode: 179, Score: 37.56, Max: 39.53, Min: 32.90  \n",
    "Episode: 180, Score: 37.20, Max: 39.42, Min: 33.03  \n",
    "*** Episode 180\tAverage Score: 26.69, Time: 01:05:14 ***  \n",
    "Episode: 181, Score: 37.42, Max: 39.45, Min: 32.59  \n",
    "Episode: 182, Score: 36.25, Max: 39.34, Min: 30.82  \n",
    "Episode: 183, Score: 37.19, Max: 39.49, Min: 31.03  \n",
    "Episode: 184, Score: 37.23, Max: 39.47, Min: 32.66  \n",
    "Episode: 185, Score: 37.10, Max: 39.50, Min: 34.44  \n",
    "Episode: 186, Score: 36.28, Max: 39.19, Min: 30.47  \n",
    "Episode: 187, Score: 36.13, Max: 39.39, Min: 33.07  \n",
    "Episode: 188, Score: 34.60, Max: 37.87, Min: 29.54  \n",
    "Episode: 189, Score: 35.65, Max: 39.04, Min: 24.96  \n",
    "Episode: 190, Score: 35.49, Max: 39.23, Min: 29.52  \n",
    "*** Episode 190\tAverage Score: 29.12, Time: 01:09:03 ***  \n",
    "Episode: 191, Score: 36.78, Max: 39.39, Min: 33.28  \n",
    "Episode: 192, Score: 36.76, Max: 39.52, Min: 32.55  \n",
    "Episode: 193, Score: 37.60, Max: 39.50, Min: 34.84  \n",
    "Episode: 194, Score: 38.00, Max: 39.30, Min: 35.72  \n",
    "Episode: 195, Score: 38.69, Max: 39.49, Min: 36.85  \n",
    "*** Episode 195\tAverage Score: 30.20, Time: 01:10:57 ***  \n",
    "Environment solved!\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Future Ideas for Improvement"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "1. An improvement could possibly be achieved by adding layers to the Actor and Critic neural networks. Some papers state     \n",
    "   that Batch Normalization can accelerate deep network training, \n",
    "   for example, [here](https://medium.com/@ilango100/batch-normalization-speed-up-neural-network-training-245e39a62f85) and [here](https://arxiv.org/pdf/1502.03167.pdf).\n",
    "\n",
    "2. Check different values for hyperparameters such as BATCH_SIZE, LR_ACTOR,  LR_CRITIC, LEARNING_PERIOD, UPDATE_FACTOR.    \n",
    " \n",
    "3. Instead of DDPG, other models can be considered, such as [PPO](https://openai.com/blog/openai-baselines-ppo/), \n",
    "   [A3C](https://blog.goodaudience.com/a3c-what-it-is-what-i-built-6b91fe5ec09c) and others."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python [conda env:Anaconda_3]",
   "language": "python",
   "name": "conda-env-Anaconda_3-py"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.5.6"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
