{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Intro & Resources\n",
    "* [Sutton/Barto ebook](https://goo.gl/7utZaz); [Silver online course](https://goo.gl/AWcMFW)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Learning to Optimize Rewards\n",
    "* Definitions: software *agents* make *observations* & take *actions* within an *environment*. In return they can receive *rewards* (positive or negative)."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Policy Search\n",
    "* **Policy**: the algorithm used by an agent to determine a next action."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### OpenAI Gym ([link](https://gym.openai.com/))\n",
    "* A toolkit for various simulated environments."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Requirement already up-to-date: gym in /home/bjpcjp/anaconda3/lib/python3.5/site-packages\n",
      "Requirement already up-to-date: requests>=2.0 in /home/bjpcjp/anaconda3/lib/python3.5/site-packages (from gym)\n",
      "Requirement already up-to-date: pyglet>=1.2.0 in /home/bjpcjp/anaconda3/lib/python3.5/site-packages (from gym)\n",
      "Requirement already up-to-date: six in /home/bjpcjp/anaconda3/lib/python3.5/site-packages (from gym)\n",
      "Requirement already up-to-date: numpy>=1.10.4 in /home/bjpcjp/anaconda3/lib/python3.5/site-packages (from gym)\n"
     ]
    }
   ],
   "source": [
    "!pip3 install --upgrade gym"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "[2017-04-27 13:05:47,311] Making new env: CartPole-v0\n"
     ]
    }
   ],
   "source": [
    "import gym\n",
    "env = gym.make(\"CartPole-v0\")\n",
    "obs = env.reset()\n",
    "obs\n",
    "env.render()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "* **make()** creates the environment\n",
    "* **reset()** returns the first observation\n",
    "* **CartPole**: each observation is a 1D NumPy array: (horizontal position, velocity, angle, angular velocity)\n",
    "![cartpole](pics/cartpole.png)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(1, 1, 3)"
      ]
     },
     "execution_count": 3,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "img = env.render(mode=\"rgb_array\")\n",
    "img.shape"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "Discrete(2)"
      ]
     },
     "execution_count": 4,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# what actions are possible?\n",
    "# in this case: 0 = accelerate left, 1 = accelerate right\n",
    "env.action_space"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(array([-0.04061536,  0.1486962 , -0.01966318, -0.29249162]), 1.0, False, {})"
      ]
     },
     "execution_count": 5,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# take action 1 (accelerate right) and inspect the result\n",
    "action = 1\n",
    "obs, reward, done, info = env.step(action)\n",
    "obs, reward, done, info"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "* new observation:\n",
    "    * hpos = obs[0] < 0\n",
    "    * velocity = obs[1] > 0: cart moving right\n",
    "    * angle = obs[2] < 0: pole leaning left\n",
    "    * angular velocity = obs[3] < 0: pole tilting further left\n",
    "* reward = 1.0 (CartPole grants +1 per step survived)\n",
    "* done = False (episode not over)\n",
    "* info = {} (empty dict; environment-specific debug info)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(41.579999999999998, 8.5249985337242151, 25.0, 62.0)"
      ]
     },
     "execution_count": 6,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# example policy: \n",
    "# (1) accelerate left when leaning left, (2) accelerate right when leaning right\n",
    "# average reward over 500 episodes?\n",
    "\n",
    "def basic_policy(obs):\n",
    "    angle = obs[2]\n",
    "    return 0 if angle < 0 else 1\n",
    "\n",
    "totals = []\n",
    "for episode in range(500):\n",
    "    episode_rewards = 0\n",
    "    obs = env.reset()\n",
    "    for step in range(1000): # 1000 steps max, we don't want to run forever\n",
    "        action = basic_policy(obs)\n",
    "        obs, reward, done, info = env.step(action)\n",
    "        episode_rewards += reward\n",
    "        if done:\n",
    "            break\n",
    "    totals.append(episode_rewards)\n",
    "\n",
    "import numpy as np\n",
    "np.mean(totals), np.std(totals), np.min(totals), np.max(totals)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### NN Policies\n",
    "* observations as inputs; outputs = a probability for each possible action; the action to execute is sampled from those probabilities.\n",
    "* this approach lets the agent balance **exploring new actions** against **exploiting known good actions**.\n",
    "\n",
    "### Evaluating Actions: Credit Assignment problem\n",
    "* Reinforcement Learning (RL) training not like supervised learning. \n",
    "* RL feedback is via rewards (often sparse & delayed)\n",
    "* How to determine which previous steps were \"good\" or \"bad\"? (aka the \"*credit assignment problem*\")\n",
    "* Common tactic: applying a **discount rate** to older rewards.\n",
    "\n",
    "* Use normalization across many episodes to increase score reliability. \n",
    "\n",
    "NN Policy | Discounts & Rewards\n",
    "- | -\n",
    "![nn-policy](pics/nn-policy.png) | ![discount-rewards](pics/discount-rewards.png)\n",
    "\n"
   ]
  },
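  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a worked example of the discount rate (a sketch, not from the text): with discount rate 0.8, the reward sequence 10, 0, -50 is worth 10 + 0(0.8) - 50(0.8)^2 = -22 from the first step."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "# discounted return: a reward k steps in the future is weighted by rate**k\n",
    "def discounted_return(rewards, rate):\n",
    "    return sum(r * rate**k for k, r in enumerate(rewards))\n",
    "\n",
    "discounted_return([10, 0, -50], 0.8)   # approximately -22.0\n"
   ]
  },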
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "import tensorflow as tf\n",
    "from tensorflow.contrib.layers import fully_connected\n",
    "\n",
    "# 1. Specify the neural network architecture\n",
    "n_inputs = 4                            # == env.observation_space.shape[0]\n",
    "n_hidden = 4                            # simple task, don't need more hidden neurons\n",
    "n_outputs = 1                           # only output prob(accelerating left)\n",
    "initializer = tf.contrib.layers.variance_scaling_initializer()\n",
    "\n",
    "# 2. Build the neural network\n",
    "X = tf.placeholder(\n",
    "    tf.float32, shape=[None, n_inputs])\n",
    "\n",
    "hidden = fully_connected(\n",
    "    X, n_hidden, \n",
    "    activation_fn=tf.nn.elu,\n",
    "    weights_initializer=initializer)\n",
    "\n",
    "logits = fully_connected(\n",
    "    hidden, n_outputs, \n",
    "    activation_fn=None,\n",
    "    weights_initializer=initializer)\n",
    "\n",
    "outputs = tf.nn.sigmoid(logits)          # logistic (sigmoid) ==> return 0.0-1.0\n",
    "\n",
    "# 3. Select a random action based on the estimated probabilities\n",
    "p_left_and_right = tf.concat(\n",
    "    axis=1, values=[outputs, 1 - outputs])\n",
    "\n",
    "action = tf.multinomial(\n",
    "    tf.log(p_left_and_right), \n",
    "    num_samples=1)\n",
    "\n",
    "init = tf.global_variables_initializer()"
   ]
  },
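  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A minimal NumPy sketch of step 3 (an illustration of the sampling idea, not the TF graph above): draw action 0 (left) with probability p_left, else action 1 (right)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "def sample_action(p_left, rng):\n",
    "    # action 0 (left) with probability p_left, action 1 (right) otherwise\n",
    "    return 0 if rng.rand() < p_left else 1\n",
    "\n",
    "rng = np.random.RandomState(42)\n",
    "actions = [sample_action(0.7, rng) for _ in range(10000)]\n",
    "actions.count(0) / 10000   # close to 0.7\n"
   ]
  },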
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Policy Gradient (PG) algorithms\n",
    "* example: [the REINFORCE algorithm (Williams, 1992)](https://goo.gl/tUe4Sh)\n"
   ]
  },
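  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The core idea (standard REINFORCE, stated here for reference): make actions that were followed by high discounted scores more likely, by ascending the gradient\n",
    "$$\\nabla_\\theta J(\\theta) \\approx \\frac{1}{m} \\sum_{i=1}^{m} \\sum_t r_t^{(i)} \\nabla_\\theta \\log \\pi_\\theta\\big(a_t^{(i)} \\mid s_t^{(i)}\\big)$$\n",
    "where $r_t^{(i)}$ is the discounted, normalized score following the action taken at step $t$ of episode $i$."
   ]
  },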
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Markov Decision processes (MDPs)\n",
    "\n",
    "* Markov chains = stochastic processes with no memory: a fixed number of states, random transitions between them\n",
    "* Markov decision processes = like Markov chains, but an agent can choose an action; transition probabilities depend on the chosen action; transitions can return rewards/punishments.\n",
    "* Goal: find the policy with maximum total reward over time.\n",
    "\n",
    "Markov Chain | Markov Decision Process\n",
    "- | -\n",
    "![markov-chain](pics/markov-chain.png) | ![alt](pics/markov-decision-process.png)\n",
    "\n",
    "* **Bellman Optimality Equation**: a way to compute the optimal state value of any state *s*.\n",
    "* Knowing optimal state values is useful, but doesn't tell the agent what to do. The **Q-Value Iteration algorithm** helps solve this: the optimal Q-Value of a state-action pair is the sum of discounted future rewards the agent can expect, on average, after choosing that action and then acting optimally.\n"
   ]
  },
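  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For reference, with transition probabilities $T$, rewards $R$, and discount rate $\\gamma$ (the same quantities the code below defines):\n",
    "$$V^*(s) = \\max_a \\sum_{s'} T(s,a,s') \\big[ R(s,a,s') + \\gamma V^*(s') \\big]$$\n",
    "and the Q-Value Iteration update it induces:\n",
    "$$Q_{k+1}(s,a) \\leftarrow \\sum_{s'} T(s,a,s') \\big[ R(s,a,s') + \\gamma \\max_{a'} Q_k(s',a') \\big]$$"
   ]
  },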
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Q: \n",
      " [[ 21.88646117  20.79149867  16.854807  ]\n",
      " [  1.10804034         -inf   1.16703135]\n",
      " [        -inf  53.8607061          -inf]]\n",
      "Optimal action for each state:\n",
      " [0 2 1]\n"
     ]
    }
   ],
   "source": [
    "# Define MDP:\n",
    "\n",
    "nan=np.nan # represents impossible actions\n",
    "T = np.array([ # shape=[s, a, s']\n",
    "        [[0.7, 0.3, 0.0], [1.0, 0.0, 0.0], [0.8, 0.2, 0.0]],\n",
    "        [[0.0, 1.0, 0.0], [nan, nan, nan], [0.0, 0.0, 1.0]],\n",
    "        [[nan, nan, nan], [0.8, 0.1, 0.1], [nan, nan, nan]],\n",
    "        ])\n",
    "\n",
    "R = np.array([ # shape=[s, a, s']\n",
    "        [[10., 0.0, 0.0], [0.0, 0.0, 0.0], [0.0, 0.0, 0.0]],\n",
    "        [[10., 0.0, 0.0], [nan, nan, nan], [0.0, 0.0, -50.]],\n",
    "        [[nan, nan, nan], [40., 0.0, 0.0], [nan, nan, nan]],\n",
    "        ])\n",
    "\n",
    "possible_actions = [[0, 1, 2], [0, 2], [1]]\n",
    "\n",
    "# run Q-Value Iteration algo\n",
    "\n",
    "Q = np.full((3, 3), -np.inf)\n",
    "for state, actions in enumerate(possible_actions):\n",
    "    Q[state, actions] = 0.0 # Initial value = 0.0, for all possible actions\n",
    "\n",
    "learning_rate = 0.01\n",
    "discount_rate = 0.95\n",
    "n_iterations = 100\n",
    "\n",
    "for iteration in range(n_iterations):\n",
    "    Q_prev = Q.copy()\n",
    "    for s in range(3):\n",
    "        for a in possible_actions[s]:\n",
    "            Q[s, a] = np.sum([\n",
    "                T[s, a, sp] * (R[s, a, sp] + discount_rate * np.max(Q_prev[sp]))\n",
    "                for sp in range(3)\n",
    "                ])\n",
    "            \n",
    "print(\"Q: \\n\",Q)\n",
    "print(\"Optimal action for each state:\\n\",np.argmax(Q, axis=1))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Q: \n",
      " [[  1.89189499e+01   1.70270580e+01   1.36216526e+01]\n",
      " [  3.09979853e-05             -inf  -4.87968388e+00]\n",
      " [            -inf   5.01336811e+01             -inf]]\n",
      "Optimal action for each state:\n",
      " [0 0 1]\n"
     ]
    }
   ],
   "source": [
    "# change discount rate to 0.9, see how policy changes:\n",
    "\n",
    "discount_rate = 0.90\n",
    "\n",
    "for iteration in range(n_iterations):\n",
    "    Q_prev = Q.copy()\n",
    "    for s in range(3):\n",
    "        for a in possible_actions[s]:\n",
    "            Q[s, a] = np.sum([\n",
    "                T[s, a, sp] * (R[s, a, sp] + discount_rate * np.max(Q_prev[sp]))\n",
    "                for sp in range(3)\n",
    "                ])\n",
    "            \n",
    "print(\"Q: \\n\",Q)\n",
    "print(\"Optimal action for each state:\\n\",np.argmax(Q, axis=1))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Temporal Difference Learning & Q-Learning\n",
    "* In general - agent has no knowledge of transition probabilities or rewards\n",
    "* **Temporal Difference Learning** (TD Learning) is similar to value iteration, but accounts for this lack of knowledge.\n",
    "* The algorithm tracks a running average of the most recent rewards plus anticipated future rewards.\n",
    "\n",
    "* **Q-Learning** algorithm adaptation of Q-Value Iteration where initial transition probabilities & rewards are unknown."
   ]
  },
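  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "In symbols, with learning rate $\\alpha$, the Q-Learning running-average update is:\n",
    "$$Q(s,a) \\leftarrow (1-\\alpha)\\,Q(s,a) + \\alpha \\big( r + \\gamma \\max_{a'} Q(s',a') \\big)$$"
   ]
  },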
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Q: \n",
      " [[             -inf   2.47032823e-323              -inf]\n",
      " [  0.00000000e+000              -inf   0.00000000e+000]\n",
      " [             -inf   0.00000000e+000              -inf]]\n",
      "Optimal action for each state:\n",
      " [1 0 1]\n"
     ]
    }
   ],
   "source": [
    "import numpy.random as rnd\n",
    "\n",
    "learning_rate0 = 0.05\n",
    "learning_rate_decay = 0.1\n",
    "n_iterations = 20000\n",
    "\n",
    "s = 0                         # start in state 0\n",
    "Q = np.full((3, 3), -np.inf)  # -inf for impossible actions\n",
    "for state, actions in enumerate(possible_actions):\n",
    "    Q[state, actions] = 0.0   # initial value = 0.0 for all possible actions\n",
    "\n",
    "for iteration in range(n_iterations):\n",
    "    a = rnd.choice(possible_actions[s])  # choose an action (randomly)\n",
    "    sp = rnd.choice(range(3), p=T[s, a]) # pick next state using T[s, a]\n",
    "    reward = R[s, a, sp]\n",
    "\n",
    "    learning_rate = learning_rate0 / (1 + iteration * learning_rate_decay)\n",
    "\n",
    "    # running average: keep (1 - alpha) of the old estimate,\n",
    "    # blend in alpha times the new TD target\n",
    "    Q[s, a] = ((1 - learning_rate) * Q[s, a] +\n",
    "               learning_rate * (reward + discount_rate * np.max(Q[sp])))\n",
    "\n",
    "    s = sp  # move to the next state\n",
    "\n",
    "print(\"Q: \\n\", Q)\n",
    "print(\"Optimal action for each state:\\n\", np.argmax(Q, axis=1))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Exploration Policies\n",
    "* Q-Learning works only if exploration is thorough - not always practical.\n",
    "* Better alternative: an *ε-greedy* policy - act randomly with probability *epsilon*, otherwise act greedily."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Approximate Q-Learning\n",
    "* TODO"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Ms Pac-Man with Deep Q-Learning"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "[2017-04-27 13:06:21,861] Making new env: MsPacman-v0\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "((210, 160, 3), Discrete(9))"
      ]
     },
     "execution_count": 11,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "env = gym.make('MsPacman-v0')\n",
    "obs = env.reset()\n",
    "obs.shape, env.action_space\n",
    "\n",
    "# action_space = 9 possible joystick actions\n",
    "# observations = atari screenshots as 3D NumPy arrays"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "mspacman_color = np.array([210, 164, 74]).mean()\n",
    "\n",
    "# crop image, shrink to 88x80 pixels, convert to grayscale, improve contrast\n",
    "\n",
    "def preprocess_observation(obs):\n",
    "    img = obs[1:176:2, ::2] # crop and downsize\n",
    "    img = img.mean(axis=2) # to greyscale\n",
    "    img[img==mspacman_color] = 0 # improve contrast\n",
    "    img = (img - 128) / 128 - 1 # shift & scale pixel values\n",
    "    return img.reshape(88, 80, 1)"
   ]
  },
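  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A quick shape sanity check of the preprocessing (the function is re-stated so the snippet is self-contained; the all-zero frame is just a stand-in for a real 210x160x3 MsPacman observation):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "mspacman_color = np.array([210, 164, 74]).mean()\n",
    "\n",
    "def preprocess_observation(obs):\n",
    "    img = obs[1:176:2, ::2]         # crop & downsize: 210x160 -> 88x80\n",
    "    img = img.mean(axis=2)          # to greyscale\n",
    "    img[img == mspacman_color] = 0  # improve contrast\n",
    "    img = (img - 128) / 128 - 1     # shift & scale pixel values\n",
    "    return img.reshape(88, 80, 1)\n",
    "\n",
    "fake_obs = np.zeros((210, 160, 3), dtype=np.uint8)\n",
    "preprocess_observation(fake_obs).shape   # (88, 80, 1)\n"
   ]
  },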
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Ms PacMan Observation | Deep-Q net\n",
    "- | -\n",
    "![observation](pics/mspacman-before-after.png) | ![alt](pics/mspacman-deepq.png)\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "# Create DQN\n",
    "# 3 convo layers, then 2 FC layers including output layer\n",
    "\n",
    "from tensorflow.contrib.layers import convolution2d, fully_connected\n",
    "\n",
    "input_height      = 88\n",
    "input_width       = 80\n",
    "input_channels    = 1\n",
    "conv_n_maps       = [32, 64, 64]\n",
    "conv_kernel_sizes = [(8,8), (4,4), (3,3)]\n",
    "conv_strides      = [4, 2, 1]\n",
    "conv_paddings     = [\"SAME\"]*3\n",
    "conv_activation   = [tf.nn.relu]*3\n",
    "n_hidden_in       = 64 * 11 * 10 # conv3 has 64 maps of 11x10 each\n",
    "n_hidden          = 512\n",
    "hidden_activation = tf.nn.relu\n",
    "n_outputs         = env.action_space.n # 9 discrete actions are available\n",
    "\n",
    "initializer = tf.contrib.layers.variance_scaling_initializer()\n",
    "\n",
    "# training will need ***TWO*** DQNs:\n",
    "# an actor DQN that plays the game,\n",
    "# and a critic DQN that learns from the actor's trials & errors\n",
    "# q_network is our net builder.\n",
    "\n",
    "def q_network(X_state, scope):\n",
    "    prev_layer = X_state\n",
    "    conv_layers = []\n",
    "\n",
    "    with tf.variable_scope(scope) as scope:\n",
    "    \n",
    "        for n_maps, kernel_size, stride, padding, activation in zip(\n",
    "            conv_n_maps, \n",
    "            conv_kernel_sizes, \n",
    "            conv_strides,\n",
    "            conv_paddings, \n",
    "            conv_activation):\n",
    "            \n",
    "            prev_layer = convolution2d(\n",
    "                prev_layer, \n",
    "                num_outputs=n_maps, \n",
    "                kernel_size=kernel_size,\n",
    "                stride=stride, \n",
    "                padding=padding, \n",
    "                activation_fn=activation,\n",
    "                weights_initializer=initializer)\n",
    "            \n",
    "            conv_layers.append(prev_layer)\n",
    "\n",
    "        last_conv_layer_flat = tf.reshape(\n",
    "            prev_layer, \n",
    "            shape=[-1, n_hidden_in])\n",
    "            \n",
    "        hidden = fully_connected(\n",
    "            last_conv_layer_flat, \n",
    "            n_hidden, \n",
    "            activation_fn=hidden_activation,\n",
    "            weights_initializer=initializer)\n",
    "        \n",
    "        outputs = fully_connected(\n",
    "            hidden, \n",
    "            n_outputs, \n",
    "            activation_fn=None,\n",
    "            weights_initializer=initializer)\n",
    "        \n",
    "    trainable_vars = tf.get_collection(\n",
    "        tf.GraphKeys.TRAINABLE_VARIABLES,\n",
    "        scope=scope.name)\n",
    "    \n",
    "    trainable_vars_by_name = {var.name[len(scope.name):]: var\n",
    "        for var in trainable_vars}\n",
    "\n",
    "    return outputs, trainable_vars_by_name\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "# create input placeholders & two DQNs\n",
    "\n",
    "X_state = tf.placeholder(\n",
    "    tf.float32, \n",
    "    shape=[None, input_height, input_width,\n",
    "    input_channels])\n",
    "\n",
    "actor_q_values, actor_vars   = q_network(X_state, scope=\"q_networks/actor\")\n",
    "critic_q_values, critic_vars = q_network(X_state, scope=\"q_networks/critic\")\n",
    "\n",
    "copy_ops = [actor_var.assign(critic_vars[var_name])\n",
    "            for var_name, actor_var in actor_vars.items()]\n",
    "\n",
    "\n",
    "# op to copy all trainable vars of critic DQN to actor DQN...\n",
    "# use tf.group() to group all assignment ops together\n",
    "\n",
    "copy_critic_to_actor = tf.group(*copy_ops)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "# Critic DQN learns by matching Q-Value predictions \n",
    "# to actor's Q-Value estimations during game play\n",
    "\n",
    "# Actor will use a \"replay memory\" of 5-element tuples:\n",
    "# (state, action, reward, next-state, 0=over/1=continue)\n",
    "\n",
    "# use normal supervised training ops\n",
    "# occasionally copy critic DQN to actor DQN\n",
    "\n",
    "# a DQN returns one Q-Value for every possible action,\n",
    "# but we only need the Q-Value of the action actually chosen.\n",
    "# So: convert the action to a one-hot vector [0...1...0], multiply by the Q-Values,\n",
    "# then sum over the first axis.\n",
    "\n",
    "X_action = tf.placeholder(\n",
    "    tf.int32, shape=[None])\n",
    "\n",
    "q_value = tf.reduce_sum(\n",
    "    critic_q_values * tf.one_hot(X_action, n_outputs),\n",
    "    axis=1, keep_dims=True)\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 54,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "ename": "ValueError",
     "evalue": "Tensor(\"Sum_1:0\", shape=(?, 1), dtype=float32) must be from the same graph as Tensor(\"Placeholder:0\", shape=(?, 1), dtype=float32).",
     "output_type": "error",
     "traceback": [
      "\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
      "\u001b[0;31mValueError\u001b[0m                                Traceback (most recent call last)",
      "\u001b[0;32m<ipython-input-54-ae5a849b8026>\u001b[0m in \u001b[0;36m<module>\u001b[0;34m()\u001b[0m\n\u001b[1;32m      7\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m      8\u001b[0m cost = tf.reduce_mean(\n\u001b[0;32m----> 9\u001b[0;31m     tf.square(y - q_value))\n\u001b[0m\u001b[1;32m     10\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m     11\u001b[0m \u001b[0;31m# non-trainable. minimize() op will manage incrementing it\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
      "\u001b[0;32m/home/bjpcjp/anaconda3/lib/python3.5/site-packages/tensorflow/python/ops/math_ops.py\u001b[0m in \u001b[0;36mbinary_op_wrapper\u001b[0;34m(x, y)\u001b[0m\n\u001b[1;32m    879\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m    880\u001b[0m   \u001b[0;32mdef\u001b[0m \u001b[0mbinary_op_wrapper\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mx\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0my\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m--> 881\u001b[0;31m     \u001b[0;32mwith\u001b[0m \u001b[0mops\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mname_scope\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;32mNone\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mop_name\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;34m[\u001b[0m\u001b[0mx\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0my\u001b[0m\u001b[0;34m]\u001b[0m\u001b[0;34m)\u001b[0m \u001b[0;32mas\u001b[0m \u001b[0mname\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m    882\u001b[0m       \u001b[0;32mif\u001b[0m \u001b[0;32mnot\u001b[0m \u001b[0misinstance\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0my\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0msparse_tensor\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mSparseTensor\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m    883\u001b[0m         \u001b[0my\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mops\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mconvert_to_tensor\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0my\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mdtype\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0mx\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mdtype\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mbase_dtype\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mname\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;34m\"y\"\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
      "\u001b[0;32m/home/bjpcjp/anaconda3/lib/python3.5/contextlib.py\u001b[0m in \u001b[0;36m__enter__\u001b[0;34m(self)\u001b[0m\n\u001b[1;32m     57\u001b[0m     \u001b[0;32mdef\u001b[0m \u001b[0m__enter__\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mself\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m     58\u001b[0m         \u001b[0;32mtry\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m---> 59\u001b[0;31m             \u001b[0;32mreturn\u001b[0m \u001b[0mnext\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mgen\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m     60\u001b[0m         \u001b[0;32mexcept\u001b[0m \u001b[0mStopIteration\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m     61\u001b[0m             \u001b[0;32mraise\u001b[0m \u001b[0mRuntimeError\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m\"generator didn't yield\"\u001b[0m\u001b[0;34m)\u001b[0m \u001b[0;32mfrom\u001b[0m \u001b[0;32mNone\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
      "\u001b[0;32m/home/bjpcjp/anaconda3/lib/python3.5/site-packages/tensorflow/python/framework/ops.py\u001b[0m in \u001b[0;36mname_scope\u001b[0;34m(name, default_name, values)\u001b[0m\n\u001b[1;32m   4217\u001b[0m   \u001b[0;32mif\u001b[0m \u001b[0mvalues\u001b[0m \u001b[0;32mis\u001b[0m \u001b[0;32mNone\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m   4218\u001b[0m     \u001b[0mvalues\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0;34m[\u001b[0m\u001b[0;34m]\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m-> 4219\u001b[0;31m   \u001b[0mg\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0m_get_graph_from_inputs\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mvalues\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m   4220\u001b[0m   \u001b[0;32mwith\u001b[0m \u001b[0mg\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mas_default\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mg\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mname_scope\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mn\u001b[0m\u001b[0;34m)\u001b[0m \u001b[0;32mas\u001b[0m \u001b[0mscope\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m   4221\u001b[0m     \u001b[0;32myield\u001b[0m \u001b[0mscope\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
      "\u001b[0;32m/home/bjpcjp/anaconda3/lib/python3.5/site-packages/tensorflow/python/framework/ops.py\u001b[0m in \u001b[0;36m_get_graph_from_inputs\u001b[0;34m(op_input_list, graph)\u001b[0m\n\u001b[1;32m   3966\u001b[0m         \u001b[0mgraph\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mgraph_element\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mgraph\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m   3967\u001b[0m       \u001b[0;32melif\u001b[0m \u001b[0moriginal_graph_element\u001b[0m \u001b[0;32mis\u001b[0m \u001b[0;32mnot\u001b[0m \u001b[0;32mNone\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m-> 3968\u001b[0;31m         \u001b[0m_assert_same_graph\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0moriginal_graph_element\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mgraph_element\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m   3969\u001b[0m       \u001b[0;32melif\u001b[0m \u001b[0mgraph_element\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mgraph\u001b[0m \u001b[0;32mis\u001b[0m \u001b[0;32mnot\u001b[0m \u001b[0mgraph\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m   3970\u001b[0m         raise ValueError(\n",
      "\u001b[0;32m/home/bjpcjp/anaconda3/lib/python3.5/site-packages/tensorflow/python/framework/ops.py\u001b[0m in \u001b[0;36m_assert_same_graph\u001b[0;34m(original_item, item)\u001b[0m\n\u001b[1;32m   3905\u001b[0m   \u001b[0;32mif\u001b[0m \u001b[0moriginal_item\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mgraph\u001b[0m \u001b[0;32mis\u001b[0m \u001b[0;32mnot\u001b[0m \u001b[0mitem\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mgraph\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m   3906\u001b[0m     raise ValueError(\n\u001b[0;32m-> 3907\u001b[0;31m         \"%s must be from the same graph as %s.\" % (item, original_item))\n\u001b[0m\u001b[1;32m   3908\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m   3909\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n",
      "\u001b[0;31mValueError\u001b[0m: Tensor(\"Sum_1:0\", shape=(?, 1), dtype=float32) must be from the same graph as Tensor(\"Placeholder:0\", shape=(?, 1), dtype=float32)."
     ]
    }
   ],
   "source": [
    "# training setup\n",
    "# (calling tf.reset_default_graph() here would put these ops in a fresh\n",
    "# graph, orphaning q_value and raising the \"must be from the same graph\"\n",
    "# ValueError - so don't)\n",
    "\n",
    "learning_rate = 0.001\n",
    "\n",
    "y = tf.placeholder(\n",
    "    tf.float32, shape=[None, 1])\n",
    "\n",
    "cost = tf.reduce_mean(\n",
    "    tf.square(y - q_value))\n",
    "\n",
    "# non-trainable. minimize() op will manage incrementing it\n",
    "global_step = tf.Variable(\n",
    "    0, \n",
    "    trainable=False, \n",
    "    name='global_step')\n",
    "\n",
    "optimizer = tf.train.AdamOptimizer(learning_rate)\n",
    "\n",
    "training_op = optimizer.minimize(cost, global_step=global_step)\n",
    "\n",
    "init = tf.global_variables_initializer()\n",
    "\n",
    "saver = tf.train.Saver()\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 37,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "# use a deque list to build the replay memory\n",
    "\n",
    "from collections import deque\n",
    "\n",
    "replay_memory_size = 10000\n",
    "replay_memory = deque(\n",
    "    [], maxlen=replay_memory_size)\n",
    "\n",
    "def sample_memories(batch_size):\n",
    "    indices = rnd.permutation(\n",
    "        len(replay_memory))[:batch_size]\n",
    "    cols = [[], [], [], [], []] # state, action, reward, next_state, continue\n",
    "\n",
    "    for idx in indices:\n",
    "        memory = replay_memory[idx]\n",
    "        for col, value in zip(cols, memory):\n",
    "            col.append(value)\n",
    "\n",
    "    cols = [np.array(col) for col in cols]\n",
    "    return (cols[0], cols[1], cols[2].reshape(-1, 1), cols[3], cols[4].reshape(-1, 1))\n"
   ]
  },
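  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A toy run of the deque behavior above (hypothetical small sizes, just to show old memories falling off the left and random index sampling):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "from collections import deque\n",
    "\n",
    "toy_memory = deque([], maxlen=4)\n",
    "for i in range(6):   # entries 0 and 1 fall off the left edge\n",
    "    toy_memory.append((i, i % 2, float(i), i + 1, 1.0))\n",
    "\n",
    "idx = np.random.permutation(len(toy_memory))[:2]\n",
    "batch = [toy_memory[i] for i in idx]\n",
    "len(toy_memory), len(batch)   # (4, 2)\n"
   ]
  },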
  {
   "cell_type": "code",
   "execution_count": 38,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "# create an actor\n",
    "# use epsilon-greedy policy\n",
    "# gradually decrease epsilon from 1.0 to 0.05 across 50K training steps\n",
    "\n",
    "eps_min = 0.05\n",
    "eps_max = 1.0\n",
    "eps_decay_steps = 50000\n",
    "\n",
    "def epsilon_greedy(q_values, step):\n",
    "    epsilon = max(eps_min, eps_max - (eps_max-eps_min) * step/eps_decay_steps)\n",
    "    if rnd.rand() < epsilon:\n",
    "        return rnd.randint(n_outputs) # random action\n",
    "    else:\n",
    "        return np.argmax(q_values) # optimal action"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 39,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "# training setup: the variables\n",
    "\n",
    "n_steps = 100000 # total number of training steps\n",
    "training_start = 1000 # start training after 1,000 game iterations\n",
    "training_interval = 3 # run a training step every 3 game iterations\n",
    "save_steps = 50 # save the model every 50 training steps\n",
    "copy_steps = 25 # copy the critic to the actor every 25 training steps\n",
    "discount_rate = 0.95\n",
    "skip_start = 90 # skip the start of every game (it's just waiting time)\n",
    "batch_size = 50\n",
    "iteration = 0 # game iterations\n",
    "checkpoint_path = \"./my_dqn.ckpt\"\n",
    "done = True # env needs to be reset\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 44,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      " 1.09000234097\n",
      "\n",
      " 1.35392784142\n",
      "\n",
      " 1.56906713688\n",
      "\n",
      " 2.5765440191\n",
      "\n",
      " 1.57079289043\n",
      "\n",
      " 1.75170834792\n",
      "\n",
      " 1.97005553639\n",
      "\n",
      " 1.97246688247\n",
      "\n",
      " 2.16126081383\n",
      "\n",
      " 1.550295331\n",
      "\n",
      " 1.75750140131\n",
      "\n",
      " 1.56052656734\n",
      "\n",
      " 1.7519523176\n",
      "\n",
      " 1.74495741558\n",
      "\n",
      " 1.95223849511\n",
      "\n",
      " 1.35289915931\n",
      "\n",
      " 1.56913152564\n",
      "\n",
      " 2.96387254691\n",
      "\n",
      " 1.76067311585\n",
      "\n",
      " 1.35536773229\n",
      "\n",
      " 1.54768545294\n",
      "\n",
      " 1.53594982147\n",
      "\n",
      " 1.56104325151\n",
      "\n",
      " 1.96987313104\n",
      "\n",
      " 2.35546155441\n",
      "\n",
      " 1.5688166486\n",
      "\n",
      " 3.08286282682\n",
      "\n",
      " 3.28864161086\n",
      "\n",
      " 3.2878398273\n",
      "\n",
      " 3.09510449028\n",
      "\n",
      " 3.09807873964\n",
      "\n",
      " 3.90697311211\n",
      "\n",
      " 3.07757974195\n",
      "\n",
      " 3.09214673901\n",
      "\n",
      " 3.28402029777\n",
      "\n",
      " 3.28337000942\n",
      "\n",
      " 3.4255889504\n",
      "\n",
      " 3.49763186431\n",
      "\n",
      " 2.85764229989\n",
      "\n",
      " 3.04482784653\n",
      "\n",
      " 2.68228099513\n",
      "\n",
      " 3.28635532999\n",
      "\n",
      " 3.29647485089\n",
      "\n",
      " 3.07898310328\n",
      "\n",
      " 3.10530596256\n",
      "\n",
      " 3.27691918874\n",
      "\n",
      " 3.09561720395\n",
      "\n",
      " 2.67830030346\n",
      "\n",
      " 3.09576807404\n",
      "\n",
      " 3.288335078\n",
      "\n",
      " 3.0956065948\n",
      "\n",
      " 5.21222548962\n",
      "\n",
      " 4.21721751595\n",
      "\n",
      " 4.7905973649\n",
      "\n",
      " 4.59864345837\n",
      "\n",
      " 4.39875211382\n",
      "\n",
      " 4.51839643717\n",
      "\n",
      " 4.59503188992\n",
      "\n",
      " 5.01186150789\n",
      "\n",
      " 4.77968219852\n",
      "\n",
      " 4.78787856865\n",
      "\n",
      " 4.20382899523\n",
      "\n",
      " 4.20432999897\n",
      "\n",
      " 5.0028930707\n",
      "\n",
      " 5.20069698572\n",
      "\n",
      " 4.80375980473\n",
      "\n",
      " 5.19750945711\n",
      "\n",
      " 4.20367767668\n",
      "\n",
      " 4.19593407536\n",
      "\n",
      " 4.40061367989\n",
      "\n",
      " 4.6054182477\n",
      "\n",
      " 4.79921974087\n",
      "\n",
      " 4.38844807434\n",
      "\n",
      " 4.20397897291\n",
      "\n",
      " 4.60095557356\n",
      "\n",
      " 4.59488785553\n",
      "\n",
      " 5.75924422598\n",
      "\n",
      " 5.75949315596\n",
      "\n",
      " 5.16320213652\n",
      "\n",
      " 5.36019721937\n",
      "\n",
      " 5.56076610899\n",
      "\n",
      " 5.16949163198\n",
      "\n",
      " 5.75895399189\n",
      "\n",
      " 5.96050115204\n",
      "\n",
      " 5.97032629395\n"
     ]
    }
   ],
   "source": [
    "# let's get busy: the main training loop\n",
    "import os\n",
    "\n",
    "with tf.Session() as sess:\n",
    "    \n",
    "    # restore the model if a checkpoint exists (recent TF savers write\n",
    "    # checkpoint_path + \".index\" rather than checkpoint_path itself)\n",
    "    if os.path.isfile(checkpoint_path + \".index\"):\n",
    "        saver.restore(sess, checkpoint_path)\n",
    "\n",
    "    # otherwise initialize the variables normally\n",
    "    else:\n",
    "        init.run()\n",
    "        \n",
    "    while True:\n",
    "        step = global_step.eval()\n",
    "        if step >= n_steps:\n",
    "            break\n",
    "\n",
    "        # iteration = total number of game steps from beginning\n",
    "        \n",
    "        iteration += 1\n",
    "        if done: # game over, start again\n",
    "            obs = env.reset()\n",
    "\n",
    "            for skip in range(skip_start): # skip the start of each game\n",
    "                obs, reward, done, info = env.step(0)\n",
    "            state = preprocess_observation(obs)\n",
    "\n",
    "        # Actor evaluates what to do\n",
    "        q_values = actor_q_values.eval(feed_dict={X_state: [state]})\n",
    "        action   = epsilon_greedy(q_values, step)\n",
    "\n",
    "        # Actor plays\n",
    "        obs, reward, done, info = env.step(action)\n",
    "        next_state = preprocess_observation(obs)\n",
    "\n",
    "        # Let's memorize what just happened\n",
    "        replay_memory.append((state, action, reward, next_state, 1.0 - done))\n",
    "        state = next_state\n",
    "        if iteration < training_start or iteration % training_interval != 0:\n",
    "            continue\n",
    "\n",
    "        # Critic learns\n",
    "        X_state_val, X_action_val, rewards, X_next_state_val, continues = (\n",
    "            sample_memories(batch_size))\n",
    "\n",
    "        next_q_values = actor_q_values.eval(\n",
    "            feed_dict={X_state: X_next_state_val})\n",
    "\n",
    "        max_next_q_values = np.max(\n",
    "            next_q_values, axis=1, keepdims=True)\n",
    "\n",
    "        y_val = rewards + continues * discount_rate * max_next_q_values\n",
    "\n",
    "        training_op.run(\n",
    "            feed_dict={X_state: X_state_val, X_action: X_action_val, y: y_val})\n",
    "\n",
    "        # Regularly copy critic to actor\n",
    "        if step % copy_steps == 0:\n",
    "            copy_critic_to_actor.run()\n",
    "\n",
    "        # And save regularly\n",
    "        if step % save_steps == 0:\n",
    "            saver.save(sess, checkpoint_path)\n",
    "            \n",
    "        print(\"\\n\", np.average(y_val)) # track the average target Q-value"
   ]
  },
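  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The critic's target in the loop above (`y_val = rewards + continues * discount_rate * max_next_q_values`) is plain NumPy, so it can be illustrated on toy values; the `continues` column zeroes out the bootstrapped term wherever an episode ended."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "# toy illustration of the Q-learning target: y = r + continue * gamma * max_a' Q(s', a')\n",
    "import numpy as np\n",
    "\n",
    "rewards    = np.array([[1.0], [0.0], [2.0]])\n",
    "continues  = np.array([[1.0], [0.0], [1.0]]) # 0.0 where the episode ended\n",
    "next_q     = np.array([[0.5, 1.5], [2.0, 0.1], [0.0, 3.0]])\n",
    "\n",
    "max_next_q = np.max(next_q, axis=1, keepdims=True)\n",
    "y = rewards + continues * 0.95 * max_next_q\n",
    "print(y.ravel()) # 2.425, 0.0, 4.85"
   ]
  },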
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python [Root]",
   "language": "python",
   "name": "Python [Root]"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.5.2"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
