{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Overview\n",
    "\n",
    "This project demonstrates how to use reinforcement learning algorithms (DQN, DDPG, etc.) with PyTorch to play games.\n",
    "\n",
    "\n",
    "## Installation Dependencies\n",
    "\n",
    "System: Ubuntu 16.04, 4 vCPU, 8 GB RAM, 2.5 GHz, Aliyun ECS\n",
    "\n",
    "\n",
    "### Pip3\n",
    "\n",
    "``` bash\n",
    "apt-get update\n",
    "apt-get install python3-pip\n",
    "```\n",
    "\n",
    "### Pytorch, Gym\n",
    "\n",
    "``` bash\n",
    "pip3 install torch torchvision\n",
    "pip3 install gym_ple pygame\n",
    "apt-get install -y python-pygame\n",
    "```\n",
    "\n",
    "> https://github.com/lusob/gym-ple\n",
    "> gym_ple requires PLE; to install PLE, clone the repo and install it with pip.\n",
    "\n",
    "``` bash\n",
    "git clone https://github.com/ntasfi/PyGame-Learning-Environment.git\n",
    "cd PyGame-Learning-Environment/\n",
    "pip install -e .\n",
    "```\n",
    "\n",
    "**Box2D**\n",
    "\n",
    "``` bash\n",
    "apt-get install swig git\n",
    "git clone https://github.com/pybox2d/pybox2d.git\n",
    "cd pybox2d\n",
    "python setup.py clean\n",
    "python setup.py install\n",
    "```\n",
    "\n",
    "### Xvfb (Fake screen)\n",
    "\n",
    "> xvfb should be installed when running on a headless Linux server, so that `env.render()` works.\n",
    "\n",
    "HINT: make sure you have OpenGL installed. On Ubuntu, you can run `apt-get install python-opengl`. If you're running on a server, you may need a virtual frame buffer; something like this should work: `xvfb-run -s \"-screen 0 1400x900x24\" python <your_script.py>`\n",
    "\n",
    "``` bash\n",
    "apt-get install xvfb python-opengl\n",
    "apt-get install libav-tools\n",
    "```\n",
    "\n",
    "### Jupyter\n",
    "\n",
    "> It is convenient to write code in the Jupyter web UI at http://0.0.0.0:8888/.\n",
    "\n",
    "``` bash\n",
    "pip3 install jupyter\n",
    "jupyter notebook --generate-config  # ~/.jupyter/jupyter_notebook_config.py\n",
    "```\n",
    "\n",
    "**Generate a password hash using jupyter-console**\n",
    "\n",
    "```\n",
    "In [1]: from notebook.auth import passwd\n",
    "In [2]: passwd()\n",
    "```\n",
    "\n",
    "**Jupyter config**\n",
    "\n",
    "```\n",
    "## The IP address the notebook server will listen on.\n",
    "c.NotebookApp.ip = '*'   # allow ALL\n",
    "  \n",
    "#  The string should be of the form type:salt:hashed-password.\n",
    "c.NotebookApp.password = u'sha1:96d749b4e109:17c2968d3bc899fcd41b87eb0853a42ceb48c521'\n",
    "  \n",
    "## The port the notebook server will listen on.\n",
    "c.NotebookApp.port = 8888\n",
    " \n",
    "c.NotebookApp.open_browser = False\n",
    "```\n",
    "\n",
    "### Issues\n",
    "\n",
    "**locale.Error: unsupported locale setting**\n",
    "\n",
    "`export LC_ALL=C`\n",
    "\n",
    "**opengl-libs xvfb-run conflict**\n",
    "\n",
    "https://davidsanwald.github.io/2016/11/13/building-tensorflow-with-gpu-support.html\n",
    "\n",
    "    What @pemami4911 wrote on #366 (THANKS!) finally pointed me into the right direction.\n",
    "\n",
    "    For a long time I couldn't get xvfb (and sadly also X-dummy) to work, but when I followed pemami4911's hint and installed the Nvidia driver with the --no-opengl-files option and CUDA with --no-opengl-libs, xvfb worked right away.\n",
    "    I did not have to do anything complicated, just installing drivers with --no-opengl-files and CUDA with --no-opengl-libs. Just in case I documented the necessary steps here\n",
    "\n",
    "\n",
    "----\n",
    "\n",
    "## FlappyBird\n",
    "\n",
    "\n",
    "\n",
    "The network input is the coordinates of key points (the bird and the pipe corners) extracted from the image. With the classic DQN algorithm, training converges to a good result after only about 380 iterations. The example below shows the agent after 380 iterations (only the first 30 seconds are shown; the actual run lasted 5 minutes).\n",
    "\n",
    "![](FlappyBird-v0/dqn_agent.gif)\n",
    "\n",
    "\n",
    "![](FlappyBird-v0/training_rewards.png)\n",
    "\n",
    "\n",
    "### How to Train\n",
    "\n",
    "`python3 training-lr_1_e-3.py`\n",
    "\n",
    "### Key Points for Training\n",
    "\n",
    "1. Stack the image features of 4 consecutive frames as the network input, so the network can infer velocity\n",
    "2. The learning rate is critical; it is set to 1e-3 here (with 1e-2 training does not converge at all)\n",
    "\n",
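    "The frame-stacking point above can be sketched as follows; this is a minimal illustration under assumptions (the class name and the 3 features per frame are made up here to match `f(t)` from the feature-engineering section), not the project's actual code:\n",
    "\n",
    "``` python\n",
    "from collections import deque\n",
    "\n",
    "import numpy as np\n",
    "\n",
    "class FrameStack:\n",
    "    # Keep the k most recent per-frame feature vectors and\n",
    "    # concatenate them into a single network input.\n",
    "    def __init__(self, k=4):\n",
    "        self.frames = deque(maxlen=k)\n",
    "\n",
    "    def reset(self, feat):\n",
    "        # On episode start, fill the buffer with the first frame's features.\n",
    "        for _ in range(self.frames.maxlen):\n",
    "            self.frames.append(feat)\n",
    "        return np.concatenate(self.frames)\n",
    "\n",
    "    def push(self, feat):\n",
    "        self.frames.append(feat)\n",
    "        return np.concatenate(self.frames)\n",
    "```\n",
    "\n",
    "With `k=4` and 3 features per frame, the network input has 12 dimensions; the change between consecutive frames is what lets the network infer velocity.\n",
    "\n",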
    "### Implementation Details\n",
    "\n",
    "**Feature Engineering**\n",
    "\n",
    "> FlappyBird-v0/feature_engineering/main.ipynb\n",
    "\n",
    "1. The environment state is an RGB image; crop it to `[0:400,:,:]` to remove the invalid region\n",
    "2. Convert RGB to a grayscale image GRAY, then remove the modal pixel value (the background is one large single-color region) to obtain GRAY1\n",
    "3. Binarize GRAY1 to obtain IMG_BIT\n",
    "4. Find the connected components and compute the bird's position (bx, by)\n",
    "5. From the binary image IMG_BIT, compute the top and bottom corners of the obstacle ahead of the bird, (ox1, oy1) and (ox2, oy2), where ox1=ox2=ox\n",
    "6. Compute the coordinate differences `f(t) = (ox-bx, oy1-by, oy2-by)`\n",
    "7. Stack the features extracted this way from 4 frames and concatenate them into the network input `[f(t-2), f(t-1), f(t)]`\n",
    "\n",
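    "Steps 1-3 and 6 above can be sketched with NumPy; this is a rough sketch under assumptions (the function names are made up, and the connected-component step that produces the coordinates is omitted):\n",
    "\n",
    "``` python\n",
    "import numpy as np\n",
    "\n",
    "def preprocess(rgb):\n",
    "    # Step 1: crop the playable area, dropping the invalid region.\n",
    "    img = rgb[0:400, :, :]\n",
    "    # Step 2: RGB to grayscale, then find the background value, i.e. the\n",
    "    # modal pixel value of the one large single-color region.\n",
    "    gray = (0.299 * img[..., 0] + 0.587 * img[..., 1]\n",
    "            + 0.114 * img[..., 2]).astype(np.uint8)\n",
    "    mode = np.bincount(gray.ravel()).argmax()\n",
    "    # Step 3: binarize (background 0, everything else 1).\n",
    "    return (gray != mode).astype(np.uint8)\n",
    "\n",
    "def relative_features(bx, by, ox, oy1, oy2):\n",
    "    # Step 6: offsets from the bird to the gap corners of the obstacle.\n",
    "    return np.array([ox - bx, oy1 - by, oy2 - by], dtype=np.float32)\n",
    "```\n",
    "\n",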
    "![](FlappyBird-v0/flappybird_fe.png)\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.5.2"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
