{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Q1: Summarize the reasons of overfitting and underfitting.  \n",
    "\n",
    "### No.1: https://stats.stackexchange.com/questions/395197/overfitting-and-underfitting\n",
    "### Overfitting: Data is noisy, meaning that there are some deviations from reality (because of measurement errors, influentially random factors, unobserved variables and rubbish correlations) that makes it harder for us to see their true relationship with our explaining factors. Also, it is usually not complete (we don't have examples of everything).The underlying reason for it all is trusting too much in training data.\n",
    "### Underfitting is the opposite problem, in which the model fails to recognize the real complexities in our data (i.e. the non-random changes in our data). The model assumes that noise is greater than it really is and thus uses a too simplistic shape.The model didn't trust enough in data and it just assumed that deviations are all noise.\n",
    "\n",
    "### Reasons:\n",
    "#### 1. We don't have complete information.  \n",
    "#### 2. We don't know how noisy the data is (we don't know how much we should trust it).  \n",
    "#### 3. We don't know in advance the underlying function that generated our data, and thus the optimal model complexity.  \n",
    "\n",
    "### No.2: https://machinelearningmastery.com/overfitting-and-underfitting-with-machine-learning-algorithms/\n",
    "### Overfitting happens when a model learns the detail and noise in the training data to the extent that it negatively impacts the performance of the model on new data. This means that the noise or random fluctuations in the training data is picked up and learned as concepts by the model. The problem is that these concepts do not apply to new data and negatively impact the models ability to generalize.\n",
    "### Underfitting refers to a model that can neither model the training data nor generalize to new data.\n",
    "\n",
    "### No.3: https://tensorflow.google.cn/beta/tutorials/keras/overfit_and_underfit\n",
    "\n",
    "### No.4: https://blog.csdn.net/linzhineng44/article/details/50867276  \n",
    "\n",
    "过拟合：数据集有噪点，数据量少，参数太多，将当前的数据集拟合的很好，但是一旦出现新的数据会出现期望值与实际值有偏差。  \n",
    "  \n",
    "欠拟合：数据量少，参数少，去除了太多参数，拟合能力一般。"
   ]
  },
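  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A minimal sketch of both failure modes (an illustration added here, not part of the cited answers): fit noisy samples of a sine curve with polynomials of degree 1, 3, and 15 using `numpy.polyfit`. Degree 1 underfits (high error on training and held-out points alike), while degree 15 chases the noise, so its training error is tiny but it generalizes worse than degree 3."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "rng = np.random.RandomState(0)\n",
    "x = np.linspace(0, 1, 30)\n",
    "# noisy observations of an underlying sine function\n",
    "y = np.sin(2 * np.pi * x) + rng.normal(scale=0.2, size=x.shape)\n",
    "\n",
    "# held-out points from the same underlying function\n",
    "x_test = np.linspace(0.05, 0.95, 100)\n",
    "y_test = np.sin(2 * np.pi * x_test)\n",
    "\n",
    "for degree in (1, 3, 15):\n",
    "    coeffs = np.polyfit(x, y, degree)\n",
    "    train_mse = np.mean((np.polyval(coeffs, x) - y) ** 2)\n",
    "    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)\n",
    "    print(f\"degree={degree:2d}  train MSE={train_mse:.3f}  test MSE={test_mse:.3f}\")"
   ]
  },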
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [],
   "source": [
    "'''\n",
    "Q2: install the numpy, scikit-learning, keras, tensorflow\n",
    "'''\n",
    "# Verify that the libraries import correctly and report their versions\n",
    "import numpy\n",
    "import sklearn\n",
    "import keras\n",
    "import tensorflow\n",
    "\n",
    "print(\"numpy\", numpy.__version__)\n",
    "print(\"scikit-learn\", sklearn.__version__)\n",
    "print(\"keras\", keras.__version__)\n",
    "print(\"tensorflow\", tensorflow.__version__)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Q3: Writing down three sceneries that machine learning has been used now.\n",
    "### Ans: 自动驾驶；股价预测；基于用户兴趣的推荐；人脸识别；智能客服；AI游戏助手。"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Q4: Come out with three new sceneries with which machine learning may be applied.\n",
    "### Ans: 代码补全；机器寿命预测；社会群体行为分析；寻找地外生命（吼吼吼，外星人怕不怕）。"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.3"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
