{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Mind Reading: Neural Networks and Deep Learning, Lesson 1 Written Homework\n",
    "Student ID: 207567\n",
    "\n",
    "**Written assignment:**  \n",
    "1. If the activation function of a single-layer perceptron is changed to tansig (slide 71), can it solve the XOR problem? Explain your reasoning.  \n",
    "2. Write a program (in any language) that trains the single-layer perceptron from the example on slide 46, using the perceptron learning rule stated on slide 45."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Question 1\n",
    "If the activation function of a single-layer perceptron is changed to tansig (slide 71), can it solve the XOR problem? Explain your reasoning.  \n",
    "**Answer:**  "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "No, it cannot. tansig is a monotonic continuous function with tansig(0) = 0, so thresholding its output at zero gives exactly the same labels as thresholding the raw linear response: the decision boundary is still the hyperplane $w \\cdot x + b = 0$. The classifier therefore remains linear, and since XOR is not linearly separable, a single-layer perceptron with tansig still cannot solve it."
   ]
  },
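  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A quick numerical sketch of the argument (added as illustration, not required by the assignment): thresholding tansig (tanh) at zero yields exactly the same labels as thresholding the raw linear response, so swapping in tansig cannot change which inputs are classified positive."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "# Random linear responses v = w.x + b; tanh (tansig) is monotonic and\n",
    "# tanh(0) = 0, so sign(tanh(v)) == sign(v) for every v.\n",
    "v = np.random.randn(1000)\n",
    "assert np.array_equal(np.sign(np.tanh(v)), np.sign(v))\n",
    "print('Thresholded tansig output equals thresholded linear output.')"
   ]
  },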
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Question 2\n",
    "Write a program (in any language) that trains the single-layer perceptron from the example on slide 46, using the perceptron learning rule stated on slide 45.\n",
    "\n",
    "**Answer:**  \n",
    "The source code is as follows:"
   ]
  },
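  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The update implemented below is the perceptron learning rule: for each sample $x$ with target $t$ and output $a = \\mathrm{hardlims}(w^T x)$ (the $\\pm 1$ hard limiter),\n",
    "\n",
    "$$w \\leftarrow w + \\eta\\,(t - a)\\,x,$$\n",
    "\n",
    "where $\\eta$ is the learning rate (0.8 in the code) and the bias is folded into $w$ via an extra constant input of 1."
   ]
  },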
  {
   "cell_type": "code",
   "execution_count": 29,
   "metadata": {
    "id": "59lYUgUEGaJ1"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[[-1.]\n",
      " [ 1.]\n",
      " [ 1.]\n",
      " [ 1.]\n",
      " [-1.]\n",
      " [-1.]\n",
      " [ 1.]\n",
      " [-1.]]\n"
     ]
    }
   ],
   "source": [
    "import numpy as np\n",
    "\n",
    "X = np.array([\n",
    "              [1,0,0],\n",
    "              [1,0,1],\n",
    "              [1,1,0],\n",
    "              [1,1,1],\n",
    "              [0,0,1],\n",
    "              [0,1,0],\n",
    "              [0,1,1],\n",
    "              [0,0,0]\n",
    "],dtype='float32')\n",
    "\n",
    "Y = np.array([\n",
    "              [-1],\n",
    "              [1],\n",
    "              [1],\n",
    "              [1],\n",
    "              [-1],\n",
    "              [-1],\n",
    "              [1],\n",
    "              [-1]\n",
    "],dtype='float32')\n",
    "\n",
    "\n",
     "class NeuralNetwork:\n",
     "  def __init__(self, inputs=3, outputs=1, lr=0.8, epsilon=0.001):\n",
     "    # One extra weight row for the bias, handled via a constant input of 1\n",
     "    self.weights = np.random.randn(inputs+1, outputs)\n",
     "    self.lr = lr\n",
     "    self.epsilon = epsilon  # convergence threshold (unused by the fixed-epoch loop below)\n",
     "\n",
     "  def hardlimit(self, v):\n",
     "    # Symmetric hard limiter: +1 if v >= 0, else -1\n",
     "    t = (v >= 0).astype('float32')\n",
     "    t = t*2-1\n",
     "    return t\n",
     "\n",
     "  def forward(self, x):\n",
     "    # Append a constant-1 column so the bias is part of the weight matrix\n",
     "    n = np.ones((x.shape[0],1),dtype='float32')\n",
     "    x1 = np.concatenate((x,n),axis=1)\n",
     "    y = self.hardlimit(np.matmul(x1, self.weights))\n",
     "    return x1, y\n",
     "\n",
     "  def train(self, x, y):\n",
     "    # One epoch of the perceptron learning rule: w <- w + lr*(t - a)*x\n",
     "    for i in range(x.shape[0]):\n",
     "      x1, out = self.forward(np.array([x[i]]))\n",
     "      error = y[i][0] - out[0][0]\n",
     "      oldweights = self.weights\n",
     "      self.weights = self.weights + np.transpose(self.lr*error*x1)\n",
     "      delta = np.linalg.norm(self.weights - oldweights)\n",
     "    return delta\n",
     "\n",
     "  def predict(self, x):\n",
     "    _, v = self.forward(x)\n",
     "    return v\n",
     "\n",
     "# Create the network object\n",
     "net = NeuralNetwork()\n",
     "# Train for 6 epochs\n",
     "for i in range(6):\n",
     "    net.train(X,Y)\n",
     "# Check the outputs against the targets\n",
     "print(net.predict(X))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "4TsOEUnQUbEz"
   },
   "source": [
    "The predictions above match the target vector Y, so the training behaved as expected."
   ]
  }
 ],
 "metadata": {
  "colab": {
   "collapsed_sections": [],
   "name": "mind01.ipynb",
   "provenance": []
  },
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.9.7"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 1
}
