{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "3e7b6b2e",
   "metadata": {},
   "source": [
     "# Multilingual Front/Back-End Intelligent Multi-User Chat System Based on SpringBoot + Python: Lesson 3 Written Assignment\n",
     "Student ID: 114498  \n",
     "\n",
     "**Assignment:**  \n",
     "Following the course material, run the RNN and LSTM examples in your own environment (code and result screenshots are sufficient)."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "0c2e7394",
   "metadata": {},
   "source": [
     "**Solution:**  \n",
     "The RNN and LSTM implementations presented in the course are based on a TensorFlow version that is now quite old; the relevant modules were reorganized in 2.0 and later. The code below therefore targets TensorFlow 2.0+."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "77ea19d1",
   "metadata": {},
   "source": [
     "TensorFlow 2.0 and later ship with Keras integrated, which makes building neural networks much more convenient. In the current Keras version (mine is 2.4.3), recurrent networks get a dedicated family of layers, the Recurrent layers, which includes:\n",
    "* LSTM layer\n",
    "* GRU layer\n",
    "* SimpleRNN layer\n",
    "* TimeDistributed layer\n",
    "* Bidirectional layer\n",
    "* ConvLSTM2D layer\n",
    "* Base RNN layer"
   ]
  },
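  {
   "cell_type": "markdown",
   "id": "f1a2b3c4",
   "metadata": {},
   "source": [
     "As a quick sketch (assuming a TensorFlow 2.x environment with its bundled Keras), the layers listed above can be instantiated directly to inspect their output shapes:\n",
     "\n",
     "```python\n",
     "# Sketch: output shapes of a few recurrent layers (assumes TensorFlow 2.x).\n",
     "import numpy as np\n",
     "from tensorflow.keras.layers import SimpleRNN, LSTM, GRU, Bidirectional\n",
     "\n",
     "x = np.random.random((4, 10, 8)).astype(\"float32\")  # (batch, timesteps, features)\n",
     "print(SimpleRNN(16)(x).shape)                    # (4, 16): last hidden state only\n",
     "print(LSTM(16, return_sequences=True)(x).shape)  # (4, 10, 16): state at every step\n",
     "print(Bidirectional(GRU(16))(x).shape)           # (4, 32): forward and backward states concatenated\n",
     "```\n",
     "\n",
     "`return_sequences=True` is what stacked or TimeDistributed architectures rely on: the next layer then receives one vector per timestep rather than only the final state."
   ]
  },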
  {
   "attachments": {},
   "cell_type": "markdown",
   "id": "7363da11",
   "metadata": {},
   "source": [
     "For this assignment I use the TensorFlow and Keras installed in my environment to try out an RNN (SimpleRNN) and an LSTM model and run them end to end.  \n",
     "\n",
     "## 1. Running an RNN Example\n",
     "The RNN is built with TensorFlow's SimpleRNN layer.  \n",
     "The example is the first one in Chapter 6 (Recurrent Neural Networks) of *Deep Learning with Keras*. The task is text generation: given a passage of text, generate a continuation of fixed length.\n",
     "![class03-3](https://gitee.com/dotzhen/cloud-notes/raw/master/6rZB4nus32KCdtE.png)"
   ]
  },
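  {
   "cell_type": "markdown",
   "id": "e5d6c7b8",
   "metadata": {},
   "source": [
     "The generation procedure is easy to state: feed a SEQLEN-character seed to the model, predict one character, slide the window forward by one, and repeat. A minimal sketch with a hypothetical stub predictor (it merely cycles the alphabet; the real notebook uses the trained network in its place):\n",
     "\n",
     "```python\n",
     "# Sketch of the sliding-window generation loop. stub_predict is a hypothetical\n",
     "# stand-in for model.predict + argmax; it just advances to the next letter.\n",
     "def stub_predict(window):\n",
     "    return chr((ord(window[-1]) - ord('a') + 1) % 26 + ord('a'))\n",
     "\n",
     "window = \"abcdefghij\"  # the 10-character seed (SEQLEN = 10)\n",
     "generated = window\n",
     "for _ in range(5):  # generate 5 characters\n",
     "    nxt = stub_predict(window)\n",
     "    generated += nxt\n",
     "    window = window[1:] + nxt  # drop the oldest character, append the prediction\n",
     "print(generated)  # abcdefghijklmno\n",
     "```"
   ]
  },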
  {
   "attachments": {},
   "cell_type": "markdown",
   "id": "b30ceb82",
   "metadata": {},
   "source": [
     "The text is the English novel *Alice's Adventures in Wonderland*, from which the training and validation sets are generated automatically. The RNN model is structured as follows:\n",
     "![class03-1](https://gitee.com/dotzhen/cloud-notes/raw/master/class03-1.PNG)"
   ]
  },
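  {
   "cell_type": "markdown",
   "id": "d9c8b7a6",
   "metadata": {},
   "source": [
     "The dataset construction can be sketched on a toy string (the code below does exactly this on the novel's text; SEQLEN and STEP mirror the hyperparameters used there):\n",
     "\n",
     "```python\n",
     "# Minimal sketch of the sliding-window + one-hot dataset construction,\n",
     "# using a toy string instead of the novel's text.\n",
     "import numpy as np\n",
     "\n",
     "text = \"hello world\"\n",
     "chars = sorted(set(text))\n",
     "char2index = {c: i for i, c in enumerate(chars)}\n",
     "SEQLEN, STEP = 4, 1\n",
     "\n",
     "inputs = [text[i:i + SEQLEN] for i in range(0, len(text) - SEQLEN, STEP)]\n",
     "labels = [text[i + SEQLEN] for i in range(0, len(text) - SEQLEN, STEP)]\n",
     "\n",
     "X = np.zeros((len(inputs), SEQLEN, len(chars)), dtype=bool)  # (samples, SEQLEN, vocab)\n",
     "Y = np.zeros((len(inputs), len(chars)), dtype=bool)          # (samples, vocab)\n",
     "for i, seq in enumerate(inputs):\n",
     "    for j, ch in enumerate(seq):\n",
     "        X[i, j, char2index[ch]] = 1\n",
     "    Y[i, char2index[labels[i]]] = 1\n",
     "\n",
     "print(X.shape, Y.shape)  # (7, 4, 8) (7, 8)\n",
     "```"
   ]
  },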
  {
   "cell_type": "markdown",
   "id": "63b51356",
   "metadata": {},
   "source": [
     "The implementation code is as follows:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "id": "b3cea6be",
   "metadata": {},
   "outputs": [],
   "source": [
     "from keras.layers import Dense, Activation, SimpleRNN, LSTM\n",
     "from keras.models import Sequential\n",
     "import numpy as np"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "id": "29ee5854",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Model: \"sequential\"\n",
      "_________________________________________________________________\n",
      "Layer (type)                 Output Shape              Param #   \n",
      "=================================================================\n",
      "simple_rnn (SimpleRNN)       (None, 128)               23936     \n",
      "_________________________________________________________________\n",
      "dense (Dense)                (None, 58)                7482      \n",
      "_________________________________________________________________\n",
      "activation (Activation)      (None, 58)                0         \n",
      "=================================================================\n",
      "Total params: 31,418\n",
      "Trainable params: 31,418\n",
      "Non-trainable params: 0\n",
      "_________________________________________________________________\n",
      "====================================================================================================\n",
      "Iteration #: 0\n",
      "787/787 [==============================] - 35s 6ms/step - loss: 2.8028\n",
      "Generating from seed: ds to musi\n",
      "ds to musi"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "ng the the the the the the the the the the the the the the the the the the the the the the the the t\n",
      "====================================================================================================\n",
      "Iteration #: 1\n",
      "787/787 [==============================] - 5s 6ms/step - loss: 2.1925\n",
      "Generating from seed: een [_furi\n",
      "een [_furin the the the the the the the the the the the the the the the the the the the the the the the the th\n",
      "====================================================================================================\n",
      "Iteration #: 2\n",
      "787/787 [==============================] - 4s 6ms/step - loss: 2.0969\n",
      "Generating from seed: n a strang\n",
      "n a strang theres are inge ar alice in the sore ion the sore ion the sore ion the sore ion the sore ion the so\n",
      "====================================================================================================\n",
      "Iteration #: 3\n",
      "787/787 [==============================] - 4s 6ms/step - loss: 2.0302\n",
      "Generating from seed: ave off, b\n",
      "ave off, bouthe to to tore mo to toon the warks and and of and and of and and of and and of and and of and and\n",
      "====================================================================================================\n",
      "Iteration #: 4\n",
      "787/787 [==============================] - 5s 6ms/step - loss: 1.9715\n",
      "Generating from seed: ice i beg \n",
      "ice i beg the look at all of and the the the the the the the the the the the the the the the the the the the t\n",
      "====================================================================================================\n",
      "Iteration #: 5\n",
      "787/787 [==============================] - 5s 6ms/step - loss: 1.9191\n",
      "Generating from seed:  alone, yo\n",
      " alone, you know you don't kere the dore the project gutenberg the project gutenberg the project gutenberg the\n",
      "====================================================================================================\n",
      "Iteration #: 6\n",
      "787/787 [==============================] - 4s 6ms/step - loss: 1.8731\n",
      "Generating from seed: you can't \n",
      "you can't her the warks and the warks and the warks and the warks and the warks and the warks and the warks an\n",
      "====================================================================================================\n",
      "Iteration #: 7\n",
      "787/787 [==============================] - 5s 6ms/step - loss: 1.8325\n",
      "Generating from seed: lectronic \n",
      "lectronic works and the sare the for on the project gutenberg-tm and the sare the for on the project gutenberg\n",
      "====================================================================================================\n",
      "Iteration #: 8\n",
      "787/787 [==============================] - 5s 6ms/step - loss: 1.7960\n",
      "Generating from seed:  an advant\n",
      " an advant a wart rown the the sare the for on the the sare the for on the the sare the for on the the sare th\n",
      "====================================================================================================\n",
      "Iteration #: 9\n",
      "787/787 [==============================] - 4s 5ms/step - loss: 1.7634\n",
      "Generating from seed: , and that\n",
      ", and that so the roushe the was of the reans of crous the red queen i don't know you make the reathe the was \n",
      "====================================================================================================\n",
      "Iteration #: 10\n",
      "787/787 [==============================] - 5s 6ms/step - loss: 1.7325\n",
      "Generating from seed: following \n",
      "following and the ser of the project gutenberg-tm all the warks and the ser of the project gutenberg-tm all th\n",
      "====================================================================================================\n",
      "Iteration #: 11\n",
      "787/787 [==============================] - 5s 6ms/step - loss: 1.7061\n",
      "Generating from seed: ! where am\n",
      "! where am i sing the what i can't the wart in the sare in the sare in the sare in the sare in the sare in the\n",
      "====================================================================================================\n",
      "Iteration #: 12\n",
      "787/787 [==============================] - 5s 6ms/step - loss: 1.6811\n",
      "Generating from seed: t; leave o\n",
      "t; leave of the doon at it ste found and mone to the work in the sare the doon at it ste found and mone to the\n",
      "====================================================================================================\n",
      "Iteration #: 13\n",
      "787/787 [==============================] - 4s 5ms/step - loss: 1.6574\n",
      "Generating from seed: oy the pep\n",
      "oy the peppration of on the project gutenberg-tm electronic works in a dond and the march hare is out of the p\n",
      "====================================================================================================\n",
      "Iteration #: 14\n",
      "787/787 [==============================] - 5s 6ms/step - loss: 1.6380\n",
      "Generating from seed: y paid by \n",
      "y paid by the courd by the courd by the courd by the courd by the courd by the courd by the courd by the courd\n",
      "====================================================================================================\n",
      "Iteration #: 15\n",
      "787/787 [==============================] - 4s 5ms/step - loss: 1.6182\n",
      "Generating from seed: uch and th\n",
      "uch and the court that with the project gutenberg-tm electronic works in the some that do a thing but i can't \n",
      "====================================================================================================\n",
      "Iteration #: 16\n",
      "787/787 [==============================] - 4s 6ms/step - loss: 1.6016\n",
      "Generating from seed: uick about\n",
      "uick about it formony on the care in you can and reating to the and and and and and and and and and and and an\n",
      "====================================================================================================\n",
      "Iteration #: 17\n",
      "787/787 [==============================] - 5s 6ms/step - loss: 1.5860\n",
      "Generating from seed: f it? red \n",
      "f it? red queen of the for of the for of the for of the for of the for of the for of the for of the for of the\n",
      "====================================================================================================\n",
      "Iteration #: 18\n",
      "787/787 [==============================] - 5s 6ms/step - loss: 1.5712\n",
      "Generating from seed: ooks uneas\n",
      "ooks unease the from the beat to the frog to the frog to the frog to the frog to the frog to the frog to the f\n",
      "====================================================================================================\n",
      "Iteration #: 19\n",
      "787/787 [==============================] - 4s 6ms/step - loss: 1.5570\n",
      "Generating from seed:  she stick\n",
      " she sticks of the project gutenberg-tm electronic works. alice i seen the dormouse to the project gutenberg-t\n"
     ]
    }
   ],
   "source": [
     "# Read the file and decode its contents as ASCII\n",
     "fin = open('alice_in_wonderland.txt', 'rb')\n",
     "lines = []\n",
     "for line in fin:  # iterate over the lines\n",
     "    line = line.strip().lower()  # strip surrounding whitespace, lowercase\n",
     "    line = line.decode('ascii', 'ignore')  # decode as ASCII, dropping other bytes\n",
     "    if len(line) == 0:\n",
     "        continue\n",
     "    lines.append(line)\n",
     "fin.close()\n",
     "text = ' '.join(lines)\n",
     "\n",
     "chars = set([c for c in text])  # set of distinct characters in the text\n",
     "nb_chars = len(chars)  # vocabulary size\n",
     "char2index = dict((c, i) for i, c in enumerate(chars))  # character -> index\n",
     "index2char = dict((i, c) for i, c in enumerate(chars))  # index -> character\n",
     "\n",
     "SEQLEN = 10  # hyperparameter: length of each input sequence\n",
     "STEP = 1  # stride between consecutive training windows\n",
     "input_chars = []  # input sequences\n",
     "label_chars = []  # target characters\n",
     "for i in range(0, len(text) - SEQLEN, STEP):\n",
     "    input_chars.append(text[i: i + SEQLEN])\n",
     "    label_chars.append(text[i + SEQLEN])\n",
     "\n",
     "# Vectorize the inputs and labels with one-hot encoding\n",
     "X = np.zeros((len(input_chars), SEQLEN, nb_chars), dtype=bool)  # input tensor\n",
     "Y = np.zeros((len(input_chars), nb_chars), dtype=bool)  # label tensor\n",
     "for i, input_char in enumerate(input_chars):  # for every input sample\n",
     "    for j, ch in enumerate(input_char):  # for every character in the sample\n",
     "        X[i, j, char2index[ch]] = 1\n",
     "    Y[i, char2index[label_chars[i]]] = 1\n",
     "\n",
     "# Build the model\n",
     "HIDDEN_SIZE = 128\n",
     "BATCH_SIZE = 128\n",
     "NUM_ITERATIONS = 20\n",
     "NUM_EPOCH_PER_ITERATIONS = 1\n",
     "NUM_PREDS_PER_EPOCH = 100\n",
     "\n",
     "model = Sequential()\n",
     "model.add(SimpleRNN(HIDDEN_SIZE, return_sequences=False, input_shape=(SEQLEN, nb_chars), unroll=True))\n",
     "model.add(Dense(nb_chars))\n",
     "model.add(Activation('softmax'))\n",
     "\n",
     "# Compile the model\n",
     "model.compile(loss='categorical_crossentropy', optimizer='rmsprop')\n",
     "model.summary()\n",
     "\n",
     "for iteration in range(NUM_ITERATIONS):\n",
     "    print('=' * 100)\n",
     "    print('Iteration #: %d' % (iteration))\n",
     "    # Train for one epoch per iteration\n",
     "    model.fit(X, Y, batch_size=BATCH_SIZE, epochs=NUM_EPOCH_PER_ITERATIONS)\n",
     "    # Generate text from a randomly chosen seed to inspect progress\n",
     "    test_idx = np.random.randint(len(input_chars))\n",
     "    test_chars = input_chars[test_idx]\n",
     "    print('Generating from seed: %s' % (test_chars))\n",
     "    print(test_chars, end='')  # print without a newline\n",
     "    for i in range(NUM_PREDS_PER_EPOCH):\n",
     "        Xtest = np.zeros((1, SEQLEN, nb_chars))\n",
     "        for j, ch in enumerate(test_chars):\n",
     "            Xtest[0, j, char2index[ch]] = 1  # one-hot encode the current window\n",
     "        # predict_classes() is deprecated; take the argmax of the class probabilities\n",
     "        ypred = index2char[np.argmax(model.predict(Xtest, verbose=0), axis=-1)[0]]\n",
     "        print(ypred, end='')\n",
     "        # Slide the window: drop the first character, append the prediction\n",
     "        test_chars = test_chars[1:] + ypred\n",
     "    print()\n",
     "\n"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "id": "7adf3b55",
   "metadata": {},
   "source": [
     "## 2. LSTM Example\n",
     "\n",
     "Continuing the example above, we swap the model for an LSTM.\n",
     "![class03-2](https://gitee.com/dotzhen/cloud-notes/raw/master/class03-2.PNG)"
   ]
  },
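  {
   "cell_type": "markdown",
   "id": "c4d5e6f7",
   "metadata": {},
   "source": [
     "The jump in recurrent-layer parameters from the SimpleRNN summary above (23,936) to the LSTM summary below (95,744) is exactly a factor of four: an LSTM has four gate weight matrices where a SimpleRNN has one. A quick check with this notebook's sizes (hidden size 128, 58-character vocabulary):\n",
     "\n",
     "```python\n",
     "# Checking the recurrent-layer parameter counts reported by model.summary():\n",
     "# each gate has input weights (n), recurrent weights (h), and a bias.\n",
     "h, n = 128, 58\n",
     "simple_rnn_params = h * (h + n + 1)  # a single gate\n",
     "lstm_params = 4 * h * (h + n + 1)    # input, forget, cell, and output gates\n",
     "print(simple_rnn_params, lstm_params)  # 23936 95744\n",
     "```"
   ]
  },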
  {
   "cell_type": "markdown",
   "id": "9d97f533",
   "metadata": {},
   "source": [
     "The implementation code is as follows:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "id": "9793e5ab",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Model: \"sequential_1\"\n",
      "_________________________________________________________________\n",
      "Layer (type)                 Output Shape              Param #   \n",
      "=================================================================\n",
      "lstm (LSTM)                  (None, 128)               95744     \n",
      "_________________________________________________________________\n",
      "dense_1 (Dense)              (None, 58)                7482      \n",
      "_________________________________________________________________\n",
      "activation_1 (Activation)    (None, 58)                0         \n",
      "=================================================================\n",
      "Total params: 103,226\n",
      "Trainable params: 103,226\n",
      "Non-trainable params: 0\n",
      "_________________________________________________________________\n",
      "====================================================================================================\n",
      "Iteration #: 0\n",
      "787/787 [==============================] - 33s 20ms/step - loss: 2.9450\n",
      "Generating from seed:  (a) distr\n",
      " (a) distr an the the the the the the the the the the the the the the the the the the the the the the the the \n",
      "====================================================================================================\n",
      "Iteration #: 1\n",
      "787/787 [==============================] - 16s 20ms/step - loss: 2.2604\n",
      "Generating from seed:  _your_ op\n",
      " _your_ op and the sall alice and the sall alice and the sall alice and the sall alice and the sall alice and \n",
      "====================================================================================================\n",
      "Iteration #: 2\n",
      "787/787 [==============================] - 16s 20ms/step - loss: 2.0666\n",
      "Generating from seed: everal tho\n",
      "everal thous alice i don't the wored and the wored and the wored and the wored and the wored and the wored and\n",
      "====================================================================================================\n",
      "Iteration #: 3\n",
      "787/787 [==============================] - 15s 19ms/step - loss: 1.9296\n",
      "Generating from seed: r of cours\n",
      "r of course the sare the course the sare the course the sare the course the sare the course the sare the cours\n",
      "====================================================================================================\n",
      "Iteration #: 4\n",
      "787/787 [==============================] - 16s 20ms/step - loss: 1.8260\n",
      "Generating from seed: never make\n",
      "never make the say the seat of the more the seat of the more the seat of the more the seat of the more the sea\n",
      "====================================================================================================\n",
      "Iteration #: 5\n",
      "787/787 [==============================] - 15s 20ms/step - loss: 1.7442\n",
      "Generating from seed: t. mock tu\n",
      "t. mock turtle the project gutenberg-tm erechrouse and the corrous of the project gutenberg-tm erechrouse and \n",
      "====================================================================================================\n",
      "Iteration #: 6\n",
      "787/787 [==============================] - 16s 20ms/step - loss: 1.6768\n",
      "Generating from seed: t did you \n",
      "t did you don't know what the say the say the say the say the say the say the say the say the say the say the \n",
      "====================================================================================================\n",
      "Iteration #: 7\n",
      "787/787 [==============================] - 17s 22ms/step - loss: 1.6194\n",
      "Generating from seed: ily on the\n",
      "ily on the crould a course to seat to the work and the cours in a court in a court in a court in a court in a \n",
      "====================================================================================================\n",
      "Iteration #: 8\n",
      "787/787 [==============================] - 18s 22ms/step - loss: 1.5692\n",
      "Generating from seed:  s. fairba\n",
      " s. fairbal and don't be the botter. alice i don't be the botter. alice i don't be the botter. alice i don't b\n",
      "====================================================================================================\n",
      "Iteration #: 9\n",
      "787/787 [==============================] - 15s 19ms/step - loss: 1.5257\n",
      "Generating from seed:  the thing\n",
      " the things and the formations to the for and the formations to the for and the formations to the for and the \n",
      "====================================================================================================\n",
      "Iteration #: 10\n",
      "787/787 [==============================] - 17s 21ms/step - loss: 1.4850\n",
      "Generating from seed: y had to f\n",
      "y had to fit you don't know what i same to the forth the formouse the from the full project gutenberg-tm elect\n",
      "====================================================================================================\n",
      "Iteration #: 11\n",
      "787/787 [==============================] - 15s 19ms/step - loss: 1.4491\n",
      "Generating from seed: abbit and \n",
      "abbit and all the the botter and all the the botter and all the the botter and all the the botter and all the \n",
      "====================================================================================================\n",
      "Iteration #: 12\n",
      "787/787 [==============================] - 16s 20ms/step - loss: 1.4155\n",
      "Generating from seed: schief, or\n",
      "schief, or comated the work in the don't be all the bect if you do not to the books and the dormouse in the do\n",
      "====================================================================================================\n",
      "Iteration #: 13\n",
      "787/787 [==============================] - 15s 19ms/step - loss: 1.3848\n",
      "Generating from seed:  flamingoe\n",
      " flamingoes the things and the the foon at the the foon at the the foon at the the foon at the the foon at the\n",
      "====================================================================================================\n",
      "Iteration #: 14\n",
      "787/787 [==============================] - 16s 20ms/step - loss: 1.3557\n",
      "Generating from seed: duchess! c\n",
      "duchess! carroll have the thire what you may be a little through the groph the groph the groph the groph the g\n",
      "====================================================================================================\n",
      "Iteration #: 15\n",
      "787/787 [==============================] - 15s 19ms/step - loss: 1.3284\n",
      "Generating from seed:  often see\n",
      " often see the words.  the project gutenberg-tm electronic works on the states and the court. alice i don't kn\n",
      "====================================================================================================\n",
      "Iteration #: 16\n",
      "787/787 [==============================] - 15s 19ms/step - loss: 1.3039\n",
      "Generating from seed: e red quee\n",
      "e red queen i have to got you know you know you know you know you know you know you know you know you know you\n",
      "====================================================================================================\n",
      "Iteration #: 17\n",
      "787/787 [==============================] - 15s 19ms/step - loss: 1.2798\n",
      "Generating from seed: looking at\n",
      "looking at in a was and the court in a can you do not with the project gutenberg-tm electronic work in a can y\n",
      "====================================================================================================\n",
      "Iteration #: 18\n",
      "787/787 [==============================] - 15s 20ms/step - loss: 1.2567\n",
      "Generating from seed: e. [_vanis\n",
      "e. [_vanisher with the dormouse the work on a little better with the dormouse the work on a little better with\n",
      "====================================================================================================\n",
      "Iteration #: 19\n",
      "787/787 [==============================] - 16s 20ms/step - loss: 1.2355\n",
      "Generating from seed: t gutenber\n",
      "t gutenberg-tm electronic works in the from doesting the project gutenberg-tm electronic works in the from doe\n"
     ]
    }
   ],
   "source": [
     "model1 = Sequential()\n",
     "model1.add(LSTM(HIDDEN_SIZE, input_shape=(SEQLEN, nb_chars), unroll=True))\n",
     "model1.add(Dense(nb_chars))\n",
     "model1.add(Activation('softmax'))\n",
     "\n",
     "# Compile the model\n",
     "model1.compile(loss='categorical_crossentropy', optimizer='rmsprop')\n",
     "model1.summary()\n",
     "\n",
     "for iteration in range(NUM_ITERATIONS):\n",
     "    print('=' * 100)\n",
     "    print('Iteration #: %d' % (iteration))\n",
     "    # Train for one epoch per iteration\n",
     "    model1.fit(X, Y, batch_size=BATCH_SIZE, epochs=NUM_EPOCH_PER_ITERATIONS)\n",
     "    # Generate text from a randomly chosen seed to inspect progress\n",
     "    test_idx = np.random.randint(len(input_chars))\n",
     "    test_chars = input_chars[test_idx]\n",
     "    print('Generating from seed: %s' % (test_chars))\n",
     "    print(test_chars, end='')  # print without a newline\n",
     "    for i in range(NUM_PREDS_PER_EPOCH):\n",
     "        Xtest = np.zeros((1, SEQLEN, nb_chars))\n",
     "        for j, ch in enumerate(test_chars):\n",
     "            Xtest[0, j, char2index[ch]] = 1  # one-hot encode the current window\n",
     "        # predict_classes() is deprecated; take the argmax of the class probabilities\n",
     "        ypred = index2char[np.argmax(model1.predict(Xtest, verbose=0), axis=-1)[0]]\n",
     "        print(ypred, end='')\n",
     "        # Slide the window: drop the first character, append the prediction\n",
     "        test_chars = test_chars[1:] + ypred\n",
     "    print()"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.8.8"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
