{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "4c40b11c",
   "metadata": {},
   "source": [
    "# TensorFlow Engineer Career Practice Skills, Lesson 3 Written Assignment\n",
    "Student ID: 114764\n",
    "\n",
    "**Assignment:**  \n",
    "There are many open-source chatbot programs available. The task is to find a suitable framework, train a chatbot, and upload screenshots of the process and results."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c0bd9e6c",
   "metadata": {},
   "source": [
    "I found a seq2seq + attention chatbot program based on TensorFlow 2.x online and made some minor modifications and adaptations."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "5e702407",
   "metadata": {},
   "source": [
    "## 1. Model Overview"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "0a8ef2c5",
   "metadata": {},
   "source": [
    "### 1.1 Training Model"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "1ea83371",
   "metadata": {},
   "source": [
    "The chatbot uses a seq2seq + attention neural network model; its internal structure is illustrated in the diagram below:\n",
    "![tf-model1](https://gitee.com/dotzhen/cloud-notes/raw/master/tf-model1.png)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d208fdb7",
   "metadata": {},
   "source": [
    "* The text is first numericalized (each word is represented by an integer id; the corpus used here contains roughly 70,000 distinct words) and then fed into the network through an embedding layer, whose weight matrix is trained together with the rest of the network.  \n",
    "* Each sample is a question, i.e. a piece of text with a maximum length of 10 (maxlen); the corresponding label is also a piece of text.  \n",
    "* The embedding layer uses a dimension of 50 (embedding_dim), which can be adjusted.  \n",
    "* The body of both the encoder and the decoder is an LSTM network with 128 hidden units (hidden_units); both are required to output the full sequence as well as their internal states.  \n",
    "* Besides the encoder's output (the encoded sequence and states), the decoder has its own independent input, namely the label part. The label part has no length limit, which is why the corresponding length dimension in the model definition is None.  \n",
    "* The encoder and decoder outputs are passed into an Attention layer. Because the encoder LSTM outputs the full sequence (one output per word), the Attention layer sees all sequence positions and builds a separate weight for each of them. These weights are learned during training, and differences in attention show up as differences in their magnitudes.\n",
    "* The Attention layer's output is then mapped by a fully connected layer onto vocab_size units; after a softmax activation, the output word can be determined."
   ]
  },
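  {
   "cell_type": "markdown",
   "id": "b2f1a9e3",
   "metadata": {},
   "source": [
    "As a small illustration of the numericalization and padding step described above (a toy vocabulary is used here instead of the real 70,000-word one):\n",
    "```python\n",
    "# Toy vocabulary; ids 0-3 are reserved for the special tokens, as in the real code.\n",
    "vocab2id = {\"<PAD>\": 0, \"<UNK>\": 1, \"<GO>\": 2, \"<EOS>\": 3, \"今天\": 4, \"天气\": 5}\n",
    "maxlen = 10\n",
    "words = \"今天 天气 不错\".split()\n",
    "ids = [vocab2id.get(w, vocab2id[\"<UNK>\"]) for w in words]  # OOV words map to <UNK>\n",
    "padded = ids + [vocab2id[\"<PAD>\"]] * (maxlen - len(ids))   # post-padding, as pad_sequences does\n",
    "print(ids)     # [4, 5, 1]\n",
    "print(padded)  # [4, 5, 1, 0, 0, 0, 0, 0, 0, 0]\n",
    "```"
   ]
  },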
  {
   "cell_type": "markdown",
   "id": "a6f11c18",
   "metadata": {},
   "source": [
    "### 1.2 Inference Model"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f7d68ad3",
   "metadata": {},
   "source": [
    "The way a seq2seq + attention model is used at inference time differs somewhat from other models; the diagram below illustrates this:"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "cdddff41",
   "metadata": {},
   "source": [
    "![tf-model2](https://gitee.com/dotzhen/cloud-notes/raw/master/tf-model2.png)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "26a58ce6",
   "metadata": {},
   "source": [
    "1. First, run the encoder on the input question. Its result has two parts: the full-sequence output and the final states.\n",
    "2. Then the decoder is used. Its input has three parts:\n",
    "  + On the first decoder step, the input token is \\<GO\\>, marking the start of a sentence; on subsequent steps, it is the last output of the decoder's previous full-sequence output (after the dense layer).\n",
    "  + The encoder's full-sequence output.\n",
    "  + On the first decoder step, the encoder's output states, used as the decoder's initial state; on subsequent steps, the decoder's previous states are used to initialize the current step.  \n",
    "  The decoder's output has two parts:\n",
    "  + the full-sequence output; \n",
    "  + the state output.\n",
    "3. If the decoder's output (after the dense layer with softmax) is not \\<EOS\\> (end of sentence), keep using the decoder and repeat step 2.\n",
    "\n",
    "Note that the number of steps executed at inference time is not fixed: decoding continues until \\<EOS\\> is detected."
   ]
  },
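  {
   "cell_type": "markdown",
   "id": "c7d4e8f2",
   "metadata": {},
   "source": [
    "The stop-on-\\<EOS\\> decoding loop in steps 2-3 can be sketched independently of the actual model (step_fn below is a stand-in for one decoder call, not part of the real code):\n",
    "```python\n",
    "def greedy_decode(step_fn, go_id, eos_id, max_steps=10):\n",
    "    # step_fn(prev_id, state) -> (next_id, new_state): one decoder step\n",
    "    out, prev, state = [], go_id, None\n",
    "    for _ in range(max_steps):\n",
    "        prev, state = step_fn(prev, state)\n",
    "        out.append(prev)\n",
    "        if prev == eos_id:  # stop once <EOS> is produced\n",
    "            break\n",
    "    return out\n",
    "\n",
    "# Dummy step function that emits word ids 7, 8 and then <EOS> (id 3)\n",
    "seq = iter([7, 8, 3])\n",
    "print(greedy_decode(lambda p, s: (next(seq), s), go_id=2, eos_id=3))  # [7, 8, 3]\n",
    "```"
   ]
  },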
  {
   "cell_type": "markdown",
   "id": "5aa0fd15",
   "metadata": {},
   "source": [
    "### 1.3 Corpus Screenshot"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "90bcdea4",
   "metadata": {},
   "source": [
    "![tf-3](https://gitee.com/dotzhen/cloud-notes/raw/master/tf-3.png)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "3f0b7042",
   "metadata": {},
   "source": [
    "### 1.4 Training Results Screenshot"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "8e0991ad",
   "metadata": {},
   "source": [
    "![tf-1](https://gitee.com/dotzhen/cloud-notes/raw/master/tf-1.png)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "78f86f87",
   "metadata": {},
   "source": [
    "### 1.5 Prediction Results Screenshot"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "75ec2e45",
   "metadata": {},
   "source": [
    "![tf-2](https://gitee.com/dotzhen/cloud-notes/raw/master/tf-2.png)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "8ff9894b",
   "metadata": {},
   "source": [
    "## Appendix 1: Training Code train.py"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "9f991406",
   "metadata": {},
   "outputs": [],
   "source": [
    "import tensorflow as tf\n",
    "from tensorflow import keras\n",
    "from tensorflow.keras.layers import Input, Embedding, LSTM, Dense, Attention\n",
    "from tensorflow.keras.models import Model\n",
    "\n",
    "\n",
    "class Encoder(keras.Model):\n",
    "    def __init__(self, vocab_size, embedding_dim, hidden_units):\n",
    "        super(Encoder, self).__init__()\n",
    "        # Embedding Layer\n",
    "        self.embedding = Embedding(vocab_size, embedding_dim, mask_zero=True)\n",
    "        # Encode LSTM Layer\n",
    "        self.encoder_lstm = LSTM(hidden_units, return_sequences=True, return_state=True, name=\"encode_lstm\")\n",
    "\n",
    "    def call(self, inputs):\n",
    "        encoder_embed = self.embedding(inputs)\n",
    "        encoder_outputs, state_h, state_c = self.encoder_lstm(encoder_embed)\n",
    "        return encoder_outputs, state_h, state_c\n",
    "\n",
    "\n",
    "class Decoder(keras.Model):\n",
    "    def __init__(self, vocab_size, embedding_dim, hidden_units):\n",
    "        super(Decoder, self).__init__()\n",
    "        # Embedding Layer\n",
    "        self.embedding = Embedding(vocab_size, embedding_dim, mask_zero=True)\n",
    "        # Decode LSTM Layer\n",
    "        self.decoder_lstm = LSTM(hidden_units, return_sequences=True, return_state=True, name=\"decode_lstm\")\n",
    "        # Attention Layer\n",
    "        self.attention = Attention()\n",
    "\n",
    "    def call(self, enc_outputs, dec_inputs, states_inputs):\n",
    "        decoder_embed = self.embedding(dec_inputs)\n",
    "        dec_outputs, dec_state_h, dec_state_c = self.decoder_lstm(decoder_embed, initial_state=states_inputs)\n",
    "        attention_output = self.attention([dec_outputs, enc_outputs])\n",
    "\n",
    "        return attention_output, dec_state_h, dec_state_c\n",
    "\n",
    "def Seq2Seq(maxlen, embedding_dim, hidden_units, vocab_size):\n",
    "    \"\"\"\n",
    "    seq2seq model\n",
    "    \"\"\"\n",
    "    # Input Layer\n",
    "    encoder_inputs = Input(shape=(maxlen,), name=\"encode_input\")\n",
    "    decoder_inputs = Input(shape=(None,), name=\"decode_input\")\n",
    "    # Encoder Layer\n",
    "    encoder = Encoder(vocab_size, embedding_dim, hidden_units)\n",
    "    enc_outputs, enc_state_h, enc_state_c = encoder(encoder_inputs)\n",
    "    dec_states_inputs = [enc_state_h, enc_state_c]\n",
    "    # Decoder Layer\n",
    "    decoder = Decoder(vocab_size, embedding_dim, hidden_units)\n",
    "    attention_output, dec_state_h, dec_state_c = decoder(enc_outputs, decoder_inputs, dec_states_inputs)\n",
    "    # Dense Layer\n",
    "    dense_outputs = Dense(vocab_size, activation='softmax', name=\"dense\")(attention_output)\n",
    "    # seq2seq model\n",
    "    model = Model(inputs=[encoder_inputs, decoder_inputs], outputs=dense_outputs)\n",
    "\n",
    "    return model\n",
    "\n",
    "def read_vocab(vocab_path):\n",
    "    vocab_words = []\n",
    "    with open(vocab_path, \"r\", encoding=\"utf8\") as f:\n",
    "        for line in f:\n",
    "            vocab_words.append(line.strip())\n",
    "    return vocab_words\n",
    "\n",
    "def read_data(data_path):\n",
    "    datas = []\n",
    "    with open(data_path, \"r\", encoding=\"utf8\") as f:\n",
    "        for line in f:\n",
    "            words = line.strip().split()\n",
    "            datas.append(words)\n",
    "    return datas\n",
    "\n",
    "def process_data_index(datas, vocab2id):\n",
    "    data_indexs = []\n",
    "    for words in datas:\n",
    "        line_index = [vocab2id[w] if w in vocab2id else vocab2id[\"<UNK>\"] for w in words]\n",
    "        data_indexs.append(line_index)\n",
    "    return data_indexs\n",
    "\n",
    "def process_input_data(source_data_ids, target_indexs, vocab2id):\n",
    "    source_inputs = []\n",
    "    decoder_inputs, decoder_outputs = [], []\n",
    "    for source, target in zip(source_data_ids, target_indexs):\n",
    "        source_inputs.append([vocab2id[\"<GO>\"]] + source + [vocab2id[\"<EOS>\"]])\n",
    "        decoder_inputs.append([vocab2id[\"<GO>\"]] + target)\n",
    "        decoder_outputs.append(target + [vocab2id[\"<EOS>\"]])\n",
    "    return source_inputs, decoder_inputs, decoder_outputs\n",
    "\n",
    "def train(begin_ep,turns):\n",
    "    vocab_words = read_vocab(\"data/ch_word_vocab.txt\")\n",
    "    special_words = [\"<PAD>\", \"<UNK>\", \"<GO>\", \"<EOS>\"]\n",
    "    vocab_words = special_words + vocab_words\n",
    "    vocab2id = {word: i for i, word in enumerate(vocab_words)}\n",
    "    id2vocab = {i: word for i, word in enumerate(vocab_words)}\n",
    "\n",
    "    num_sample = 10000\n",
    "    source_data = read_data(\"data/ch_source_data_seg.txt\")[:num_sample]\n",
    "    source_data_ids = process_data_index(source_data, vocab2id)\n",
    "    target_data = read_data(\"data/ch_target_data_seg.txt\")[:num_sample]\n",
    "    target_data_ids = process_data_index(target_data, vocab2id)\n",
    "\n",
    "    print(\"vocab test: \", [id2vocab[i] for i in range(10)])\n",
    "    print(\"source test: \", source_data[10])\n",
    "    print(\"source index: \", source_data_ids[10])\n",
    "    print(\"target test: \", target_data[10])\n",
    "    print(\"target index: \", target_data_ids[10])\n",
    "\n",
    "    source_input_ids, target_input_ids, target_output_ids = process_input_data(source_data_ids, target_data_ids,\n",
    "                                                                               vocab2id)\n",
    "    print(\"encoder inputs: \", source_input_ids[:2])\n",
    "    print(\"decoder inputs: \", target_input_ids[:2])\n",
    "    print(\"decoder outputs: \", target_output_ids[:2])\n",
    "\n",
    "    maxlen = 10\n",
    "    source_input_ids = keras.preprocessing.sequence.pad_sequences(source_input_ids, padding='post', maxlen=maxlen)\n",
    "    target_input_ids = keras.preprocessing.sequence.pad_sequences(target_input_ids, padding='post', maxlen=maxlen)\n",
    "    target_output_ids = keras.preprocessing.sequence.pad_sequences(target_output_ids, padding='post', maxlen=maxlen)\n",
    "    print(source_input_ids[:5])\n",
    "    print(target_input_ids[:5])\n",
    "    print(target_output_ids[:5])\n",
    "\n",
    "    embedding_dim = 50\n",
    "    hidden_units = 128\n",
    "    vocab_size = len(vocab2id)\n",
    "    print('vocab_size: ', vocab_size)\n",
    "\n",
    "    model = Seq2Seq(maxlen, embedding_dim, hidden_units, vocab_size)\n",
    "    if begin_ep > 0:\n",
    "        model_file = \"data/seq2seq_attention_weights_\"+str(begin_ep)+\".h5\"\n",
    "        model.load_weights(model_file)\n",
    "    model.summary()\n",
    "\n",
    "    epochs = turns\n",
    "    batch_size = 2048\n",
    "    val_rate = 0.2\n",
    "\n",
    "    loss_fn = keras.losses.SparseCategoricalCrossentropy()\n",
    "    model.compile(loss=loss_fn, optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3))\n",
    "    model.fit([source_input_ids, target_input_ids], target_output_ids,\n",
    "              batch_size=batch_size, epochs=epochs, validation_split=val_rate,)\n",
    "    model_file = \"data/seq2seq_attention_weights_\"+str(epochs+begin_ep)+\".h5\"\n",
    "    model.save_weights(model_file)\n",
    "\n",
    "\n",
    "if __name__ == \"__main__\":\n",
    "    train(0,1000)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "09841905",
   "metadata": {},
   "source": [
    "## Appendix 2: Prediction Code predict.py"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b0d2d24a",
   "metadata": {},
   "outputs": [],
   "source": [
    "import tensorflow as tf\n",
    "from tensorflow import keras\n",
    "from tensorflow.keras.layers import Input, Embedding, LSTM, Dense, Attention\n",
    "from tensorflow.keras.models import Model\n",
    "import numpy as np\n",
    "\n",
    "\n",
    "\n",
    "class Encoder(keras.Model):\n",
    "    def __init__(self, vocab_size, embedding_dim, hidden_units):\n",
    "        super(Encoder, self).__init__()\n",
    "        # Embedding Layer\n",
    "        self.embedding = Embedding(vocab_size, embedding_dim, mask_zero=True)\n",
    "        # Encode LSTM Layer\n",
    "        self.encoder_lstm = LSTM(hidden_units, return_sequences=True, return_state=True, name=\"encode_lstm\")\n",
    "\n",
    "    def call(self, inputs):\n",
    "        encoder_embed = self.embedding(inputs)\n",
    "        encoder_outputs, state_h, state_c = self.encoder_lstm(encoder_embed)\n",
    "        return encoder_outputs, state_h, state_c\n",
    "\n",
    "\n",
    "class Decoder(keras.Model):\n",
    "    def __init__(self, vocab_size, embedding_dim, hidden_units):\n",
    "        super(Decoder, self).__init__()\n",
    "        # Embedding Layer\n",
    "        self.embedding = Embedding(vocab_size, embedding_dim, mask_zero=True)\n",
    "        # Decode LSTM Layer\n",
    "        self.decoder_lstm = LSTM(hidden_units, return_sequences=True, return_state=True, name=\"decode_lstm\")\n",
    "        # Attention Layer\n",
    "        self.attention = Attention()\n",
    "\n",
    "    def call(self, enc_outputs, dec_inputs, states_inputs):\n",
    "        decoder_embed = self.embedding(dec_inputs)\n",
    "        dec_outputs, dec_state_h, dec_state_c = self.decoder_lstm(decoder_embed, initial_state=states_inputs)\n",
    "        attention_output = self.attention([dec_outputs, enc_outputs])\n",
    "\n",
    "        return attention_output, dec_state_h, dec_state_c\n",
    "\n",
    "def Seq2Seq(maxlen, embedding_dim, hidden_units, vocab_size):\n",
    "    \"\"\"\n",
    "    seq2seq model\n",
    "    \"\"\"\n",
    "    # Input Layer\n",
    "    encoder_inputs = Input(shape=(maxlen,), name=\"encode_input\")\n",
    "    decoder_inputs = Input(shape=(None,), name=\"decode_input\")\n",
    "    # Encoder Layer\n",
    "    encoder = Encoder(vocab_size, embedding_dim, hidden_units)\n",
    "    enc_outputs, enc_state_h, enc_state_c = encoder(encoder_inputs)\n",
    "    dec_states_inputs = [enc_state_h, enc_state_c]\n",
    "    # Decoder Layer\n",
    "    decoder = Decoder(vocab_size, embedding_dim, hidden_units)\n",
    "    attention_output, dec_state_h, dec_state_c = decoder(enc_outputs, decoder_inputs, dec_states_inputs)\n",
    "    # Dense Layer\n",
    "    dense_outputs = Dense(vocab_size, activation='softmax', name=\"dense\")(attention_output)\n",
    "    # seq2seq model\n",
    "    model = Model(inputs=[encoder_inputs, decoder_inputs], outputs=dense_outputs)\n",
    "\n",
    "    return model\n",
    "\n",
    "def read_vocab(vocab_path):\n",
    "    vocab_words = []\n",
    "    with open(vocab_path, \"r\", encoding=\"utf8\") as f:\n",
    "        for line in f:\n",
    "            vocab_words.append(line.strip())\n",
    "    return vocab_words\n",
    "\n",
    "maxlen = 10\n",
    "embedding_dim = 50\n",
    "hidden_units = 128\n",
    "vocab_words = read_vocab(\"data/ch_word_vocab.txt\")\n",
    "special_words = [\"<PAD>\", \"<UNK>\", \"<GO>\", \"<EOS>\"]\n",
    "vocab_words = special_words + vocab_words\n",
    "vocab2id = {word: i for i, word in enumerate(vocab_words)}\n",
    "id2vocab = {i: word for i, word in enumerate(vocab_words)}\n",
    "vocab_size = len(vocab2id)\n",
    "\n",
    "model = Seq2Seq(maxlen, embedding_dim, hidden_units, vocab_size)\n",
    "model.load_weights(\"data/seq2seq_attention_weights_1000.h5\")\n",
    "model.summary()\n",
    "\n",
    "\n",
    "def encoder_infer(model):\n",
    "    encoder_model = Model(inputs=model.get_layer('encoder').get_input_at(0),\n",
    "                        outputs=model.get_layer('encoder').get_output_at(0))\n",
    "    return encoder_model\n",
    "\n",
    "encoder_model = encoder_infer(model)\n",
    "encoder_model.summary()\n",
    "\n",
    "\n",
    "def decoder_infer(model, encoder_model):\n",
    "    encoder_output = encoder_model.get_layer('encoder').output[0]\n",
    "    maxlen, hidden_units = encoder_output.shape[1:]\n",
    "\n",
    "    dec_input = model.get_layer('decode_input').input\n",
    "    enc_output = Input(shape=(maxlen, hidden_units), name='enc_output')\n",
    "    dec_input_state_h = Input(shape=(hidden_units,), name='input_state_h')\n",
    "    dec_input_state_c = Input(shape=(hidden_units,), name='input_state_c')\n",
    "    dec_input_states = [dec_input_state_h, dec_input_state_c]\n",
    "\n",
    "    decoder = model.get_layer('decoder')\n",
    "    dec_outputs, out_state_h, out_state_c = decoder(enc_output, dec_input, dec_input_states)\n",
    "    dec_output_states = [out_state_h, out_state_c]\n",
    "\n",
    "    decoder_dense = model.get_layer('dense')\n",
    "    dense_output = decoder_dense(dec_outputs)\n",
    "\n",
    "    decoder_model = Model(inputs=[enc_output, dec_input, dec_input_states],\n",
    "                          outputs=[dense_output] + dec_output_states)\n",
    "    return decoder_model\n",
    "\n",
    "\n",
    "decoder_model = decoder_infer(model, encoder_model)\n",
    "decoder_model.summary()\n",
    "\n",
    "\n",
    "def infer_predict(input_text, encoder_model, decoder_model):\n",
    "    text_words = input_text.split()[:maxlen - 2]  # leave room for <GO> and <EOS>\n",
    "    print(text_words)\n",
    "    input_id = [vocab2id[w] if w in vocab2id else vocab2id[\"<UNK>\"] for w in text_words]\n",
    "    input_id = [vocab2id[\"<GO>\"]] + input_id + [vocab2id[\"<EOS>\"]]\n",
    "    print(input_id)\n",
    "    if len(input_id) < maxlen:\n",
    "        input_id = input_id + [vocab2id[\"<PAD>\"]] * (maxlen - len(input_id))\n",
    "\n",
    "    input_source = np.array([input_id])\n",
    "    input_target = np.array([[vocab2id[\"<GO>\"]]])  # shape (1, 1): a batch of one <GO> token\n",
    "\n",
    "    # Run the encoder once to get its full-sequence output and final states\n",
    "    enc_outputs, enc_state_h, enc_state_c = encoder_model.predict([input_source])\n",
    "    dec_inputs = input_target\n",
    "    dec_states_inputs = [enc_state_h, enc_state_c]\n",
    "\n",
    "    result_id = []\n",
    "    result_text = []\n",
    "    for i in range(maxlen):\n",
    "        # One decoder step\n",
    "        dense_outputs, dec_state_h, dec_state_c = decoder_model.predict([enc_outputs, dec_inputs] + dec_states_inputs)\n",
    "        pred_id = np.argmax(dense_outputs[0][0])\n",
    "        result_id.append(pred_id)\n",
    "        result_text.append(id2vocab[pred_id])\n",
    "        if id2vocab[pred_id] == \"<EOS>\":\n",
    "            break\n",
    "        dec_inputs = np.array([[pred_id]])\n",
    "        dec_states_inputs = [dec_state_h, dec_state_c]\n",
    "    return result_id, result_text\n",
    "\n",
    "if __name__ == \"__main__\":\n",
    "    input_text = \"今天 天气 感觉 如何\"\n",
    "    result_id, result_text = infer_predict(input_text, encoder_model, decoder_model)\n",
    "\n",
    "    print(\"Input: \", input_text)\n",
    "    print(\"Output: \", result_text, result_id)"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.8.8"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
