{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "a7861fa7",
   "metadata": {},
   "source": [
     "# TensorFlow Engineer Workplace Practical Skills: Lesson 1 Written Assignment\n",
     "Student ID: 114764\n",
     "\n",
     "**Assignment:**  \n",
     "1. Install TensorFlow  \n",
     "2. Run the code of the second and third examples from the lesson and upload screenshots"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "4ae845b6",
   "metadata": {},
   "source": [
     "## 1. Assignment 1\n",
     "Install TensorFlow"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "b39e5642",
   "metadata": {},
   "source": [
     "Multiple conda virtual environments are set up on this machine, each with a different version of TensorFlow:"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "9ac4d9d9",
   "metadata": {},
   "source": [
    "![tensorflow01-2](https://gitee.com/dotzhen/cloud-notes/raw/master/tensorflow01-2.png)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "039b009f",
   "metadata": {},
   "source": [
     "* base environment: TensorFlow 2.7.0 (CPU build)\n",
     "* tensorflow environment: TensorFlow 2.5.0 (GPU build)\n",
     "* tf19 environment: TensorFlow 1.9.0 (CPU build)"
   ]
  },
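  {
   "cell_type": "markdown",
   "id": "b2f4a6c8",
   "metadata": {},
   "source": [
    "A quick way to confirm what each environment provides is to query it from Python. The cell below is a minimal sketch (the `env_report` helper is illustrative, not part of the course code): it reports the interpreter version and, if TensorFlow is importable in the active environment, its version."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "c3a5b7d9",
   "metadata": {},
   "outputs": [],
   "source": [
    "import importlib.util\n",
    "import sys\n",
    "\n",
    "def env_report():\n",
    "    # Report the interpreter version and, if importable, the TensorFlow version\n",
    "    info = {'python': sys.version.split()[0], 'tensorflow': None}\n",
    "    if importlib.util.find_spec('tensorflow') is not None:\n",
    "        import tensorflow as tf\n",
    "        info['tensorflow'] = tf.__version__\n",
    "    return info\n",
    "\n",
    "print(env_report())"
   ]
  },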
  {
   "cell_type": "markdown",
   "id": "3f741a2b",
   "metadata": {},
   "source": [
    "![tensorflow01-3](https://gitee.com/dotzhen/cloud-notes/raw/master/tensorflow01-3.png)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c14e4315",
   "metadata": {},
   "source": [
     "## 2. Assignment 2\n",
     "Run the code of the second and third examples from the lesson and upload screenshots"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "027a6625",
   "metadata": {},
   "source": [
     "### 2.1 Running the second example"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d168d55f",
   "metadata": {},
   "source": [
     "#### 2.1.1 Running on TensorFlow 1.x\n",
     "Screenshot of the run on TensorFlow 1.9 below. Training has reached step 10800, and accuracy on the validation set is around 0.727."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "11c71229",
   "metadata": {},
   "source": [
    "![tensorflow01-1](https://gitee.com/dotzhen/cloud-notes/raw/master/tensorflow01-1.png)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "4cd64c1b",
   "metadata": {},
   "source": [
     "#### 2.1.2 Running on TensorFlow 2.x\n",
     "The code was modified to run on TensorFlow 2.x, accelerated with the GPU build.  \n",
     "The updated code (run in a Jupyter notebook):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "id": "aaf6d14a",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Loading data...\n",
      "Vocabulary Size: 18764\n",
      "Train/Dev split: 9596/1066\n",
      "Model: \"model_2\"\n",
      "__________________________________________________________________________________________________\n",
      "Layer (type)                    Output Shape         Param #     Connected to                     \n",
      "==================================================================================================\n",
      "input_3 (InputLayer)            [(None, 56)]         0                                            \n",
      "__________________________________________________________________________________________________\n",
      "embedding_2 (Embedding)         (None, 56, 128)      2401920     input_3[0][0]                    \n",
      "__________________________________________________________________________________________________\n",
      "tf.expand_dims_2 (TFOpLambda)   (None, 56, 128, 1)   0           embedding_2[0][0]                \n",
      "__________________________________________________________________________________________________\n",
      "conv2d_12 (Conv2D)              (None, 54, 1, 128)   49280       tf.expand_dims_2[0][0]           \n",
      "__________________________________________________________________________________________________\n",
      "conv2d_13 (Conv2D)              (None, 53, 1, 128)   65664       tf.expand_dims_2[0][0]           \n",
      "__________________________________________________________________________________________________\n",
      "conv2d_14 (Conv2D)              (None, 52, 1, 128)   82048       tf.expand_dims_2[0][0]           \n",
      "__________________________________________________________________________________________________\n",
      "max_pooling2d_6 (MaxPooling2D)  (None, 1, 1, 128)    0           conv2d_12[0][0]                  \n",
      "__________________________________________________________________________________________________\n",
      "max_pooling2d_7 (MaxPooling2D)  (None, 1, 1, 128)    0           conv2d_13[0][0]                  \n",
      "__________________________________________________________________________________________________\n",
      "max_pooling2d_8 (MaxPooling2D)  (None, 1, 1, 128)    0           conv2d_14[0][0]                  \n",
      "__________________________________________________________________________________________________\n",
      "concatenate (Concatenate)       (None, 1, 1, 384)    0           max_pooling2d_6[0][0]            \n",
      "                                                                 max_pooling2d_7[0][0]            \n",
      "                                                                 max_pooling2d_8[0][0]            \n",
      "__________________________________________________________________________________________________\n",
      "flatten_2 (Flatten)             (None, 384)          0           concatenate[0][0]                \n",
      "__________________________________________________________________________________________________\n",
      "dropout_4 (Dropout)             (None, 384)          0           flatten_2[0][0]                  \n",
      "__________________________________________________________________________________________________\n",
      "dense_6 (Dense)                 (None, 2)            770         dropout_4[0][0]                  \n",
      "==================================================================================================\n",
      "Total params: 2,599,682\n",
      "Trainable params: 2,599,682\n",
      "Non-trainable params: 0\n",
      "__________________________________________________________________________________________________\n",
      "Epoch 1/10\n",
      "10/10 [==============================] - 2s 112ms/step - loss: 0.6895 - accuracy: 0.5634 - val_loss: 0.6838 - val_accuracy: 0.5966\n",
      "Epoch 2/10\n",
      "10/10 [==============================] - 1s 79ms/step - loss: 0.6579 - accuracy: 0.6591 - val_loss: 0.6458 - val_accuracy: 0.6360\n",
      "Epoch 3/10\n",
      "10/10 [==============================] - 1s 78ms/step - loss: 0.5648 - accuracy: 0.7500 - val_loss: 0.5775 - val_accuracy: 0.6876\n",
      "Epoch 4/10\n",
      "10/10 [==============================] - 1s 78ms/step - loss: 0.4129 - accuracy: 0.8364 - val_loss: 0.5204 - val_accuracy: 0.7242\n",
      "Epoch 5/10\n",
      "10/10 [==============================] - 1s 78ms/step - loss: 0.2767 - accuracy: 0.8952 - val_loss: 0.5128 - val_accuracy: 0.7552\n",
      "Epoch 6/10\n",
      "10/10 [==============================] - 1s 78ms/step - loss: 0.1787 - accuracy: 0.9356 - val_loss: 0.5584 - val_accuracy: 0.7430\n",
      "Epoch 7/10\n",
      "10/10 [==============================] - 1s 70ms/step - loss: 0.1109 - accuracy: 0.9656 - val_loss: 0.6204 - val_accuracy: 0.7448\n",
      "Epoch 8/10\n",
      "10/10 [==============================] - 1s 72ms/step - loss: 0.0700 - accuracy: 0.9810 - val_loss: 0.6895 - val_accuracy: 0.7420\n",
      "Epoch 9/10\n",
      "10/10 [==============================] - 1s 71ms/step - loss: 0.0439 - accuracy: 0.9898 - val_loss: 0.7533 - val_accuracy: 0.7420\n",
      "Epoch 10/10\n",
      "10/10 [==============================] - 1s 72ms/step - loss: 0.0286 - accuracy: 0.9948 - val_loss: 0.8158 - val_accuracy: 0.7448\n"
     ]
    }
   ],
   "source": [
    "import tensorflow as tf\n",
    "import numpy as np\n",
    "import re\n",
    "import os\n",
    "import time\n",
    "import datetime\n",
    "\n",
    "def clean_str(string):\n",
    "    \"\"\"\n",
    "    Tokenization/string cleaning for all datasets except for SST.\n",
    "    Original taken from https://github.com/yoonkim/CNN_sentence/blob/master/process_data.py\n",
    "    \"\"\"\n",
    "    string = re.sub(r\"[^A-Za-z0-9(),!?\\'\\`]\", \" \", string)\n",
    "    string = re.sub(r\"\\'s\", \" \\'s\", string)\n",
    "    string = re.sub(r\"\\'ve\", \" \\'ve\", string)\n",
    "    string = re.sub(r\"n\\'t\", \" n\\'t\", string)\n",
    "    string = re.sub(r\"\\'re\", \" \\'re\", string)\n",
    "    string = re.sub(r\"\\'d\", \" \\'d\", string)\n",
    "    string = re.sub(r\"\\'ll\", \" \\'ll\", string)\n",
    "    string = re.sub(r\",\", \" , \", string)\n",
    "    string = re.sub(r\"!\", \" ! \", string)\n",
    "    string = re.sub(r\"\\(\", \" \\( \", string)\n",
    "    string = re.sub(r\"\\)\", \" \\) \", string)\n",
    "    string = re.sub(r\"\\?\", \" \\? \", string)\n",
    "    string = re.sub(r\"\\s{2,}\", \" \", string)\n",
    "    return string.strip().lower()\n",
    "\n",
    "\n",
    "def load_data_and_labels(positive_data_file, negative_data_file):\n",
    "    \"\"\"\n",
    "    Loads MR polarity data from files, splits the data into words and generates labels.\n",
    "    Returns split sentences and labels.\n",
    "    \"\"\"\n",
    "    # Load data from files\n",
    "    positive_examples = list(open(positive_data_file, \"r\", encoding='utf-8').readlines())\n",
    "    positive_examples = [s.strip() for s in positive_examples]\n",
    "    negative_examples = list(open(negative_data_file, \"r\", encoding='utf-8').readlines())\n",
    "    negative_examples = [s.strip() for s in negative_examples]\n",
    "    # Split by words\n",
    "    x_text = positive_examples + negative_examples\n",
    "    x_text = [clean_str(sent) for sent in x_text]\n",
    "    # Generate labels\n",
    "    positive_labels = [[0, 1] for _ in positive_examples]\n",
    "    negative_labels = [[1, 0] for _ in negative_examples]\n",
    "    y = np.concatenate([positive_labels, negative_labels], 0)\n",
    "    return [x_text, y]\n",
    "\n",
    "\n",
    "def batch_iter(data, batch_size, num_epochs, shuffle=True):\n",
    "    \"\"\"\n",
    "    Generates a batch iterator for a dataset.\n",
    "    \"\"\"\n",
    "    data = np.array(data)\n",
    "    data_size = len(data)\n",
    "    num_batches_per_epoch = int((len(data)-1)/batch_size) + 1\n",
    "    for epoch in range(num_epochs):\n",
    "        # Shuffle the data at each epoch\n",
    "        if shuffle:\n",
    "            shuffle_indices = np.random.permutation(np.arange(data_size))\n",
    "            shuffled_data = data[shuffle_indices]\n",
    "        else:\n",
    "            shuffled_data = data\n",
    "        for batch_num in range(num_batches_per_epoch):\n",
    "            start_index = batch_num * batch_size\n",
    "            end_index = min((batch_num + 1) * batch_size, data_size)\n",
    "            yield shuffled_data[start_index:end_index]\n",
    "\n",
     "\n",
    "class TextCNN(object):\n",
    "    \"\"\"\n",
    "    A CNN for text classification.\n",
    "    Uses an embedding layer, followed by a convolutional, max-pooling and softmax layer.\n",
    "    \"\"\"\n",
    "    def __init__(\n",
    "      self, sequence_length, num_classes, vocab_size,\n",
    "      embedding_size, filter_sizes, num_filters, l2_reg_lambda=0.0):\n",
     "        input_shape = (sequence_length,)\n",
    "        inputs=tf.keras.Input(shape=input_shape)\n",
    "        x=tf.keras.layers.Embedding(vocab_size,embedding_size,input_length=sequence_length)(inputs)\n",
    "        x = tf.expand_dims(x,-1)\n",
    "        pooled_outputs=[]\n",
    "        for i, filter_size in enumerate(filter_sizes):\n",
    "            xt = tf.keras.layers.Conv2D(num_filters,kernel_size=(filter_size,embedding_size),\n",
    "                                       padding='valid',use_bias=True,activation='relu')(x)\n",
    "            xt = tf.keras.layers.MaxPooling2D(pool_size=(sequence_length-filter_size+1,1),\n",
    "                                             strides=(1,1),padding='valid')(xt)\n",
    "            pooled_outputs.append(xt)\n",
    "        x = tf.keras.layers.Concatenate(axis=3)(pooled_outputs)\n",
    "        x = tf.keras.layers.Flatten()(x)\n",
     "        # Keras Dropout takes the drop rate, not the keep probability (both are 0.5 here)\n",
     "        x = tf.keras.layers.Dropout(1 - FLAGS.dropout_keep_prob)(x)\n",
    "        outputs= tf.keras.layers.Dense(num_classes,activation='softmax')(x)\n",
    "        \n",
    "        self.model = tf.keras.Model(inputs=inputs, outputs=outputs)\n",
    "        self.model.summary()\n",
    "\n",
    "class flags:\n",
    "    def __init__(self):\n",
    "        self.dev_sample_percentage=0.1\n",
    "        self.positive_data_file=\"./data/rt-polaritydata/rt-polarity.pos\"\n",
    "        self.negative_data_file=\"./data/rt-polaritydata/rt-polarity.neg\"\n",
    "        self.embedding_dim=128\n",
    "        self.filter_sizes=\"3,4,5\"\n",
    "        self.num_filters=128\n",
    "        self.dropout_keep_prob=0.5\n",
    "        self.l2_reg_lambda=0.0\n",
    "        self.batch_size=512\n",
    "        self.num_epochs=200\n",
    "        self.evaluate_every=100\n",
    "        self.checkpoint_every=100\n",
    "        self.num_checkpoints=5\n",
    "        self.allow_soft_placement = True\n",
    "        self.log_device_placement=False\n",
    "        \n",
    "FLAGS = flags()\n",
    "\n",
    "def preprocess():\n",
    "    # Data Preparation\n",
    "    # ==================================================\n",
    "\n",
    "    # Load data\n",
    "    print(\"Loading data...\")\n",
    "    x_text, y = load_data_and_labels(FLAGS.positive_data_file, FLAGS.negative_data_file)\n",
    "\n",
    "    # Build vocabulary\n",
    "    max_document_length = max([len(x.split(\" \")) for x in x_text])\n",
    "    vocab_processor = tf.keras.preprocessing.text.Tokenizer(filters='')\n",
    "    vocab_processor.fit_on_texts(x_text)\n",
    "    tensorr = vocab_processor.texts_to_sequences(x_text)\n",
    "    tensorr = tf.keras.preprocessing.sequence.pad_sequences(tensorr,\n",
    "                                                           maxlen=max_document_length,\n",
    "                                                           padding='post',truncating='post')\n",
    "    x = np.array(tensorr)\n",
    "\n",
    "    # Randomly shuffle data\n",
    "    np.random.seed(10)\n",
    "    shuffle_indices = np.random.permutation(np.arange(len(y)))\n",
    "    x_shuffled = x[shuffle_indices]\n",
    "    y_shuffled = y[shuffle_indices]\n",
    "\n",
    "    # Split train/test set\n",
    "    # TODO: This is very crude, should use cross-validation\n",
    "    dev_sample_index = -1 * int(FLAGS.dev_sample_percentage * float(len(y)))\n",
    "    x_train, x_dev = x_shuffled[:dev_sample_index], x_shuffled[dev_sample_index:]\n",
    "    y_train, y_dev = y_shuffled[:dev_sample_index], y_shuffled[dev_sample_index:]\n",
    "\n",
    "    del x, y, x_shuffled, y_shuffled\n",
    "\n",
    "    print(\"Vocabulary Size: {:d}\".format(len(vocab_processor.word_counts)))\n",
    "    print(\"Train/Dev split: {:d}/{:d}\".format(len(y_train), len(y_dev)))\n",
    "    return x_train, y_train, vocab_processor, x_dev, y_dev\n",
    "\n",
    "def train(x_train, y_train, vocab_processor, x_dev, y_dev):\n",
    "    # Training\n",
    "    # ==================================================\n",
    "    cnn = TextCNN(sequence_length=x_train.shape[1],\n",
    "                  num_classes=y_train.shape[1],\n",
    "                  vocab_size=len(vocab_processor.word_counts)+1,\n",
    "                  embedding_size=FLAGS.embedding_dim,\n",
    "                  filter_sizes=list(map(int,FLAGS.filter_sizes.split(\",\"))),\n",
    "                  num_filters=FLAGS.num_filters,\n",
    "                  l2_reg_lambda=FLAGS.l2_reg_lambda)\n",
    "    cnn.model.compile(loss=\"categorical_crossentropy\",optimizer='adam',metrics=['accuracy'])\n",
    "    cnn.model.fit(x_train,y_train,batch_size=1024,epochs=10,validation_data=(x_dev,y_dev))\n",
    "    cnn.model.save('txt_classification.h5')\n",
    "\n",
    "x_train,y_train,vocab_processor,x_dev,y_dev = preprocess()\n",
    "train(x_train,y_train,vocab_processor,x_dev,y_dev)"
   ]
  },
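  {
   "cell_type": "markdown",
   "id": "d4a6b8c1",
   "metadata": {},
   "source": [
    "The `batch_iter` generator defined above is carried over from the TensorFlow 1.x script and is not used by the Keras `fit()` path. As a quick sanity check, the cell below repeats the generator (so the cell is self-contained) and runs it on toy data with `shuffle=False`, showing that the last batch of an epoch may be smaller than `batch_size`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "e5b7c9d2",
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "def batch_iter(data, batch_size, num_epochs, shuffle=True):\n",
    "    # Same generator as above: yields batch_size-sized slices, reshuffling\n",
    "    # once per epoch; the final batch of an epoch may be smaller\n",
    "    data = np.array(data)\n",
    "    data_size = len(data)\n",
    "    num_batches_per_epoch = int((len(data) - 1) / batch_size) + 1\n",
    "    for epoch in range(num_epochs):\n",
    "        if shuffle:\n",
    "            shuffle_indices = np.random.permutation(np.arange(data_size))\n",
    "            shuffled_data = data[shuffle_indices]\n",
    "        else:\n",
    "            shuffled_data = data\n",
    "        for batch_num in range(num_batches_per_epoch):\n",
    "            start_index = batch_num * batch_size\n",
    "            end_index = min((batch_num + 1) * batch_size, data_size)\n",
    "            yield shuffled_data[start_index:end_index]\n",
    "\n",
    "batches = list(batch_iter(np.arange(10), batch_size=4, num_epochs=1, shuffle=False))\n",
    "print([len(b) for b in batches])  # [4, 4, 2]"
   ]
  },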
  {
   "cell_type": "markdown",
   "id": "1a9a80a3",
   "metadata": {},
   "source": [
     "### 2.2 Running the third example\n",
     "#### 2.2.1 Running on TensorFlow 1.x\n",
     "Screenshot below. After updating the Python 2.x print statements, the original code runs under Python 3.6."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "99f10039",
   "metadata": {},
   "source": [
    "![tensorflow01-4](https://gitee.com/dotzhen/cloud-notes/raw/master/tensorflow01-4.png)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "418ae89f",
   "metadata": {},
   "source": [
     "#### 2.2.2 Running on TensorFlow 2.x\n",
     "The modified code:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "id": "17652bd4",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
       "Loading data...\n",
       "Loading embedding dictionary and matrix\n",
       "Processing documents\n",
       "Building train/dev tensors\n",
       "Loading embedding dictionary and matrix\n",
       "Processing documents\n",
       "Building train/dev tensors\n",
       "Got 120000-example doc_train and label_train\n",
       "Got 9600-example doc_dev and label_dev\n",
      "Model: \"model_1\"\n",
      "_________________________________________________________________\n",
      "Layer (type)                 Output Shape              Param #   \n",
      "=================================================================\n",
      "input_2 (InputLayer)         [(None, 1014)]            0         \n",
      "_________________________________________________________________\n",
      "embedding_1 (Embedding)      (None, 1014, 69)          4761      \n",
      "_________________________________________________________________\n",
      "tf.expand_dims_1 (TFOpLambda (None, 1014, 69, 1)       0         \n",
      "_________________________________________________________________\n",
      "conv2d_6 (Conv2D)            (None, 1008, 1, 256)      123904    \n",
      "_________________________________________________________________\n",
      "max_pooling2d_3 (MaxPooling2 (None, 336, 1, 256)       0         \n",
      "_________________________________________________________________\n",
      "tf.compat.v1.transpose_6 (TF (None, 336, 256, 1)       0         \n",
      "_________________________________________________________________\n",
      "conv2d_7 (Conv2D)            (None, 330, 1, 256)       459008    \n",
      "_________________________________________________________________\n",
      "max_pooling2d_4 (MaxPooling2 (None, 110, 1, 256)       0         \n",
      "_________________________________________________________________\n",
      "tf.compat.v1.transpose_7 (TF (None, 110, 256, 1)       0         \n",
      "_________________________________________________________________\n",
      "conv2d_8 (Conv2D)            (None, 108, 1, 256)       196864    \n",
      "_________________________________________________________________\n",
      "tf.compat.v1.transpose_8 (TF (None, 108, 256, 1)       0         \n",
      "_________________________________________________________________\n",
      "conv2d_9 (Conv2D)            (None, 106, 1, 256)       196864    \n",
      "_________________________________________________________________\n",
      "tf.compat.v1.transpose_9 (TF (None, 106, 256, 1)       0         \n",
      "_________________________________________________________________\n",
      "conv2d_10 (Conv2D)           (None, 104, 1, 256)       196864    \n",
      "_________________________________________________________________\n",
      "tf.compat.v1.transpose_10 (T (None, 104, 256, 1)       0         \n",
      "_________________________________________________________________\n",
      "conv2d_11 (Conv2D)           (None, 102, 1, 256)       196864    \n",
      "_________________________________________________________________\n",
      "max_pooling2d_5 (MaxPooling2 (None, 34, 1, 256)        0         \n",
      "_________________________________________________________________\n",
      "tf.compat.v1.transpose_11 (T (None, 34, 256, 1)        0         \n",
      "_________________________________________________________________\n",
      "flatten_1 (Flatten)          (None, 8704)              0         \n",
      "_________________________________________________________________\n",
      "dense_3 (Dense)              (None, 1024)              8913920   \n",
      "_________________________________________________________________\n",
      "dropout_2 (Dropout)          (None, 1024)              0         \n",
      "_________________________________________________________________\n",
      "dense_4 (Dense)              (None, 1024)              1049600   \n",
      "_________________________________________________________________\n",
      "dropout_3 (Dropout)          (None, 1024)              0         \n",
      "_________________________________________________________________\n",
      "dense_5 (Dense)              (None, 4)                 4100      \n",
      "=================================================================\n",
      "Total params: 11,342,749\n",
      "Trainable params: 11,342,749\n",
      "Non-trainable params: 0\n",
      "_________________________________________________________________\n",
      "Epoch 1/10\n",
      "59/59 [==============================] - 116s 2s/step - loss: 1.3864 - accuracy: 0.2539 - val_loss: 1.3863 - val_accuracy: 0.2500\n",
      "Epoch 2/10\n",
      "59/59 [==============================] - 142s 2s/step - loss: 1.3864 - accuracy: 0.2492 - val_loss: 1.3863 - val_accuracy: 0.2500\n",
      "Epoch 3/10\n",
      "59/59 [==============================] - 142s 2s/step - loss: 1.3864 - accuracy: 0.2481 - val_loss: 1.3863 - val_accuracy: 0.2500\n",
      "Epoch 4/10\n",
      "59/59 [==============================] - 142s 2s/step - loss: 1.3864 - accuracy: 0.2486 - val_loss: 1.3863 - val_accuracy: 0.2500\n",
      "Epoch 5/10\n",
      "59/59 [==============================] - 142s 2s/step - loss: 1.3864 - accuracy: 0.2485 - val_loss: 1.3863 - val_accuracy: 0.2500\n",
      "Epoch 6/10\n",
      "59/59 [==============================] - 143s 2s/step - loss: 1.3863 - accuracy: 0.2476 - val_loss: 1.3863 - val_accuracy: 0.2500\n",
      "Epoch 7/10\n",
      "59/59 [==============================] - 143s 2s/step - loss: 1.3863 - accuracy: 0.2494 - val_loss: 1.3863 - val_accuracy: 0.2500\n",
      "Epoch 8/10\n",
      "59/59 [==============================] - 142s 2s/step - loss: 1.3863 - accuracy: 0.2496 - val_loss: 1.3863 - val_accuracy: 0.2500\n",
      "Epoch 9/10\n",
      "59/59 [==============================] - 142s 2s/step - loss: 1.3863 - accuracy: 0.2498 - val_loss: 1.3863 - val_accuracy: 0.2500\n",
      "Epoch 10/10\n",
      "59/59 [==============================] - 142s 2s/step - loss: 1.3863 - accuracy: 0.2479 - val_loss: 1.3863 - val_accuracy: 0.2500\n"
     ]
    }
   ],
   "source": [
    "import tensorflow as tf\n",
    "import numpy as np\n",
    "from math import sqrt\n",
    "import csv\n",
    "\n",
    "class TrainingConfig(object):\n",
    "    decay_step = 15000\n",
    "    decay_rate = 0.95\n",
    "    epoches = 20 #50000\n",
    "    evaluate_every = 100\n",
    "    checkpoint_every = 100\n",
    "\n",
    "class ModelConfig(object):\n",
    "    conv_layers = [[256, 7, 3],\n",
    "                   [256, 7, 3],\n",
    "                   [256, 3, None],\n",
    "                   [256, 3, None],\n",
    "                   [256, 3, None],\n",
    "                   [256, 3, 3]]\n",
    "\n",
    "    fc_layers = [1024, 1024]\n",
    "    dropout_keep_prob = 0.9\n",
    "    learning_rate = 0.001\n",
    "\n",
    "class Config(object):\n",
    "    alphabet = \"abcdefghijklmnopqrstuvwxyz0123456789-,;.!?:'\\\"/\\\\|_@#$%^&*~`+-=<>()[]{}\"\n",
    "    alphabet_size = len(alphabet)\n",
    "    l0 = 1014\n",
    "    batch_size = 128\n",
    "    nums_classes = 4\n",
    "    example_nums = 120000\n",
    "\n",
    "    train_data_source = 'data/ag_news_csv/train.csv'\n",
    "    dev_data_source = 'data/ag_news_csv/test.csv'\n",
    "\n",
    "    training = TrainingConfig()\n",
    "\n",
    "    model = ModelConfig()\n",
    "\n",
    "\n",
    "config = Config()\n",
    "\n",
    "class Dataset(object):\n",
    "    def __init__(self, data_source):\n",
    "        self.data_source = data_source\n",
    "        self.index_in_epoch = 0\n",
    "        self.alphabet = config.alphabet\n",
    "        self.alphabet_size = config.alphabet_size\n",
    "        self.num_classes = config.nums_classes\n",
    "        self.l0 = config.l0\n",
    "        self.epochs_completed = 0\n",
    "        self.batch_size = config.batch_size\n",
    "        self.example_nums = config.example_nums\n",
    "        self.doc_image = []\n",
    "        self.label_image = []\n",
    "\n",
    "    def next_batch(self):\n",
     "        # Return the next batch from this Dataset\n",
    "        start = self.index_in_epoch\n",
    "        self.index_in_epoch += self.batch_size\n",
    "        if self.index_in_epoch > self.example_nums:\n",
    "            # Finished epoch\n",
    "            self.epochs_completed += 1\n",
    "            # Shuffle the data\n",
    "            perm = np.arange(self.example_nums)\n",
    "            np.random.shuffle(perm)\n",
    "            self.doc_image = self.doc_image[perm]\n",
    "            self.label_image = self.label_image[perm]\n",
    "            # Start next epoch\n",
    "            start = 0\n",
    "            self.index_in_epoch = self.batch_size\n",
    "            assert self.batch_size <= self.example_nums\n",
    "        end = self.index_in_epoch\n",
    "        batch_x = np.array(self.doc_image[start:end], dtype='int64')\n",
    "        batch_y = np.array(self.label_image[start:end], dtype='float32')\n",
    "\n",
    "        return batch_x, batch_y\n",
    "\n",
    "    def dataset_read(self):\n",
     "        # doc_vec holds all the characters of one document; doc_image holds all documents\n",
     "        # label_class is the one-hot class label\n",
     "        # doc_count is the number of rows in the data file\n",
    "        docs = []\n",
    "        label = []\n",
    "        doc_count = 0\n",
    "        csvfile = open(self.data_source, 'r')\n",
    "        for line in csv.reader(csvfile, delimiter=',', quotechar='\"'):\n",
    "            content = line[1] + \". \" + line[2]\n",
    "            docs.append(content.lower())\n",
    "            label.append(line[0])\n",
    "            doc_count = doc_count + 1\n",
    "\n",
     "        # Build the embedding matrix and dictionary\n",
     "        print(\"Loading embedding dictionary and matrix\")\n",
    "        embedding_w, embedding_dic = self.onehot_dic_build()\n",
    "\n",
     "        # Convert every character of every document into its index in the embedding matrix\n",
     "        # (doc_vec holds one document's characters; doc_image holds all documents)\n",
    "        doc_image = []\n",
    "        label_image = []\n",
     "        print(\"Processing documents\")\n",
    "        for i in range(doc_count):\n",
    "            doc_vec = self.doc_process(docs[i], embedding_dic)\n",
    "            doc_image.append(doc_vec)\n",
    "            label_class = np.zeros(self.num_classes, dtype='float32')\n",
    "            label_class[int(label[i]) - 1] = 1\n",
    "            label_image.append(label_class)\n",
    "\n",
    "        del embedding_w, embedding_dic\n",
     "        print(\"Building train/dev tensors\")\n",
    "        self.doc_image = np.asarray(doc_image, dtype='int64')\n",
    "        self.label_image = np.array(label_image, dtype='float32')\n",
    "\n",
    "    def doc_process(self, doc, embedding_dic):\n",
     "        # If a character is in embedding_dic, store its index in doc_vec; otherwise use the UNK index\n",
     "        # Documents shorter than l0 are padded with the UNK value, i.e. 0\n",
    "        min_len = min(self.l0, len(doc))\n",
    "        doc_vec = np.zeros(self.l0, dtype='int64')\n",
    "        for j in range(min_len):\n",
    "            if doc[j] in embedding_dic:\n",
    "                doc_vec[j] = embedding_dic[doc[j]]\n",
    "            else:\n",
    "                doc_vec[j] = embedding_dic['UNK']\n",
    "        return doc_vec\n",
    "\n",
    "    def onehot_dic_build(self):\n",
     "        # Build the one-hot encodings\n",
    "        alphabet = self.alphabet\n",
    "        embedding_dic = {}\n",
    "        embedding_w = []\n",
     "        # Characters outside the alphabet (and padding) map to the all-zero vector\n",
    "        embedding_dic[\"UNK\"] = 0\n",
    "        embedding_w.append(np.zeros(len(alphabet), dtype='float32'))\n",
    "\n",
    "        for i, alpha in enumerate(alphabet):\n",
    "            onehot = np.zeros(len(alphabet), dtype='float32')\n",
    "            embedding_dic[alpha] = i + 1\n",
    "            onehot[i] = 1\n",
    "            embedding_w.append(onehot)\n",
    "\n",
    "        embedding_w = np.array(embedding_w, dtype='float32')\n",
    "        return embedding_w, embedding_dic\n",
    "\n",
    "class CharCNN(object):\n",
    "    \"\"\"\n",
    "    A CNN for text classification.\n",
    "    Uses an embedding layer, followed by a convolutional, max-pooling and softmax layer.\n",
    "    \"\"\"\n",
    "    def __init__(\n",
    "      self, l0, num_classes, conv_layers, fc_layers, l2_reg_lambda=0.0):\n",
     "        input_shape = (l0,)\n",
    "        inputs = tf.keras.Input(shape=input_shape)\n",
     "        # indices run from 0 (UNK) to alphabet_size, so input_dim must be alphabet_size + 1\n",
     "        x = tf.keras.layers.Embedding(config.alphabet_size + 1, config.alphabet_size, input_length=l0)(inputs)\n",
    "        x = tf.expand_dims(x,-1)\n",
    "        \n",
    "        for i, cl in enumerate(conv_layers):\n",
    "            x = tf.keras.layers.Conv2D(cl[0],kernel_size=(cl[1],x.shape[2]),\n",
    "                                      padding='valid',use_bias=True,activation='relu')(x)\n",
     "            if cl[-1] is not None:\n",
    "                x = tf.keras.layers.MaxPooling2D(pool_size=(cl[2],1),strides=(cl[2],1),padding='valid')(x)\n",
    "            x = tf.transpose(x,[0,1,3,2])\n",
    "        x = tf.keras.layers.Flatten()(x)\n",
    "        \n",
    "        for i, fl in enumerate(fc_layers):\n",
    "            x = tf.keras.layers.Dense(fl,activation='relu')(x)\n",
    "            x = tf.keras.layers.Dropout(0.5)(x)\n",
    "        outputs = tf.keras.layers.Dense(num_classes,activation='softmax')(x)\n",
    "        \n",
    "        self.model = tf.keras.Model(inputs=inputs, outputs = outputs)\n",
    "        self.model.summary()\n",
    "\n",
     "# Load data\n",
     "print(\"Loading data...\")\n",
     "# Dataset.dataset_read() reads the data file and fills in doc_image and label_image\n",
     "# Note: embedding_w is vocabulary_size x embedding_size\n",
    "train_data = Dataset(config.train_data_source)\n",
    "dev_data = Dataset(config.dev_data_source)\n",
    "train_data.dataset_read()\n",
    "dev_data.dataset_read()\n",
    "\n",
     "print(\"Got 120000-example doc_train and label_train\")\n",
     "print(\"Got 9600-example doc_dev and label_dev\")\n",
    "\n",
    "x_train = train_data.doc_image\n",
    "y_train = train_data.label_image\n",
    "x_dev = dev_data.doc_image\n",
    "y_dev = dev_data.label_image\n",
    "\n",
    "cnn = CharCNN(\n",
    "            l0=config.l0,\n",
    "            num_classes=config.nums_classes,\n",
    "            conv_layers=config.model.conv_layers,\n",
    "            fc_layers=config.model.fc_layers,\n",
    "            l2_reg_lambda=0)\n",
    "cnn.model.compile(loss = 'categorical_crossentropy', optimizer='adam', metrics=['accuracy'])\n",
    "cnn.model.fit(x_train,y_train, batch_size=2048, epochs=10, validation_data=(x_dev, y_dev))\n",
    "cnn.model.save('char_classification.h5')"
   ]
  },
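  {
   "cell_type": "markdown",
   "id": "f6c8d1e3",
   "metadata": {},
   "source": [
    "In `onehot_dic_build`, index 0 is reserved for `UNK` (an all-zero row) and characters map to `1..len(alphabet)`, so the lookup table has `len(alphabet) + 1` rows. The cell below is a minimal sketch of that scheme with a toy three-letter alphabet instead of the full 69-character one."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "a7d9e2f4",
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "# Toy version of the character one-hot scheme in onehot_dic_build()\n",
    "alphabet = 'abc'\n",
    "embedding_dic = {'UNK': 0}                                # index 0 reserved for unknowns/padding\n",
    "embedding_w = [np.zeros(len(alphabet), dtype='float32')]  # all-zero row for UNK\n",
    "for i, alpha in enumerate(alphabet):\n",
    "    onehot = np.zeros(len(alphabet), dtype='float32')\n",
    "    embedding_dic[alpha] = i + 1                          # characters map to 1..len(alphabet)\n",
    "    onehot[i] = 1\n",
    "    embedding_w.append(onehot)\n",
    "embedding_w = np.array(embedding_w, dtype='float32')\n",
    "print(embedding_dic)        # {'UNK': 0, 'a': 1, 'b': 2, 'c': 3}\n",
    "print(embedding_w.shape)    # (4, 3)"
   ]
  },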
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "4ce01aed",
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python [conda env:tensorflow]",
   "language": "python",
   "name": "conda-env-tensorflow-py"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.8.12"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
