{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "d75b6308",
   "metadata": {},
   "source": [
    "# TensorFlow Engineer Workplace Practical Skills, Lesson 4 Written Assignment\n",
    "Student ID: 114764\n",
    "\n",
    "**Assignment:**  \n",
    "Try to train a model that recognizes isolated spoken Chinese digits from zero to nine (the data can be downloaded from Baidu Cloud: https://pan.baidu.com/s/1xU1HDjjPzEXzppkIf1Q53Q password: 6dik).\n",
    "Data description: each folder contains recordings of one speaker reading the digits 0–9, 20 files per folder. The digit after the dash indicates which digit was spoken; for example, the file 10015-3a.wav in folder 10015 is a recording of the digit 3. Each speaker reads each digit twice, distinguished by the suffixes a and b."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f0f427c7",
   "metadata": {},
   "source": [
    "**Approach:**  \n",
    "* First, I inspected the dataset. It is quite uniform: every file is about 31 KB. Audio files must be converted to numeric features before a model can classify them. As mentioned in class, converting speech to MFCC vectors is a fairly common approach; when I tried it, each file became a (199, 39) matrix.\n",
    "* If each audio file is vectorized into a (199, 39) matrix, isn't that analogous to a single-channel grayscale image? And if it is an image, we can use a classic image-classification network. For classifying the digits 0–9, the classic choice is the LeNet-5 convolutional network, so I decided to use LeNet-5 as the classifier.\n",
    "* Looking further online, I found another featurization method: filter-bank (fbank) features. Fbank features stay closer to the nature of the audio signal and model the response characteristics of the human ear, whereas the DCT used in MFCC is a linear transform that discards some of the highly nonlinear components of the speech signal. Before deep learning, MFCC with GMM-HMM was the mainstream ASR approach due to algorithmic constraints; with neural networks, which are insensitive to highly correlated features, MFCC is no longer the best choice, and in practice it also performs noticeably worse than fbank. So I decided to use fbank features to vectorize the audio files.\n",
    "* I also noticed there are only 560 samples, which is rather few for training a neural network, especially since I planned to reserve 30% as a test set. Borrowing from image classification, where a dataset can be enlarged by distorting or rotating images, I augmented the audio by adding random noise to the wav files (training set only, never the test set), expanding the training set 100-fold. The noise method I used is quite simple; a more sophisticated noise model, or tuning the noise volume, might yield better-quality samples.\n",
    "* One remaining problem: the fbank features have shape (199, 26), while LeNet-5 expects (28, 28) input. I wondered whether, as with images, different sizes could be rescaled to a common size, which mathematically is just interpolation. I was not sure it would work, but I tried it and the results are acceptable."
   ]
  },
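  {
   "cell_type": "markdown",
   "id": "c4f1a9e0",
   "metadata": {},
   "source": [
    "The interpolation idea above can be sketched without OpenCV. Below is a minimal numpy-only illustration of the bilinear interpolation that cv.resize performs by default, applied to a random stand-in for a real (199, 26) fbank matrix; the helper name bilinear_resize is mine, not a library function:\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "def bilinear_resize(a, out_h, out_w):\n",
    "    # Map each output pixel centre back into input coordinates,\n",
    "    # then blend the four surrounding input values.\n",
    "    in_h, in_w = a.shape\n",
    "    ys = np.clip((np.arange(out_h) + 0.5) * in_h / out_h - 0.5, 0, in_h - 1)\n",
    "    xs = np.clip((np.arange(out_w) + 0.5) * in_w / out_w - 0.5, 0, in_w - 1)\n",
    "    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, in_h - 1)\n",
    "    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, in_w - 1)\n",
    "    wy = (ys - y0)[:, None]; wx = (xs - x0)[None, :]\n",
    "    return (a[np.ix_(y0, x0)] * (1 - wy) * (1 - wx)\n",
    "            + a[np.ix_(y0, x1)] * (1 - wy) * wx\n",
    "            + a[np.ix_(y1, x0)] * wy * (1 - wx)\n",
    "            + a[np.ix_(y1, x1)] * wy * wx)\n",
    "\n",
    "fbank_like = np.random.rand(199, 26)  # stand-in for a real fbank matrix\n",
    "print(bilinear_resize(fbank_like, 28, 28).shape)  # (28, 28)\n",
    "```"
   ]
  },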
  {
   "cell_type": "code",
   "execution_count": 31,
   "id": "b3d32b4f",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2022-01-04T13:56:40.676366Z",
     "start_time": "2022-01-04T13:56:38.500211Z"
    }
   },
   "outputs": [],
   "source": [
    "import tensorflow as tf\n",
    "from tensorflow.keras.layers import Input, Conv2D, Activation, MaxPooling2D, Flatten, Dense, Dropout\n",
    "from tensorflow.keras.models import Model\n",
    "import os\n",
    "from tensorflow.keras.callbacks import TensorBoard\n",
    "from tensorflow.keras.utils import plot_model, to_categorical\n",
    "import scipy.io.wavfile as wav\n",
    "import cv2 as cv\n",
    "from python_speech_features import mfcc, delta, logfbank\n",
    "import numpy as np\n",
    "import random"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "id": "adf97d02",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2022-01-04T12:04:59.053259Z",
     "start_time": "2022-01-04T12:04:59.039249Z"
    }
   },
   "outputs": [],
   "source": [
    "# Utility functions\n",
    "def mkdir(path): # create a directory\n",
    "    folder = os.path.exists(path)\n",
    "    if not folder:  # only create the folder if it does not already exist\n",
    "        os.makedirs(path)  # makedirs also creates any missing parent directories\n",
    "\n",
    "def add_noise(data): # add random noise to an audio signal\n",
    "    wn = np.random.normal(0, 1, len(data))\n",
    "    data_noise = np.where(data != 0.0, data.astype('float64') + 0.02 * wn, 0.0).astype(np.float32)\n",
    "    return data_noise\n",
    "\n",
    "def get_mfcc(data, fs): # extract MFCC features with first- and second-order deltas\n",
    "    wav_feature = mfcc(data, fs)\n",
    "    d_mfcc_feat = delta(wav_feature, 1)\n",
    "    d_mfcc_feat2 = delta(wav_feature, 2)\n",
    "    feature = np.hstack((wav_feature, d_mfcc_feat, d_mfcc_feat2))\n",
    "    return feature\n",
    "\n",
    "def get_fbank(data, fs): # extract log filter-bank (fbank) features\n",
    "    wav_feature = logfbank(data, fs)\n",
    "    return wav_feature\n",
    "\n",
    "def norm_data(data): # min-max normalize to [0, 1]\n",
    "    mx = np.max(data)\n",
    "    mn = np.min(data)\n",
    "    return (data-mn)/(mx-mn)"
   ]
  },
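  {
   "cell_type": "markdown",
   "id": "b7d20c5e",
   "metadata": {},
   "source": [
    "A quick sanity check of the two key helpers on synthetic data (restated here so the snippet runs standalone): add_noise leaves exact zeros untouched, so silent stretches stay silent, and norm_data scales any array into [0, 1]:\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "def add_noise(data):  # restated from the helper cell above\n",
    "    wn = np.random.normal(0, 1, len(data))\n",
    "    return np.where(data != 0.0, data.astype(np.float64) + 0.02 * wn, 0.0).astype(np.float32)\n",
    "\n",
    "def norm_data(data):  # restated from the helper cell above\n",
    "    mn, mx = np.min(data), np.max(data)\n",
    "    return (data - mn) / (mx - mn)\n",
    "\n",
    "signal = np.array([0.0, 1.0, -1.0, 0.5, 0.0], dtype=np.float32)\n",
    "noisy = add_noise(signal)\n",
    "print(noisy[0], noisy[4])  # 0.0 0.0 - zeros are preserved\n",
    "print(norm_data(np.array([2.0, 4.0, 6.0])))  # [0.  0.5 1. ]\n",
    "```"
   ]
  },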
  {
   "cell_type": "code",
   "execution_count": 3,
   "id": "93f74c4c",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2022-01-04T12:04:59.084655Z",
     "start_time": "2022-01-04T12:04:59.054260Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "total files:  560\n"
     ]
    }
   ],
   "source": [
    "# Check file sizes and count the total number of samples\n",
    "fc = 0\n",
    "for filepath, dirnames, filenames in os.walk(r'data'):\n",
    "    for filename in filenames:\n",
    "        fc += 1\n",
    "        fullpath = os.path.join(filepath, filename)\n",
    "        file_stats = os.stat(fullpath)\n",
    "        if file_stats.st_size != 32044:\n",
    "            print(\"Not the size: \", fullpath) # never triggers: all files have the same size\n",
    "print('total files: ', fc)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 28,
   "id": "f2f01eec",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2022-01-04T13:50:33.114646Z",
     "start_time": "2022-01-04T13:50:31.677965Z"
    }
   },
   "outputs": [],
   "source": [
    "# Split the data into train and test sets\n",
    "import shutil\n",
    "\n",
    "split_ratio = 0.3\n",
    "flist = []\n",
    "for filepath, dirnames, filenames in os.walk(r'data'):\n",
    "    for filename in filenames:\n",
    "        fullpath = os.path.join(filepath, filename)\n",
    "        flist.append(fullpath)\n",
    "testset = random.sample(flist, int(len(flist)*split_ratio))\n",
    "for ts in testset:\n",
    "    flist.remove(ts)\n",
    "trainset = flist\n",
    "\n",
    "mkdir('train')\n",
    "mkdir('test')\n",
    "\n",
    "for it in trainset:\n",
    "    shutil.copy(it, 'train')\n",
    "\n",
    "for it in testset:\n",
    "    shutil.copy(it, 'test')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 41,
   "id": "acd1f8fd",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2022-01-04T14:17:32.815763Z",
     "start_time": "2022-01-04T14:16:47.185979Z"
    }
   },
   "outputs": [],
   "source": [
    "# Augment the training set by adding noise to each file\n",
    "amplification = 100 # 100 noisy copies per sample: 392 originals grow to 39200 + 392 files\n",
    "for ts in trainset:\n",
    "    part1, part2 = os.path.split(ts)\n",
    "    part3, part4 = os.path.splitext(part2)\n",
    "    fs, data = wav.read(ts)\n",
    "    for i in range(amplification):\n",
    "        newfile = os.path.join('train', part3 + '-noise' + str(i) + part4)\n",
    "        data_noise = add_noise(data)\n",
    "        wav.write(newfile, fs, data_noise.astype(np.int16)) # write the noisy copy (not the original) as 16-bit PCM"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 82,
   "id": "7eb860da",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2022-01-04T14:35:39.814828Z",
     "start_time": "2022-01-04T14:35:39.809827Z"
    }
   },
   "outputs": [],
   "source": [
    "# Build the training arrays; X holds the features, y the labels\n",
    "X = np.zeros((39592, 28, 28))\n",
    "y = np.zeros(39592)\n",
    "batch = 0\n",
    "for filepath, dirnames, filenames in os.walk(r'train'):\n",
    "    for filename in filenames:\n",
    "        fullpath = os.path.join(filepath, filename)\n",
    "        lab = int(filename[6:7]) # the digit after the dash, e.g. the 3 in 10015-3a.wav\n",
    "        fs, signal = wav.read(fullpath)\n",
    "        fbank = get_fbank(signal, fs)\n",
    "        X[batch] = norm_data(cv.resize(fbank, (28, 28))) # use OpenCV image resize to map the (199, 26) fbank matrix to (28, 28)\n",
    "        y[batch] = lab\n",
    "        batch += 1\n",
    "\n",
    "X = np.expand_dims(X, axis=-1) # reshape to (39592, 28, 28, 1) for the LeNet-5 input\n",
    "y = to_categorical(y, 10) # one-hot encode the labels"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 84,
   "id": "3073284e",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2022-01-04T14:35:43.386040Z",
     "start_time": "2022-01-04T14:35:43.379039Z"
    }
   },
   "outputs": [],
   "source": [
    "# Build the LeNet-5 model\n",
    "def creat_model():\n",
    "    inputs = Input((28, 28, 1))\n",
    "    conv2d_1 = Conv2D(32, (5, 5), padding='same', activation='relu', name='first_layer')(inputs)\n",
    "    maxpool_2 = MaxPooling2D(pool_size=(2, 2), strides=2, padding='same')(conv2d_1)\n",
    "    conv2d_3 = Conv2D(64, (5, 5), padding='same', activation='relu')(maxpool_2)\n",
    "    maxpool_4 = MaxPooling2D(pool_size=(2, 2), strides=2, padding='same')(conv2d_3)\n",
    "    flatten = Flatten()(maxpool_4)\n",
    "    dense_5 = Dense(120, activation='relu')(flatten)\n",
    "    dense_5 = Dropout(0.5)(dense_5)\n",
    "    outputs = Dense(10, activation='softmax')(dense_5)\n",
    "\n",
    "    model = Model(inputs=[inputs], outputs=[outputs])\n",
    "    return model"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 91,
   "id": "3fffa3b0",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2022-01-04T14:50:31.622893Z",
     "start_time": "2022-01-04T14:50:31.581885Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Model: \"model_2\"\n",
      "_________________________________________________________________\n",
      " Layer (type)                Output Shape              Param #   \n",
      "=================================================================\n",
      " input_3 (InputLayer)        [(None, 28, 28, 1)]       0         \n",
      "                                                                 \n",
      " first_layer (Conv2D)        (None, 28, 28, 32)        832       \n",
      "                                                                 \n",
      " max_pooling2d_4 (MaxPooling  (None, 14, 14, 32)       0         \n",
      " 2D)                                                             \n",
      "                                                                 \n",
      " conv2d_2 (Conv2D)           (None, 14, 14, 64)        51264     \n",
      "                                                                 \n",
      " max_pooling2d_5 (MaxPooling  (None, 7, 7, 64)         0         \n",
      " 2D)                                                             \n",
      "                                                                 \n",
      " flatten_2 (Flatten)         (None, 3136)              0         \n",
      "                                                                 \n",
      " dense_4 (Dense)             (None, 120)               376440    \n",
      "                                                                 \n",
      " dropout_2 (Dropout)         (None, 120)               0         \n",
      "                                                                 \n",
      " dense_5 (Dense)             (None, 10)                1210      \n",
      "                                                                 \n",
      "=================================================================\n",
      "Total params: 429,746\n",
      "Trainable params: 429,746\n",
      "Non-trainable params: 0\n",
      "_________________________________________________________________\n"
     ]
    }
   ],
   "source": [
    "# Instantiate the model and show its architecture\n",
    "model = creat_model()\n",
    "model.summary()"
   ]
  },
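  {
   "cell_type": "markdown",
   "id": "e9a4d77b",
   "metadata": {},
   "source": [
    "The parameter counts in the summary above can be checked by hand: each Conv2D layer has kernel_h * kernel_w * in_channels * filters weights plus one bias per filter, and each Dense layer has in_units * out_units weights plus out_units biases:\n",
    "```python\n",
    "conv1 = 5 * 5 * 1 * 32 + 32      # first_layer: 5x5 kernels on 1 channel, 32 filters\n",
    "conv2 = 5 * 5 * 32 * 64 + 64     # second conv: 5x5 kernels on 32 channels, 64 filters\n",
    "dense1 = 7 * 7 * 64 * 120 + 120  # flattened 7x7x64 feature map into 120 units\n",
    "dense2 = 120 * 10 + 10           # 120 units into 10 classes\n",
    "print(conv1, conv2, dense1, dense2, conv1 + conv2 + dense1 + dense2)\n",
    "# 832 51264 376440 1210 429746 - matching model.summary()\n",
    "```"
   ]
  },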
  {
   "cell_type": "code",
   "execution_count": 92,
   "id": "e7b208cd",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2022-01-04T14:52:38.021365Z",
     "start_time": "2022-01-04T14:50:35.686194Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Epoch 1/5\n",
      "990/990 [==============================] - 25s 25ms/step - loss: 1.7071 - val_loss: 0.8605\n",
      "Epoch 2/5\n",
      "990/990 [==============================] - 24s 25ms/step - loss: 0.6814 - val_loss: 0.6832\n",
      "Epoch 3/5\n",
      "990/990 [==============================] - 24s 25ms/step - loss: 0.3506 - val_loss: 0.6208\n",
      "Epoch 4/5\n",
      "990/990 [==============================] - 24s 25ms/step - loss: 0.1997 - val_loss: 0.5734\n",
      "Epoch 5/5\n",
      "990/990 [==============================] - 24s 24ms/step - loss: 0.1284 - val_loss: 0.5908\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "<keras.callbacks.History at 0x2013d08d1f0>"
      ]
     },
     "execution_count": 92,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# Compile the model\n",
    "model.compile(loss='categorical_crossentropy', optimizer='sgd')\n",
    "# Train the model\n",
    "model.fit(X, y, epochs=5, batch_size=32, validation_split=0.2)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 93,
   "id": "bf7c67a5",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2022-01-04T14:53:03.078521Z",
     "start_time": "2022-01-04T14:53:03.053514Z"
    }
   },
   "outputs": [],
   "source": [
    "model.save('lenet-5-5.h5')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "bb35a2ba",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2022-01-04T14:53:16.848276Z",
     "start_time": "2022-01-04T14:53:16.797264Z"
    }
   },
   "outputs": [],
   "source": [
    "# Build the test arrays the same way as the training set\n",
    "x_test = np.zeros((168, 28, 28))\n",
    "y_test = np.zeros(168)\n",
    "\n",
    "batch = 0\n",
    "for filepath, dirnames, filenames in os.walk(r'test'):\n",
    "    for filename in filenames:\n",
    "        fullpath = os.path.join(filepath, filename)\n",
    "        lab = int(filename[6:7])\n",
    "        fs, signal = wav.read(fullpath)\n",
    "        fbank = get_fbank(signal, fs)\n",
    "        x_test[batch] = norm_data(cv.resize(fbank, (28, 28)))\n",
    "        y_test[batch] = lab\n",
    "        batch += 1\n",
    "\n",
    "x_test = np.expand_dims(x_test, axis=-1)\n",
    "y_test = to_categorical(y_test, 10)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 96,
   "id": "89d13579",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2022-01-04T14:53:16.848276Z",
     "start_time": "2022-01-04T14:53:16.797264Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "2/2 [==============================] - 0s 6ms/step - loss: 0.7446\n",
      "Test loss: 0.7445540428161621\n"
     ]
    }
   ],
   "source": [
    "# Score the trained model on the held-out test set\n",
    "score_history = model.evaluate(x_test, y_test, batch_size=128)\n",
    "print('Test loss:', score_history)"
   ]
  },
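  {
   "cell_type": "markdown",
   "id": "f3c8b1a2",
   "metadata": {},
   "source": [
    "Since compile() was given no metrics, evaluate() reports only the loss. Test accuracy could be derived from the predicted class probabilities; a sketch with a toy stand-in for model.predict(x_test) and the one-hot y_test:\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "y_pred = np.array([[0.1, 0.7, 0.2],  # stand-in for model.predict(x_test)\n",
    "                   [0.8, 0.1, 0.1],\n",
    "                   [0.2, 0.3, 0.5]])\n",
    "y_true = np.array([[0, 1, 0],        # stand-in for the one-hot y_test\n",
    "                   [1, 0, 0],\n",
    "                   [0, 0, 1]])\n",
    "accuracy = np.mean(np.argmax(y_pred, axis=1) == np.argmax(y_true, axis=1))\n",
    "print(accuracy)  # 1.0 for this toy example\n",
    "```"
   ]
  },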
  {
   "cell_type": "markdown",
   "id": "37caf224",
   "metadata": {},
   "source": [
    "Trained for only 5 epochs, the model already performs reasonably well; with finer control over the added noise and more training, the results would likely improve."
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python [conda env:root] *",
   "language": "python",
   "name": "conda-root-py"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.8.8"
  },
  "varInspector": {
   "cols": {
    "lenName": 16,
    "lenType": 16,
    "lenVar": 40
   },
   "kernels_config": {
    "python": {
     "delete_cmd_postfix": "",
     "delete_cmd_prefix": "del ",
     "library": "var_list.py",
     "varRefreshCmd": "print(var_dic_list())"
    },
    "r": {
     "delete_cmd_postfix": ") ",
     "delete_cmd_prefix": "rm(",
     "library": "var_list.r",
     "varRefreshCmd": "cat(var_dic_list()) "
    }
   },
   "types_to_exclude": [
    "module",
    "function",
    "builtin_function_or_method",
    "instance",
    "_Feature"
   ],
   "window_display": false
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
