{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Chinese Speech Recognition with a DFCNN + Transformer Model\n",
    "\n",
    "Speech recognition, usually called automatic speech recognition (ASR), converts the lexical content of human speech into computer-readable input, typically intelligible text, though possibly binary codes or character sequences. What we commonly mean by speech recognition, however, is the narrower task of converting speech to text, more precisely called speech-to-text (STT), a name that pairs naturally with its counterpart, text-to-speech (TTS) synthesis.\n",
    "\n",
    "The overall pipeline of a speech recognition system is shown below.\n",
    "\n",
    "![](./img/flow.png)\n",
    "\n",
    "In this exercise we build a deep-learning-based Chinese speech recognition system, consisting of an acoustic model and a language model, that transcribes input audio into Chinese characters.\n",
    "\n",
    "Both models have performed strongly in recent deep-learning speech recognition work: the acoustic model is a DFCNN and the language model is a Transformer. Let's get started.\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Accessing ModelArts\n",
    "\n",
    "Open the following link: https://www.huaweicloud.com/product/modelarts.html to reach the ModelArts home page. Click \"Start Now\", enter your username and password, and log in to the ModelArts console.\n",
    "\n",
    "### Creating a ModelArts notebook\n",
    "\n",
    "Next we create a notebook development environment in ModelArts. ModelArts notebooks provide a web-based Python development environment where you can conveniently write and run code and inspect the results.\n",
    "\n",
    "Step 1: On the ModelArts console, click \"DevEnviron\" and then \"Create\".\n",
    "\n",
    "![create_nb_create_button](./img/create_nb_create_button.png)\n",
    "\n",
    "Step 2: Fill in the notebook parameters:\n",
    "\n",
    "| Parameter | Description |\n",
    "| --- | --- |\n",
    "| Billing mode | Pay-per-use |\n",
    "| Name | Any name you like |\n",
    "| Work environment | Python3 |\n",
    "| Resource pool | Public resource pool |\n",
    "| Type | GPU |\n",
    "| Flavor | [Limited-time free] trial GPU flavor |\n",
    "| Storage | EVS |\n",
    "| Disk size | 5GB |\n",
    "\n",
    "Step 3: After configuring the parameters, click \"Next\" to preview the notebook settings. Once you have confirmed them, click \"Create Now\".\n",
    "\n",
    "Step 4: When creation finishes, return to the DevEnviron page, wait for the notebook to become available, then open it to continue.\n",
    "![modelarts_notebook_index](./img/modelarts_notebook_index.png)\n",
    "\n",
    "### Creating a development environment in ModelArts\n",
    "\n",
    "Now we create the actual development environment used in the following steps.\n",
    "\n",
    "Step 1: Click the \"Open\" button shown below to enter the notebook you just created.\n",
    "![enter_dev_env](img/enter_dev_env.png)\n",
    "\n",
    "Step 2: Click \"New\" in the upper right corner and create a TensorFlow 1.13.1 environment.\n",
    "\n",
    "Step 3: Click the file name \"Untitled\" at the top left and enter a name related to this exercise, such as \"speech_recognition\".\n",
    "\n",
    "![notebook_untitled_filename](./img/notebook_untitled_filename.png)\n",
    "![notebook_name_the_ipynb](./img/notebook_name_the_ipynb.png)\n",
    "\n",
    "\n",
    "### Writing and running code in the notebook\n",
    "\n",
    "In the notebook, type a simple print statement and click the Run button above to see the result:\n",
    "![run_helloworld](./img/run_helloworld.png)\n",
    "\n",
    "\n",
    "The environment is ready; now we can happily write some code!\n",
    "\n",
    "\n",
    "### Preparing the source code and data\n",
    "\n",
    "The source code and data for this case study are stored in OBS; we download them locally via the ModelArts SDK."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Successfully download file modelarts-labs-bj4/notebook/DL_speech_recognition/speech_recognition.tar.gz from OBS to local ./speech_recognition.tar.gz\n"
     ]
    }
   ],
   "source": [
    "import os\n",
    "import subprocess\n",
    "from modelarts.session import Session\n",
    "session = Session()\n",
    "\n",
    "if session.region_name == 'cn-north-1':\n",
    "    bucket_path = 'modelarts-labs/notebook/DL_speech_recognition/speech_recognition.tar.gz'\n",
    "elif session.region_name == 'cn-north-4':\n",
    "    bucket_path = 'modelarts-labs-bj4/notebook/DL_speech_recognition/speech_recognition.tar.gz'\n",
    "else:\n",
    "    print(\"Please switch to the cn-north-1 (Beijing 1) or cn-north-4 (Beijing 4) region\")\n",
    "\n",
    "if not os.path.exists('speech_recognition'):\n",
    "    session.download_data(bucket_path=bucket_path, path='./speech_recognition.tar.gz')\n",
    "    # Unpack the archive, then remove it\n",
    "    subprocess.run('tar xf ./speech_recognition.tar.gz; rm ./speech_recognition.tar.gz',\n",
    "                   stdout=subprocess.PIPE, shell=True, check=True)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The previous step downloaded speech_recognition.tar.gz; after extraction, the folder structure is as follows:\n",
    "\n",
    "```\n",
    "speech_recognition\n",
    " │\n",
    " ├─── data\n",
    " │        ├── A2_0.wav\n",
    " │        ├── A2_0.wav.trn\n",
    " │        ├── A2_1.wav\n",
    " │        ├── A2_1.wav.trn\n",
    " │        ├── A2_2.wav\n",
    " │        ├── A2_2.wav.trn\n",
    " │        │      :\n",
    " │        │      :\n",
    " │        │      :\n",
    " │        ├── A36_249.wav\n",
    " │        └── A36_249.wav.trn\n",
    " │\n",
    " ├─── acoustic_model\n",
    " ├─── language_model\n",
    " └─── data.txt\n",
    "\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## The dataset: THCHS-30\n",
    "\n",
    "THCHS-30 is a classic open Chinese speech corpus containing over 10,000 recordings, more than 30 hours of Mandarin speech in total, drawn mainly from news articles and poems and spoken entirely by female voices. It was recorded in a quiet office environment through a single carbon microphone, at a 16 kHz sampling rate with 16-bit samples; the speakers were female university students who speak fluent Mandarin.\n",
    "\n",
    "The corpus is about 6.4 GB. The recordings are divided into four parts by text content: A (sentence IDs 1-250), B (251-500), C (501-750), and D (751-1000). Parts A, B, and C together contain 10,893 utterances from 30 speakers and are used for training; part D contains 2,496 utterances from 10 speakers and is used for testing. The split is as follows:\n",
    "\n",
    "Dataset | Audio length (h) | Sentences | Words\n",
    "- | - | - | -\n",
    "train | 25 | 10000 | 198252\n",
    "dev | 2:14 | 893 | 17743\n",
    "test | 6:15 | 2495 | 49085\n",
    "\n",
    "THCHS-30 can be downloaded from http://www.openslr.org/18/ . Other popular open Chinese speech corpora include Aishell, Free ST Chinese Mandarin Corpus, Primewords Chinese Corpus Set, and aidatatang_200zh, all available from http://www.openslr.org/resources.php .\n",
    "\n",
    "The audio files are in `.wav` format, and each has a companion `.wav.trn` text file containing its pinyin and Chinese-character transcripts.\n",
    "\n",
    "In this exercise, the part-A recordings, placed in the `data` folder, are used for training and testing.\n",
    "\n",
    "For convenience, the pinyin and character transcripts of all recordings have been consolidated into `data.txt`. Below we read `data.txt` and print its first ten entries."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "A11_0.wav\n",
      "lv4 shi4 yang2 chun1 yan1 jing3 da4 kuai4 wen2 zhang1 de di3 se4 si4 yue4 de lin2 luan2 geng4 shi4 lv4 de2 xian1 huo2 xiu4 mei4 shi1 yi4 ang4 ran2\n",
      "绿是阳春烟景大块文章的底色四月的林峦更是绿得鲜活秀媚诗意盎然\n",
      "\n",
      "A11_1.wav\n",
      "ta1 jin3 ping2 yao1 bu4 de li4 liang4 zai4 yong3 dao4 shang4 xia4 fan1 teng2 yong3 dong4 she2 xing2 zhuang4 ru2 hai3 tun2 yi4 zhi2 yi3 yi1 tou2 de you1 shi4 ling3 xian1\n",
      "他仅凭腰部的力量在泳道上下翻腾蛹动蛇行状如海豚一直以一头的优势领先\n",
      "\n",
      "A11_10.wav\n",
      "pao4 yan3 da3 hao3 le zha4 yao4 zen3 me zhuang1 yue4 zheng4 cai2 yao3 le yao3 ya2 shu1 di4 tuo1 qu4 yi1 fu2 guang1 bang3 zi chong1 jin4 le shui3 cuan4 dong4\n",
      "炮眼打好了炸药怎么装岳正才咬了咬牙倏地脱去衣服光膀子冲进了水窜洞\n",
      "\n",
      "A11_100.wav\n",
      "ke3 shei2 zhi1 wen2 wan2 hou4 ta1 yi1 zhao4 jing4 zi zhi1 jian4 zuo3 xia4 yan3 jian3 de xian4 you4 cu1 you4 hei1 yu3 you4 ce4 ming2 xian3 bu2 dui4 cheng1\n",
      "可谁知纹完后她一照镜子只见左下眼睑的线又粗又黑与右侧明显不对称\n",
      "\n",
      "A11_102.wav\n",
      "yi1 jin4 men2 wo3 bei4 jing1 dai1 le zhe4 hu4 ming2 jiao4 pang2 ji2 de lao3 nong2 shi4 kang4 mei3 yuan2 chao2 fu4 shang1 hui2 xiang1 de lao3 bing1 qi1 zi3 chang2 nian2 you3 bing4 jia1 tu2 si4 bi4 yi1 pin2 ru2 xi3\n",
      "一进门我被惊呆了这户名叫庞吉的老农是抗美援朝负伤回乡的老兵妻子长年有病家徒四壁一贫如洗\n",
      "\n",
      "A11_103.wav\n",
      "zou3 chu1 cun1 zi lao3 yuan3 lao3 yuan3 wo3 hai2 hui2 tou2 zhang1 wang4 na4 ge4 an1 ning2 tian2 jing4 de xiao3 yuan4 na4 ge4 shi3 wo3 zhong1 shen1 nan2 wang4 de xiao3 yuan4\n",
      "走出村子老远老远我还回头张望那个安宁恬静的小院那个使我终身难忘的小院\n",
      "\n",
      "A11_104.wav\n",
      "er4 yue4 si4 ri4 zhu4 jin4 xin1 xi1 men2 wai4 luo2 jia1 nian3 wang2 jia1 gang1 zhu1 zi4 qing1 wen2 xun4 te4 di4 cong2 dong1 men2 wai4 gan3 lai2 qing4 he4\n",
      "二月四日住进新西门外罗家碾王家冈朱自清闻讯特地从东门外赶来庆贺\n",
      "\n",
      "A11_105.wav\n",
      "dan1 wei4 bu2 shi4 wo3 lao3 die1 kai1 de ping2 shen2 me yao4 yi1 ci4 er4 ci4 zhao4 gu4 wo3 wo3 bu4 neng2 ba3 zi4 ji3 de bao1 fu2 wang3 xue2 xiao4 shuai3\n",
      "单位不是我老爹开的凭什么要一次二次照顾我我不能把自己的包袱往学校甩\n",
      "\n",
      "A11_106.wav\n",
      "dou1 yong4 cao3 mao4 huo4 ge1 bo zhou3 hu4 zhe wan3 lie4 lie4 qie ju1 chuan1 guo4 lan4 ni2 tang2 ban1 de yuan4 ba4 pao3 hui2 zi4 ji3 de su4 she4 qu4 le\n",
      "都用草帽或胳膊肘护着碗趔趔趄趄穿过烂泥塘般的院坝跑回自己的宿舍去了\n",
      "\n",
      "A11_107.wav\n",
      "xiang1 gang3 yan3 yi4 quan1 huan1 ying2 mao2 a1 min3 jia1 meng2 wu2 xian4 tai2 yu3 hua2 xing1 yi1 xie1 zhong4 da4 de yan3 chang4 huo2 dong4 dou1 yao1 qing3 ta1 chu1 chang3 you3 ji3 ci4 hai2 te4 yi4 an1 pai2 ya1 zhou4 yan3 chu1\n",
      "香港演艺圈欢迎毛阿敏加盟无线台与华星一些重大的演唱活动都邀请她出场有几次还特意安排压轴演出\n",
      "\n",
       "Total number of utterances: 13388 \n",
      "\n"
     ]
    }
   ],
   "source": [
    "with open('./speech_recognition/data.txt', 'r', encoding='UTF-8') as f:    # open the transcript file\n",
    "    f_ = f.readlines()\n",
    "    for i in range(10):    # first ten utterances\n",
    "        for j in range(3):    # each entry holds wav name, pinyin and characters, tab-separated\n",
    "            print(f_[i].split('\\t')[j])\n",
    "    print('Total number of utterances:', len(f_), '\\n')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "First, load the required Python libraries."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [],
   "source": [
    "import os\n",
    "import numpy as np\n",
    "import scipy.io.wavfile as wav\n",
    "import matplotlib.pyplot as plt\n",
    "import tensorflow as tf"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Acoustic model\n",
    "\n",
    "For the acoustic model we use a **Deep Fully Convolutional Neural Network (DFCNN)**.\n",
    "\n",
    "CNNs were applied to speech recognition as early as 2012, but without a major breakthrough. The main reason is that they took fixed-length frame concatenations as input and therefore could not see enough acoustic context; another shortcoming is that the CNN was treated merely as a feature extractor, so few convolutional layers were used and its modeling capacity was limited.\n",
    "\n",
    "DFCNN turns an entire utterance into an image and takes that as input: each frame is Fourier-transformed, time and frequency become the two image axes, and a deep stack of convolution and pooling layers models the whole utterance, with output units corresponding directly to the final recognition targets (such as syllables or characters). In other words, DFCNN treats the spectrogram as an image with characteristic patterns. Its structure is shown below.\n",
    "\n",
    "![](./img/DFCNN.png)\n",
    "\n",
    "The advantages of DFCNN can be described from three angles: the input, the model structure, and the output.\n",
    "\n",
    "First, the input. Traditional speech features apply hand-designed filter banks after the Fourier transform, losing information in the frequency domain, especially at high frequencies; to keep computation manageable they also use a fairly large frame shift, losing information in the time domain, which hurts most when the speaker talks fast. DFCNN instead takes the spectrogram directly as input, avoiding information loss in both domains, and so has a natural advantage over recognition frameworks built on traditional features.\n",
    "\n",
    "Second, the model structure. DFCNN borrows the best-performing configurations from image recognition: each convolutional layer uses small 3x3 kernels, with a pooling layer after several convolutional layers. This greatly strengthens the CNN's expressive power, and by stacking many such convolution-pooling blocks, DFCNN can see very long history and future context, so it captures the long-range dependencies of speech well and is more robust than RNN or LSTM architectures.\n",
    "\n",
    "Finally, the output. DFCNN is flexible and combines easily with other modeling schemes. Here we pair it with Connectionist Temporal Classification (CTC) to train the acoustic model end to end, and its pooling layers help keep that end-to-end training stable. Unlike traditional acoustic model training, training with a CTC loss is fully end to end: no prior frame-level alignment of the data is needed, only an input sequence and an output sequence, so there is no per-frame labeling, and CTC directly outputs sequence prediction probabilities without external post-processing.\n",
    "\n",
    "#### Building the DFCNN acoustic model"
   ]
  },
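  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Before building the model, here is a small, self-contained sketch (plain Python, no Keras needed) of the CTC collapsing rule described above: a frame-level output path is reduced to a label sequence by first merging consecutive repeats and then dropping the blank symbol. This is only an illustration of the rule, not the decoder used later; the blank `'_'` mirrors the one appended to the acoustic vocabulary below."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def ctc_collapse(path, blank='_'):\n",
    "    # Merge consecutive duplicates, then drop blanks\n",
    "    out = []\n",
    "    prev = None\n",
    "    for symbol in path:\n",
    "        if symbol != prev and symbol != blank:\n",
    "            out.append(symbol)\n",
    "        prev = symbol\n",
    "    return out\n",
    "\n",
    "# 'ni3 ni3 _ hao3' collapses to 'ni3 hao3', while the blank lets a\n",
    "# genuinely repeated label such as 'hao3 _ hao3' survive collapsing\n",
    "print(ctc_collapse(['ni3', 'ni3', '_', 'hao3']))\n",
    "print(ctc_collapse(['hao3', '_', 'hao3']))"
   ]
  },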
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Using TensorFlow backend.\n",
      "WARNING:tensorflow:From /home/ma-user/anaconda3/envs/TensorFlow-1.13.1/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.\n",
      "Instructions for updating:\n",
      "Colocations handled automatically by placer.\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
       "Acoustic model structure:\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "WARNING:tensorflow:From /home/ma-user/anaconda3/envs/TensorFlow-1.13.1/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py:3445: calling dropout (from tensorflow.python.ops.nn_ops) with keep_prob is deprecated and will be removed in a future version.\n",
      "Instructions for updating:\n",
      "Please use `rate` instead of `keep_prob`. Rate should be set to `rate = 1 - keep_prob`.\n",
      "WARNING:tensorflow:From /home/ma-user/anaconda3/envs/TensorFlow-1.13.1/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py:4249: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.\n",
      "Instructions for updating:\n",
      "Use tf.cast instead.\n",
      "WARNING:tensorflow:From /home/ma-user/anaconda3/envs/TensorFlow-1.13.1/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py:4229: to_int64 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.\n",
      "Instructions for updating:\n",
      "Use tf.cast instead.\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "__________________________________________________________________________________________________\n",
      "Layer (type)                    Output Shape         Param #     Connected to                     \n",
      "==================================================================================================\n",
      "the_inputs (InputLayer)         (None, None, 200, 1) 0                                            \n",
      "__________________________________________________________________________________________________\n",
      "conv2d_1 (Conv2D)               (None, None, 200, 32 320         the_inputs[0][0]                 \n",
      "__________________________________________________________________________________________________\n",
      "batch_normalization_1 (BatchNor (None, None, 200, 32 128         conv2d_1[0][0]                   \n",
      "__________________________________________________________________________________________________\n",
      "conv2d_2 (Conv2D)               (None, None, 200, 32 9248        batch_normalization_1[0][0]      \n",
      "__________________________________________________________________________________________________\n",
      "batch_normalization_2 (BatchNor (None, None, 200, 32 128         conv2d_2[0][0]                   \n",
      "__________________________________________________________________________________________________\n",
      "max_pooling2d_1 (MaxPooling2D)  (None, None, 100, 32 0           batch_normalization_2[0][0]      \n",
      "__________________________________________________________________________________________________\n",
      "conv2d_3 (Conv2D)               (None, None, 100, 64 18496       max_pooling2d_1[0][0]            \n",
      "__________________________________________________________________________________________________\n",
      "batch_normalization_3 (BatchNor (None, None, 100, 64 256         conv2d_3[0][0]                   \n",
      "__________________________________________________________________________________________________\n",
      "conv2d_4 (Conv2D)               (None, None, 100, 64 36928       batch_normalization_3[0][0]      \n",
      "__________________________________________________________________________________________________\n",
      "batch_normalization_4 (BatchNor (None, None, 100, 64 256         conv2d_4[0][0]                   \n",
      "__________________________________________________________________________________________________\n",
      "max_pooling2d_2 (MaxPooling2D)  (None, None, 50, 64) 0           batch_normalization_4[0][0]      \n",
      "__________________________________________________________________________________________________\n",
      "conv2d_5 (Conv2D)               (None, None, 50, 128 73856       max_pooling2d_2[0][0]            \n",
      "__________________________________________________________________________________________________\n",
      "batch_normalization_5 (BatchNor (None, None, 50, 128 512         conv2d_5[0][0]                   \n",
      "__________________________________________________________________________________________________\n",
      "conv2d_6 (Conv2D)               (None, None, 50, 128 147584      batch_normalization_5[0][0]      \n",
      "__________________________________________________________________________________________________\n",
      "batch_normalization_6 (BatchNor (None, None, 50, 128 512         conv2d_6[0][0]                   \n",
      "__________________________________________________________________________________________________\n",
      "max_pooling2d_3 (MaxPooling2D)  (None, None, 25, 128 0           batch_normalization_6[0][0]      \n",
      "__________________________________________________________________________________________________\n",
      "conv2d_7 (Conv2D)               (None, None, 25, 128 147584      max_pooling2d_3[0][0]            \n",
      "__________________________________________________________________________________________________\n",
      "batch_normalization_7 (BatchNor (None, None, 25, 128 512         conv2d_7[0][0]                   \n",
      "__________________________________________________________________________________________________\n",
      "conv2d_8 (Conv2D)               (None, None, 25, 128 147584      batch_normalization_7[0][0]      \n",
      "__________________________________________________________________________________________________\n",
      "batch_normalization_8 (BatchNor (None, None, 25, 128 512         conv2d_8[0][0]                   \n",
      "__________________________________________________________________________________________________\n",
      "conv2d_9 (Conv2D)               (None, None, 25, 128 147584      batch_normalization_8[0][0]      \n",
      "__________________________________________________________________________________________________\n",
      "batch_normalization_9 (BatchNor (None, None, 25, 128 512         conv2d_9[0][0]                   \n",
      "__________________________________________________________________________________________________\n",
      "conv2d_10 (Conv2D)              (None, None, 25, 128 147584      batch_normalization_9[0][0]      \n",
      "__________________________________________________________________________________________________\n",
      "batch_normalization_10 (BatchNo (None, None, 25, 128 512         conv2d_10[0][0]                  \n",
      "__________________________________________________________________________________________________\n",
      "reshape_1 (Reshape)             (None, None, 3200)   0           batch_normalization_10[0][0]     \n",
      "__________________________________________________________________________________________________\n",
      "dropout_1 (Dropout)             (None, None, 3200)   0           reshape_1[0][0]                  \n",
      "__________________________________________________________________________________________________\n",
      "dense_1 (Dense)                 (None, None, 256)    819456      dropout_1[0][0]                  \n",
      "__________________________________________________________________________________________________\n",
      "dropout_2 (Dropout)             (None, None, 256)    0           dense_1[0][0]                    \n",
      "__________________________________________________________________________________________________\n",
      "the_labels (InputLayer)         (None, None)         0                                            \n",
      "__________________________________________________________________________________________________\n",
      "dense_2 (Dense)                 (None, None, 50)     12850       dropout_2[0][0]                  \n",
      "__________________________________________________________________________________________________\n",
      "input_length (InputLayer)       (None, 1)            0                                            \n",
      "__________________________________________________________________________________________________\n",
      "label_length (InputLayer)       (None, 1)            0                                            \n",
      "__________________________________________________________________________________________________\n",
      "ctc (Lambda)                    (None, 1)            0           the_labels[0][0]                 \n",
      "                                                                 dense_2[0][0]                    \n",
      "                                                                 input_length[0][0]               \n",
      "                                                                 label_length[0][0]               \n",
      "==================================================================================================\n",
      "Total params: 1,712,914\n",
      "Trainable params: 1,710,994\n",
      "Non-trainable params: 1,920\n",
      "__________________________________________________________________________________________________\n"
     ]
    }
   ],
   "source": [
    "import keras\n",
    "from keras.layers import Input, Conv2D, BatchNormalization, MaxPooling2D\n",
    "from keras.layers import Reshape, Dense, Dropout, Lambda\n",
    "from keras.optimizers import Adam\n",
    "from keras import backend as K\n",
    "from keras.models import Model\n",
    "from tensorflow.contrib.training import HParams\n",
    "\n",
    "# Convolutional layer\n",
    "def conv2d(size):\n",
    "    return Conv2D(size, (3,3), use_bias=True, activation='relu',\n",
    "        padding='same', kernel_initializer='he_normal')\n",
    "\n",
    "# Batch-normalization layer\n",
    "def norm(x):\n",
    "    return BatchNormalization(axis=-1)(x)\n",
    "\n",
    "# Max-pooling layer\n",
    "def maxpool(x):\n",
    "    return MaxPooling2D(pool_size=(2,2), strides=None, padding=\"valid\")(x)\n",
    "\n",
    "# Dense (fully connected) layer\n",
    "def dense(units, activation=\"relu\"):\n",
    "    return Dense(units, activation=activation, use_bias=True,\n",
    "        kernel_initializer='he_normal')\n",
    "\n",
    "# Block of two convolutional layers followed by an optional max-pooling layer\n",
    "def cnn_cell(size, x, pool=True):\n",
    "    x = norm(conv2d(size)(x))\n",
    "    x = norm(conv2d(size)(x))\n",
    "    if pool:\n",
    "        x = maxpool(x)\n",
    "    return x\n",
    "\n",
    "# CTC loss function\n",
    "def ctc_lambda(args):\n",
    "    labels, y_pred, input_length, label_length = args\n",
    "    return K.ctc_batch_cost(labels, y_pred, input_length, label_length)\n",
    "\n",
    "# Assemble the acoustic model\n",
    "class acoustic_model():\n",
    "    def __init__(self, args):\n",
    "        self.vocab_size = args.vocab_size\n",
    "        self.learning_rate = args.learning_rate\n",
    "        self.is_training = args.is_training\n",
    "        self._model_init()\n",
    "        if self.is_training:\n",
    "            self._ctc_init()\n",
    "            self.opt_init()\n",
    "\n",
    "    def _model_init(self):\n",
    "        self.inputs = Input(name='the_inputs', shape=(None, 200, 1))\n",
    "        self.h1 = cnn_cell(32, self.inputs)\n",
    "        self.h2 = cnn_cell(64, self.h1)\n",
    "        self.h3 = cnn_cell(128, self.h2)\n",
    "        self.h4 = cnn_cell(128, self.h3, pool=False)\n",
    "        self.h5 = cnn_cell(128, self.h4, pool=False)\n",
    "        # 200 / 8 * 128 = 3200\n",
    "        self.h6 = Reshape((-1, 3200))(self.h5)\n",
    "        self.h6 = Dropout(0.2)(self.h6)\n",
    "        self.h7 = dense(256)(self.h6)\n",
    "        self.h7 = Dropout(0.2)(self.h7)\n",
    "        self.outputs = dense(self.vocab_size, activation='softmax')(self.h7)\n",
    "        self.model = Model(inputs=self.inputs, outputs=self.outputs)\n",
    "\n",
    "    def _ctc_init(self):\n",
    "        self.labels = Input(name='the_labels', shape=[None], dtype='float32')\n",
    "        self.input_length = Input(name='input_length', shape=[1], dtype='int64')\n",
    "        self.label_length = Input(name='label_length', shape=[1], dtype='int64')\n",
    "        self.loss_out = Lambda(ctc_lambda, output_shape=(1,), name='ctc')\\\n",
    "            ([self.labels, self.outputs, self.input_length, self.label_length])\n",
    "        self.ctc_model = Model(inputs=[self.labels, self.inputs,\n",
    "            self.input_length, self.label_length], outputs=self.loss_out)\n",
    "\n",
    "    def opt_init(self):\n",
    "        opt = Adam(lr = self.learning_rate, beta_1 = 0.9, beta_2 = 0.999, decay = 0.01, epsilon = 10e-8)\n",
    "        self.ctc_model.compile(loss={'ctc': lambda y_true, output: output}, optimizer=opt)\n",
    "\n",
    "def acoustic_model_hparams():\n",
    "    params = HParams(\n",
    "        vocab_size = 50,\n",
    "        learning_rate = 0.0008,\n",
    "        is_training = True)\n",
    "    return params\n",
    "\n",
    "print(\"Acoustic model structure:\")\n",
    "acoustic_model_args = acoustic_model_hparams()    \n",
    "acoustic = acoustic_model(acoustic_model_args)\n",
    "acoustic.ctc_model.summary()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Data loading class"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [],
   "source": [
    "from scipy.fftpack import fft\n",
    "\n",
    "# Compute the time-frequency representation (spectrogram) of a signal\n",
    "def compute_fbank(file):\n",
    "    x = np.linspace(0, 400 - 1, 400, dtype=np.int64)\n",
    "    w = 0.54 - 0.46 * np.cos(2 * np.pi * x / (400 - 1))  # Hamming window, 400 samples (25 ms at 16 kHz)\n",
    "    fs, wavsignal = wav.read(file)\n",
    "    time_window = 25  # window length in ms\n",
    "    window_length = fs / 1000 * time_window  # window length in samples\n",
    "    wav_arr = np.array(wavsignal)\n",
    "    wav_length = len(wavsignal)\n",
    "    range0_end = int(len(wavsignal) / fs * 1000 - time_window) // 10  # number of frames (10 ms shift)\n",
    "    data_input = np.zeros((range0_end, 200), dtype=np.float64)\n",
    "    data_line = np.zeros((1, 400), dtype=np.float64)\n",
    "    for i in range(0, range0_end):\n",
    "        p_start = i * 160\n",
    "        p_end = p_start + 400\n",
    "        data_line = wav_arr[p_start:p_end]    \n",
    "        data_line = data_line * w \n",
    "        data_line = np.abs(fft(data_line))\n",
    "        data_input[i]=data_line[0:200] \n",
    "    data_input = np.log(data_input + 1)\n",
    "    return data_input\n",
    "\n",
    "\n",
    "class get_data():\n",
    "    def __init__(self, args):\n",
    "        self.data_path = args.data_path        \n",
    "        self.data_length = args.data_length\n",
    "        self.batch_size = args.batch_size\n",
    "        self.source_init()\n",
    "\n",
    "    def source_init(self):\n",
    "        self.wav_lst = []\n",
    "        self.pin_lst = []\n",
    "        self.han_lst = []\n",
    "        with open('speech_recognition/data.txt', 'r', encoding='utf8') as f:\n",
    "            data = f.readlines()\n",
    "        for line in data:\n",
    "            wav_file, pin, han = line.split('\\t')\n",
    "            self.wav_lst.append(wav_file)\n",
    "            self.pin_lst.append(pin.split(' '))\n",
    "            self.han_lst.append(han.strip('\\n'))\n",
    "        if self.data_length:\n",
    "            self.wav_lst = self.wav_lst[:self.data_length]\n",
    "            self.pin_lst = self.pin_lst[:self.data_length]\n",
    "            self.han_lst = self.han_lst[:self.data_length]\n",
    "        self.acoustic_vocab = self.acoustic_model_vocab(self.pin_lst)\n",
    "        self.pin_vocab = self.language_model_pin_vocab(self.pin_lst)\n",
    "        self.han_vocab = self.language_model_han_vocab(self.han_lst)\n",
    "\n",
    "    def get_acoustic_model_batch(self):\n",
    "        _list = [i for i in range(len(self.wav_lst))]\n",
    "        while 1:\n",
    "            for i in range(len(self.wav_lst) // self.batch_size):\n",
    "                wav_data_lst = []\n",
    "                label_data_lst = []\n",
    "                begin = i * self.batch_size\n",
    "                end = begin + self.batch_size\n",
    "                sub_list = _list[begin:end]\n",
    "                for index in sub_list:\n",
    "                    fbank = compute_fbank(self.data_path + self.wav_lst[index])\n",
    "                    pad_fbank = np.zeros((fbank.shape[0] // 8 * 8 + 8, fbank.shape[1]))\n",
    "                    pad_fbank[:fbank.shape[0], :] = fbank\n",
    "                    label = self.pin2id(self.pin_lst[index], self.acoustic_vocab)\n",
    "                    label_ctc_len = self.ctc_len(label)\n",
    "                    if pad_fbank.shape[0] // 8 >= label_ctc_len:\n",
    "                        wav_data_lst.append(pad_fbank)\n",
    "                        label_data_lst.append(label)\n",
    "                pad_wav_data, input_length = self.wav_padding(wav_data_lst)\n",
    "                pad_label_data, label_length = self.label_padding(label_data_lst)\n",
    "                inputs = {'the_inputs': pad_wav_data,\n",
    "                          'the_labels': pad_label_data,\n",
    "                          'input_length': input_length,\n",
    "                          'label_length': label_length,\n",
    "                          }\n",
    "                outputs = {'ctc': np.zeros(pad_wav_data.shape[0], )}\n",
    "                yield inputs, outputs\n",
    "\n",
    "    def get_language_model_batch(self):\n",
    "        batch_num = len(self.pin_lst) // self.batch_size\n",
    "        for k in range(batch_num):\n",
    "            begin = k * self.batch_size\n",
    "            end = begin + self.batch_size\n",
    "            input_batch = self.pin_lst[begin:end]\n",
    "            label_batch = self.han_lst[begin:end]\n",
    "            max_len = max([len(line) for line in input_batch])\n",
    "            input_batch = np.array(\n",
    "                [self.pin2id(line, self.pin_vocab) + [0] * (max_len - len(line)) for line in input_batch])\n",
    "            label_batch = np.array(\n",
    "                [self.han2id(line, self.han_vocab) + [0] * (max_len - len(line)) for line in label_batch])\n",
    "            yield input_batch, label_batch\n",
    "\n",
    "    def pin2id(self, line, vocab):\n",
    "        return [vocab.index(pin) for pin in line]\n",
    "\n",
    "    def han2id(self, line, vocab):\n",
    "        return [vocab.index(han) for han in line]\n",
    "\n",
    "    def wav_padding(self, wav_data_lst):\n",
    "        wav_lens = [len(data) for data in wav_data_lst]\n",
    "        wav_max_len = max(wav_lens)\n",
    "        wav_lens = np.array([leng // 8 for leng in wav_lens])\n",
    "        new_wav_data_lst = np.zeros((len(wav_data_lst), wav_max_len, 200, 1))\n",
    "        for i in range(len(wav_data_lst)):\n",
    "            new_wav_data_lst[i, :wav_data_lst[i].shape[0], :, 0] = wav_data_lst[i]\n",
    "        return new_wav_data_lst, wav_lens\n",
    "\n",
    "    def label_padding(self, label_data_lst):\n",
    "        label_lens = np.array([len(label) for label in label_data_lst])\n",
    "        max_label_len = max(label_lens)\n",
    "        new_label_data_lst = np.zeros((len(label_data_lst), max_label_len))\n",
    "        for i in range(len(label_data_lst)):\n",
    "            new_label_data_lst[i][:len(label_data_lst[i])] = label_data_lst[i]\n",
    "        return new_label_data_lst, label_lens\n",
    "\n",
    "    def acoustic_model_vocab(self, data):\n",
    "        vocab = []\n",
    "        for line in data:\n",
    "            for pin in line:\n",
    "                if pin not in vocab:\n",
    "                    vocab.append(pin)\n",
    "        vocab.append('_')\n",
    "        return vocab\n",
    "\n",
    "    def language_model_pin_vocab(self, data):\n",
    "        vocab = ['<PAD>']\n",
    "        for line in data:\n",
    "            for pin in line:\n",
    "                if pin not in vocab:\n",
    "                    vocab.append(pin)\n",
    "        return vocab\n",
    "\n",
    "    def language_model_han_vocab(self, data):\n",
    "        vocab = ['<PAD>']\n",
    "        for line in data:\n",
    "            line = ''.join(line.split(' '))\n",
    "            for han in line:\n",
    "                if han not in vocab:\n",
    "                    vocab.append(han)\n",
    "        return vocab\n",
    "\n",
    "    def ctc_len(self, label):\n",
    "        add_len = 0\n",
    "        label_len = len(label)\n",
    "        for i in range(label_len - 1):\n",
    "            if label[i] == label[i + 1]:\n",
    "                add_len += 1\n",
    "        return label_len + add_len"
   ]
  },
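  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick sanity check of the framing arithmetic inside `compute_fbank`, the sketch below applies the same 25 ms Hamming window and 10 ms hop to one second of synthetic 16 kHz audio instead of a wav file, so no data is required. One second of audio should yield (1000 - 25) // 10 = 97 frames of 200 spectral bins each. It reuses the `np` and `fft` imports from the earlier cells."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "fs = 16000                      # sampling rate assumed throughout the pipeline\n",
    "signal = np.random.randn(fs)    # one second of synthetic audio\n",
    "n_frames = int(len(signal) / fs * 1000 - 25) // 10           # 25 ms window, 10 ms shift\n",
    "w = 0.54 - 0.46 * np.cos(2 * np.pi * np.arange(400) / 399)   # 400-sample Hamming window\n",
    "frames = np.zeros((n_frames, 200))\n",
    "for i in range(n_frames):\n",
    "    seg = signal[i * 160 : i * 160 + 400] * w   # 160-sample (10 ms) hop\n",
    "    frames[i] = np.abs(fft(seg))[:200]          # keep the first 200 frequency bins\n",
    "print(n_frames, frames.shape)"
   ]
  },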
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 声学模型训练\n",
    "\n",
    "准备训练参数及数据 \n",
    "\n",
    "为了演示效果，本示例将参数`batch_size`设置为`1`，参数`data_length`设置为`20`。\n",
    "\n",
    "若要进行完整训练，则应注释掉`data_args.data_length = 20`一行，并调高`batch_size`。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "声学模型参数：\n",
      "[('is_training', True), ('learning_rate', 0.0008), ('vocab_size', 353)]\n"
     ]
    }
   ],
   "source": [
    "def data_hparams():\n",
    "    params = HParams(\n",
    "        data_path = './speech_recognition/data/', # 数据路径\n",
    "        batch_size = 1,      # 批大小\n",
    "        data_length = None,   # 使用的数据条数，None 表示使用全部数据\n",
    "    )\n",
    "    return params\n",
    "\n",
    "data_args = data_hparams()\n",
    "data_args.data_length = 20 # 重新训练需要注释该行\n",
    "train_data = get_data(data_args)\n",
    "\n",
    "acoustic_model_args = acoustic_model_hparams()\n",
    "acoustic_model_args.vocab_size = len(train_data.acoustic_vocab)\n",
    "acoustic = acoustic_model(acoustic_model_args)\n",
    "\n",
    "print('声学模型参数：')\n",
    "print(acoustic_model_args)\n",
    "\n",
    "if os.path.exists('./speech_recognition/acoustic_model/model.h5'):\n",
    "    print('加载声学模型')\n",
    "    acoustic.ctc_model.load_weights('./speech_recognition/acoustic_model/model.h5')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "训练声学模型"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "训练轮数epochs： 20\n",
      "批数量batch_num： 20\n",
      "开始训练！\n",
      "第 1 个epoch\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "WARNING:tensorflow:From /home/ma-user/anaconda3/envs/TensorFlow-1.13.1/lib/python3.6/site-packages/tensorflow/python/ops/math_grad.py:102: div (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.\n",
      "Instructions for updating:\n",
      "Deprecated in favor of operator or tf.math.divide.\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Epoch 1/1\n",
      "20/20 [==============================] - 15s 741ms/step - loss: 292.0404\n",
      "第 2 个epoch\n",
      "Epoch 1/1\n",
      "20/20 [==============================] - 1s 26ms/step - loss: 195.0730\n",
      "第 3 个epoch\n",
      "Epoch 1/1\n",
      "20/20 [==============================] - 1s 27ms/step - loss: 179.2059\n",
      "第 4 个epoch\n",
      "Epoch 1/1\n",
      "20/20 [==============================] - 1s 28ms/step - loss: 156.6438\n",
      "第 5 个epoch\n",
      "Epoch 1/1\n",
      "20/20 [==============================] - 1s 26ms/step - loss: 125.4767\n",
      "第 6 个epoch\n",
      "Epoch 1/1\n",
      "20/20 [==============================] - 1s 28ms/step - loss: 94.7194\n",
      "第 7 个epoch\n",
      "Epoch 1/1\n",
      "20/20 [==============================] - 1s 27ms/step - loss: 65.2171\n",
      "第 8 个epoch\n",
      "Epoch 1/1\n",
      "20/20 [==============================] - 1s 27ms/step - loss: 44.6281\n",
      "第 9 个epoch\n",
      "Epoch 1/1\n",
      "20/20 [==============================] - 1s 26ms/step - loss: 25.8198\n",
      "第 10 个epoch\n",
      "Epoch 1/1\n",
      "20/20 [==============================] - 1s 27ms/step - loss: 16.2043\n",
      "第 11 个epoch\n",
      "Epoch 1/1\n",
      "20/20 [==============================] - 1s 27ms/step - loss: 8.9479\n",
      "第 12 个epoch\n",
      "Epoch 1/1\n",
      "20/20 [==============================] - 1s 26ms/step - loss: 2.3711\n",
      "第 13 个epoch\n",
      "Epoch 1/1\n",
      "20/20 [==============================] - 1s 26ms/step - loss: 1.5653\n",
      "第 14 个epoch\n",
      "Epoch 1/1\n",
      "20/20 [==============================] - 0s 25ms/step - loss: 1.0379\n",
      "第 15 个epoch\n",
      "Epoch 1/1\n",
      "20/20 [==============================] - 1s 27ms/step - loss: 0.6209\n",
      "第 16 个epoch\n",
      "Epoch 1/1\n",
      "20/20 [==============================] - 1s 29ms/step - loss: 0.5799\n",
      "第 17 个epoch\n",
      "Epoch 1/1\n",
      "20/20 [==============================] - 1s 26ms/step - loss: 0.5872\n",
      "第 18 个epoch\n",
      "Epoch 1/1\n",
      "20/20 [==============================] - 1s 26ms/step - loss: 0.6505\n",
      "第 19 个epoch\n",
      "Epoch 1/1\n",
      "20/20 [==============================] - 1s 25ms/step - loss: 0.5772\n",
      "第 20 个epoch\n",
      "Epoch 1/1\n",
      "20/20 [==============================] - 1s 25ms/step - loss: 0.4246\n",
      "\n",
      "训练完成，保存模型\n"
     ]
    }
   ],
   "source": [
    "epochs = 20\n",
    "batch_num = len(train_data.wav_lst) // train_data.batch_size\n",
    "print(\"训练轮数epochs：\",epochs)\n",
    "print(\"批数量batch_num：\",batch_num)\n",
    "\n",
    "print(\"开始训练！\")\n",
    "for k in range(epochs):\n",
    "    print('第', k+1, '个epoch')\n",
    "    batch = train_data.get_acoustic_model_batch()\n",
    "    acoustic.ctc_model.fit_generator(batch, steps_per_epoch=batch_num, epochs=1)\n",
    "\n",
    "print(\"\\n训练完成，保存模型\")\n",
    "acoustic.ctc_model.save_weights('./speech_recognition/acoustic_model/model.h5')\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 语言模型\n",
    "\n",
    "在本实践中，选择使用 Transformer 结构进行语言模型的建模。\n",
    "\n",
    "Transformer 是完全基于注意力机制（attention mechanism）的网络框架，出自论文[《Attention Is All You Need》](https://arxiv.org/abs/1706.03762)。序列中每个字符对其上下文字符的影响各不相同，对整个序列语义信息的贡献也不同；注意力机制通过加权融合序列中所有字符的语义向量，为每个字符产生新的向量表示，从而增强了原有的语义信息。其结构如下图所示。\n",
    "\n",
    "![](./img/self-attention.png)\n",
    "\n",
    "其中左半部分是编码器（encoder），右半部分是解码器（decoder）。在本实践中，仅需要搭建左侧的 encoder 结构即可。\n",
    "\n",
    "encoder 的详细结构为：由 N=6 个相同的层（layer）堆叠而成，每一层包含两个子层（sub-layer）。第一个子层是多头注意力层（multi-head attention layer），第二个子层是一个简单的全连接前馈层；每个子层都加入了残差连接（residual connection）和归一化（normalization）。\n",
    "\n",
    "下面来具体构建一个基于 Transformer 的语言模型。\n",
    "\n",
    "#### 定义归一化 normalize层"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [],
   "source": [
    "def normalize(inputs, \n",
    "              epsilon = 1e-8,\n",
    "              scope=\"ln\",\n",
    "              reuse=None):\n",
    "    with tf.variable_scope(scope, reuse=reuse):\n",
    "        inputs_shape = inputs.get_shape()\n",
    "        params_shape = inputs_shape[-1:]\n",
    "\n",
    "        mean, variance = tf.nn.moments(inputs, [-1], keep_dims=True)\n",
    "        beta= tf.Variable(tf.zeros(params_shape))\n",
    "        gamma = tf.Variable(tf.ones(params_shape))\n",
    "        normalized = (inputs - mean) / ( (variance + epsilon) ** (.5) )\n",
    "        outputs = gamma * normalized + beta\n",
    "\n",
    "    return outputs"
   ]
  },
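  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "上面的 `normalize` 实现的是层归一化（layer normalization）：沿最后一维求均值和方差并做标准化，再用可学习的 `gamma`、`beta` 缩放平移。下面用 NumPy 给出一个等价的最小演示（对应 `gamma=1`、`beta=0` 的初始状态，输入为假设数据）。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "def layer_norm_np(x, epsilon=1e-8):\n",
    "    # 对应 normalize 中的 tf.nn.moments(inputs, [-1], keep_dims=True)\n",
    "    mean = x.mean(axis=-1, keepdims=True)\n",
    "    variance = x.var(axis=-1, keepdims=True)\n",
    "    return (x - mean) / np.sqrt(variance + epsilon)\n",
    "\n",
    "x = np.array([[1.0, 2.0, 3.0], [4.0, 6.0, 8.0]])\n",
    "y = layer_norm_np(x)\n",
    "print(y.mean(axis=-1))  # 每行均值约为 0\n",
    "print(y.std(axis=-1))   # 每行标准差约为 1"
   ]
  },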
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### 定义嵌入层 embedding\n",
    "\n",
    "嵌入层将每个编号映射为一个向量。对于位置向量，先给每个位置编号，再让每个编号对应一个向量；将位置向量与词向量相加，就给每个词引入了一定的位置信息，以便 Attention 分辨出不同位置的词。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {},
   "outputs": [],
   "source": [
    "def embedding(inputs, \n",
    "              vocab_size, \n",
    "              num_units, \n",
    "              zero_pad=True, \n",
    "              scale=True,\n",
    "              scope=\"embedding\", \n",
    "              reuse=None):\n",
    "    with tf.variable_scope(scope, reuse=reuse):\n",
    "        lookup_table = tf.get_variable('lookup_table',\n",
    "                                       dtype=tf.float32,\n",
    "                                       shape=[vocab_size, num_units],\n",
    "                                       initializer=tf.contrib.layers.xavier_initializer())\n",
    "        if zero_pad:\n",
    "            lookup_table = tf.concat((tf.zeros(shape=[1, num_units]),\n",
    "                                      lookup_table[1:, :]), 0)\n",
    "        outputs = tf.nn.embedding_lookup(lookup_table, inputs)\n",
    "\n",
    "        if scale:\n",
    "            outputs = outputs * (num_units ** 0.5) \n",
    "\n",
    "    return outputs"
   ]
  },
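  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "`embedding` 中 `zero_pad=True` 会把编号 0（即 `<PAD>`）对应的向量强制置零，使填充位置不携带语义信息。下面用 NumPy 演示这种查表加零填充的效果（查找表为随机生成的假设数据）。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "vocab_size, num_units = 5, 4\n",
    "rng = np.random.default_rng(0)\n",
    "lookup_table = rng.normal(size=(vocab_size, num_units))\n",
    "lookup_table[0] = 0.0                # zero_pad：编号 0 的向量置零\n",
    "\n",
    "inputs = np.array([[2, 3, 0, 0]])    # 末尾两个位置是 <PAD>\n",
    "outputs = lookup_table[inputs]       # 对应 tf.nn.embedding_lookup\n",
    "print(outputs.shape)                 # (1, 4, 4)\n",
    "print(outputs[0, 2])                 # 全零向量"
   ]
  },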
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 多头注意力层（multi-head attention layer）\n",
    "\n",
    "要了解多头注意力层，首先要知道缩放点积注意力（Scaled Dot-Product Attention）。Attention 有三个输入（queries、keys、values）和一个输出；选择三个输入是考虑到模型的通用性。输出是所有 value 的加权求和，value 的权重由 query 与各个 key 的点积经缩放后通过 softmax 得到。\n",
    "\n",
    "Scaled Dot-Product Attention 的公式及结构如下图所示。\n",
    "\n",
    "![](./img/Scaled_Dot-Product_Attention.png)\n",
    "\n",
    "\n",
    "\n",
    "Multi-Head Attention 就是对输入 Q、K、V 分别进行 H 组线性变换，并行地执行 H 次 Scaled Dot-Product Attention，再把各头的输出拼接（concat）起来，即为多头注意力的输出。\n",
    "\n",
    "Multi-Head Attention 的公式及结构如下图所示。\n",
    "\n",
    "![](./img/Multi-Head_Attention.png)\n",
    "\n",
    "#### 定义 multi-head attention层"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {},
   "outputs": [],
   "source": [
    "def multihead_attention(emb,\n",
    "                        queries, \n",
    "                        keys, \n",
    "                        num_units=None, \n",
    "                        num_heads=8, \n",
    "                        dropout_rate=0,\n",
    "                        is_training=True,\n",
    "                        causality=False,\n",
    "                        scope=\"multihead_attention\", \n",
    "                        reuse=None):\n",
    "    with tf.variable_scope(scope, reuse=reuse):\n",
    "        if num_units is None:\n",
    "            num_units = queries.get_shape().as_list()[-1]\n",
    "        \n",
    "        Q = tf.layers.dense(queries, num_units, activation=tf.nn.relu) # (N, T_q, C)\n",
    "        K = tf.layers.dense(keys, num_units, activation=tf.nn.relu) # (N, T_k, C)\n",
    "        V = tf.layers.dense(keys, num_units, activation=tf.nn.relu) # (N, T_k, C)\n",
    "        \n",
    "        Q_ = tf.concat(tf.split(Q, num_heads, axis=2), axis=0) # (h*N, T_q, C/h) \n",
    "        K_ = tf.concat(tf.split(K, num_heads, axis=2), axis=0) # (h*N, T_k, C/h) \n",
    "        V_ = tf.concat(tf.split(V, num_heads, axis=2), axis=0) # (h*N, T_k, C/h) \n",
    "\n",
    "        outputs = tf.matmul(Q_, tf.transpose(K_, [0, 2, 1])) # (h*N, T_q, T_k)\n",
    "        \n",
    "        outputs = outputs / (K_.get_shape().as_list()[-1] ** 0.5)\n",
    "        \n",
    "        key_masks = tf.sign(tf.abs(tf.reduce_sum(emb, axis=-1))) # (N, T_k)\n",
    "        key_masks = tf.tile(key_masks, [num_heads, 1]) # (h*N, T_k)\n",
    "        key_masks = tf.tile(tf.expand_dims(key_masks, 1), [1, tf.shape(queries)[1], 1]) # (h*N, T_q, T_k)\n",
    "        \n",
    "        paddings = tf.ones_like(outputs)*(-2**32+1)\n",
    "        outputs = tf.where(tf.equal(key_masks, 0), paddings, outputs) # (h*N, T_q, T_k)\n",
    "  \n",
    "        if causality:\n",
    "            diag_vals = tf.ones_like(outputs[0, :, :]) # (T_q, T_k)\n",
    "            tril = tf.linalg.LinearOperatorLowerTriangular(diag_vals).to_dense() # (T_q, T_k)\n",
    "            masks = tf.tile(tf.expand_dims(tril, 0), [tf.shape(outputs)[0], 1, 1]) # (h*N, T_q, T_k)\n",
    "   \n",
    "            paddings = tf.ones_like(masks)*(-2**32+1)\n",
    "            outputs = tf.where(tf.equal(masks, 0), paddings, outputs) # (h*N, T_q, T_k)\n",
    "  \n",
    "        outputs = tf.nn.softmax(outputs) # (h*N, T_q, T_k)\n",
    "         \n",
    "        query_masks = tf.sign(tf.abs(tf.reduce_sum(emb, axis=-1))) # (N, T_q)\n",
    "        query_masks = tf.tile(query_masks, [num_heads, 1]) # (h*N, T_q)\n",
    "        query_masks = tf.tile(tf.expand_dims(query_masks, -1), [1, 1, tf.shape(keys)[1]]) # (h*N, T_q, T_k)\n",
    "        outputs *= query_masks # broadcasting. (N, T_q, C)\n",
    "          \n",
    "        outputs = tf.layers.dropout(outputs, rate=dropout_rate, training=tf.convert_to_tensor(is_training))\n",
    "               \n",
    "        outputs = tf.matmul(outputs, V_) # ( h*N, T_q, C/h)\n",
    "        \n",
    "        outputs = tf.concat(tf.split(outputs, num_heads, axis=0), axis=2 ) # (N, T_q, C)\n",
    "              \n",
    "        outputs += queries\n",
    "              \n",
    "        outputs = normalize(outputs) # (N, T_q, C)\n",
    " \n",
    "    return outputs"
   ]
  },
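  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "`multihead_attention` 的核心运算是缩放点积注意力。下面用 NumPy 写一个单头、无掩码的最小示例（输入为随机生成的假设数据），可以验证注意力权重经 softmax 后每行之和为 1。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "def scaled_dot_product_attention(Q, K, V):\n",
    "    # 权重 = softmax(Q·K^T / sqrt(d_k))，输出为 V 的加权求和\n",
    "    d_k = K.shape[-1]\n",
    "    scores = Q @ K.T / np.sqrt(d_k)\n",
    "    scores -= scores.max(axis=-1, keepdims=True)  # 数值稳定\n",
    "    weights = np.exp(scores)\n",
    "    weights /= weights.sum(axis=-1, keepdims=True)\n",
    "    return weights @ V, weights\n",
    "\n",
    "rng = np.random.default_rng(1)\n",
    "Q, K, V = rng.normal(size=(3, 8)), rng.normal(size=(4, 8)), rng.normal(size=(4, 8))\n",
    "out, w = scaled_dot_product_attention(Q, K, V)\n",
    "print(out.shape)          # (3, 8)\n",
    "print(w.sum(axis=-1))     # 每行权重之和为 1"
   ]
  },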
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### 定义 feedforward层\n",
    "\n",
    "两层逐位置的全连接层，这里用核大小为 1 的一维卷积实现以加速运算，并添加残差连接和归一化。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {},
   "outputs": [],
   "source": [
    "def feedforward(inputs, \n",
    "                num_units=[2048, 512],\n",
    "                scope=\"feedforward\", \n",
    "                reuse=None):\n",
    "    with tf.variable_scope(scope, reuse=reuse):\n",
    "        params = {\"inputs\": inputs, \"filters\": num_units[0], \"kernel_size\": 1,\n",
    "                  \"activation\": tf.nn.relu, \"use_bias\": True}\n",
    "        outputs = tf.layers.conv1d(**params)\n",
    "        \n",
    "        params = {\"inputs\": outputs, \"filters\": num_units[1], \"kernel_size\": 1,\n",
    "                  \"activation\": None, \"use_bias\": True}\n",
    "        outputs = tf.layers.conv1d(**params)\n",
    "        \n",
    "        outputs += inputs\n",
    "        \n",
    "        outputs = normalize(outputs)\n",
    "    \n",
    "    return outputs"
   ]
  },
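  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "`feedforward` 用核大小为 1 的一维卷积实现逐位置全连接：卷积核不跨时间步，每个位置共享同一组权重，因此等价于作用在最后一维上的全连接层。下面用 NumPy 验证这一等价关系（权重与输入均为随机生成的假设数据）。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "rng = np.random.default_rng(2)\n",
    "x = rng.normal(size=(2, 5, 8))       # (batch, time, channels)\n",
    "W = rng.normal(size=(8, 16))         # 核大小为 1 的卷积核即一个 (in, out) 矩阵\n",
    "b = rng.normal(size=(16,))\n",
    "\n",
    "conv1d_k1 = np.einsum('btc,cd->btd', x, W) + b   # 核大小为 1 的一维卷积\n",
    "dense = x @ W + b                                # 逐位置全连接\n",
    "print(np.allclose(conv1d_k1, dense))             # True"
   ]
  },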
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### 定义 label_smoothing层"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {},
   "outputs": [],
   "source": [
    "def label_smoothing(inputs, epsilon=0.1):\n",
    "    K = inputs.get_shape().as_list()[-1] # number of channels\n",
    "    return ((1-epsilon) * inputs) + (epsilon / K)"
   ]
  },
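  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "`label_smoothing` 把 one-hot 标签中的 1 调低、0 调高，缓解模型过度自信；平滑后每一行仍是和为 1 的概率分布。下面用 NumPy 演示（`epsilon=0.1`、4 个类别）。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "def label_smoothing_np(inputs, epsilon=0.1):\n",
    "    K = inputs.shape[-1]  # 类别数\n",
    "    return (1 - epsilon) * inputs + epsilon / K\n",
    "\n",
    "one_hot = np.array([[0.0, 1.0, 0.0, 0.0]])\n",
    "smoothed = label_smoothing_np(one_hot)\n",
    "print(smoothed)        # [[0.025 0.925 0.025 0.025]]\n",
    "print(smoothed.sum())  # 1.0"
   ]
  },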
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "下面将上述各层组合，建立完整的语言模型。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "语言模型建立完成！\n"
     ]
    }
   ],
   "source": [
    "#组合语言模型\n",
    "class language_model():\n",
    "    def __init__(self, arg):\n",
    "        self.graph = tf.Graph()\n",
    "        with self.graph.as_default():\n",
    "            self.is_training = arg.is_training\n",
    "            self.hidden_units = arg.hidden_units\n",
    "            self.input_vocab_size = arg.input_vocab_size\n",
    "            self.label_vocab_size = arg.label_vocab_size\n",
    "            self.num_heads = arg.num_heads\n",
    "            self.num_blocks = arg.num_blocks\n",
    "            self.max_length = arg.max_length\n",
    "            self.learning_rate = arg.learning_rate\n",
    "            self.dropout_rate = arg.dropout_rate\n",
    "                \n",
    "            self.x = tf.placeholder(tf.int32, shape=(None, None))\n",
    "            self.y = tf.placeholder(tf.int32, shape=(None, None))\n",
    "            self.emb = embedding(self.x, vocab_size=self.input_vocab_size, num_units=self.hidden_units, scale=True, scope=\"enc_embed\")\n",
    "            self.enc = self.emb + embedding(tf.tile(tf.expand_dims(tf.range(tf.shape(self.x)[1]), 0), [tf.shape(self.x)[0], 1]),\n",
    "                                        vocab_size=self.max_length,num_units=self.hidden_units, zero_pad=False, scale=False,scope=\"enc_pe\")\n",
    "            self.enc = tf.layers.dropout(self.enc, \n",
    "                                        rate=self.dropout_rate, \n",
    "                                        training=tf.convert_to_tensor(self.is_training))\n",
    "                        \n",
    "            for i in range(self.num_blocks):\n",
    "                with tf.variable_scope(\"num_blocks_{}\".format(i)):\n",
    "                    self.enc = multihead_attention(emb = self.emb,\n",
    "                                                    queries=self.enc, \n",
    "                                                    keys=self.enc, \n",
    "                                                    num_units=self.hidden_units, \n",
    "                                                    num_heads=self.num_heads, \n",
    "                                                    dropout_rate=self.dropout_rate,\n",
    "                                                    is_training=self.is_training,\n",
    "                                                    causality=False)\n",
    "                                \n",
    "            self.outputs = feedforward(self.enc, num_units=[4*self.hidden_units, self.hidden_units])\n",
    "                                    \n",
    "            self.logits = tf.layers.dense(self.outputs, self.label_vocab_size)\n",
    "            self.preds = tf.to_int32(tf.argmax(self.logits, axis=-1))\n",
    "            self.istarget = tf.to_float(tf.not_equal(self.y, 0))\n",
    "            self.acc = tf.reduce_sum(tf.to_float(tf.equal(self.preds, self.y))*self.istarget)/ (tf.reduce_sum(self.istarget))\n",
    "            tf.summary.scalar('acc', self.acc)\n",
    "                        \n",
    "            if self.is_training:  \n",
    "                self.y_smoothed = label_smoothing(tf.one_hot(self.y, depth=self.label_vocab_size))\n",
    "                self.loss = tf.nn.softmax_cross_entropy_with_logits_v2(logits=self.logits, labels=self.y_smoothed)\n",
    "                self.mean_loss = tf.reduce_sum(self.loss*self.istarget) / (tf.reduce_sum(self.istarget))\n",
    "                \n",
    "                self.global_step = tf.Variable(0, name='global_step', trainable=False)\n",
    "                self.optimizer = tf.train.AdamOptimizer(learning_rate=self.learning_rate, beta1=0.9, beta2=0.98, epsilon=1e-8)\n",
    "                self.train_op = self.optimizer.minimize(self.mean_loss, global_step=self.global_step)\n",
    "                        \n",
    "                tf.summary.scalar('mean_loss', self.mean_loss)\n",
    "                self.merged = tf.summary.merge_all()\n",
    "\n",
    "print('语言模型建立完成！')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 语言模型训练\n",
    "\n",
    "准备训练参数及数据"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "WARNING:tensorflow:From <ipython-input-13-89328ea9c47f>:23: dropout (from tensorflow.python.layers.core) is deprecated and will be removed in a future version.\n",
      "Instructions for updating:\n",
      "Use keras.layers.dropout instead.\n",
      "WARNING:tensorflow:From <ipython-input-10-c0da1b61dd9f>:15: dense (from tensorflow.python.layers.core) is deprecated and will be removed in a future version.\n",
      "Instructions for updating:\n",
      "Use keras.layers.dense instead.\n",
      "WARNING:tensorflow:From <ipython-input-11-0657c4b451a0>:8: conv1d (from tensorflow.python.layers.convolutional) is deprecated and will be removed in a future version.\n",
      "Instructions for updating:\n",
      "Use keras.layers.conv1d instead.\n",
      "WARNING:tensorflow:From <ipython-input-13-89328ea9c47f>:40: to_float (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.\n",
      "Instructions for updating:\n",
      "Use tf.cast instead.\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "语言模型参数：\n",
      "[('dropout_rate', 0.2), ('hidden_units', 512), ('input_vocab_size', 353), ('is_training', True), ('label_vocab_size', 415), ('learning_rate', 0.0003), ('max_length', 100), ('num_blocks', 6), ('num_heads', 8)]\n"
     ]
    }
   ],
   "source": [
    "def language_model_hparams():\n",
    "    params = HParams(\n",
    "        num_heads = 8,\n",
    "        num_blocks = 6,\n",
    "        input_vocab_size = 50,\n",
    "        label_vocab_size = 50,\n",
    "        max_length = 100,\n",
    "        hidden_units = 512,\n",
    "        dropout_rate = 0.2,\n",
    "        learning_rate = 0.0003,\n",
    "        is_training = True)\n",
    "    return params\n",
    "\n",
    "language_model_args = language_model_hparams()\n",
    "language_model_args.input_vocab_size = len(train_data.pin_vocab)\n",
    "language_model_args.label_vocab_size = len(train_data.han_vocab)\n",
    "language = language_model(language_model_args)\n",
    "\n",
    "print('语言模型参数：')\n",
    "print(language_model_args)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "训练语言模型"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "训练轮数epochs： 20\n",
      "\n",
      "开始训练！\n",
      "第 1 个 epoch : average loss =  6.218134140968322\n",
      "第 2 个 epoch : average loss =  3.2289454698562623\n",
      "第 3 个 epoch : average loss =  1.8876620471477508\n",
      "第 4 个 epoch : average loss =  1.291277027130127\n",
      "第 5 个 epoch : average loss =  1.1049616515636445\n",
      "第 6 个 epoch : average loss =  1.0946208953857421\n",
      "第 7 个 epoch : average loss =  1.082367479801178\n",
      "第 8 个 epoch : average loss =  1.0920936942100525\n",
      "第 9 个 epoch : average loss =  1.0675474047660827\n",
      "第 10 个 epoch : average loss =  1.0946592748165132\n",
      "第 11 个 epoch : average loss =  1.1080005645751954\n",
      "第 12 个 epoch : average loss =  1.0991867125034331\n",
      "第 13 个 epoch : average loss =  1.088491427898407\n",
      "第 14 个 epoch : average loss =  1.10106263756752\n",
      "第 15 个 epoch : average loss =  1.0777182012796402\n",
      "第 16 个 epoch : average loss =  1.0825719654560089\n",
      "第 17 个 epoch : average loss =  1.0695657193660737\n",
      "第 18 个 epoch : average loss =  1.0671538591384888\n",
      "第 19 个 epoch : average loss =  1.04828782081604\n",
      "第 20 个 epoch : average loss =  1.0553287982940673\n",
      "\n",
      "训练完成，保存模型\n"
     ]
    }
   ],
   "source": [
    "epochs = 20\n",
    "print(\"训练轮数epochs：\",epochs)\n",
    "\n",
    "print(\"\\n开始训练！\")\n",
    "with language.graph.as_default():\n",
    "    saver = tf.train.Saver()\n",
    "with tf.Session(graph=language.graph) as sess:\n",
    "    merged = tf.summary.merge_all()\n",
    "    sess.run(tf.global_variables_initializer())\n",
    "    if os.path.exists('./speech_recognition/language_model/model.meta'):\n",
    "        print('加载语言模型')\n",
    "        saver.restore(sess, './speech_recognition/language_model/model')\n",
    "    writer = tf.summary.FileWriter('./speech_recognition/language_model/tensorboard', tf.get_default_graph())\n",
    "    for k in range(epochs):\n",
    "        total_loss = 0\n",
    "        batch = train_data.get_language_model_batch()\n",
    "        for i in range(batch_num):\n",
    "            input_batch, label_batch = next(batch)\n",
    "            feed = {language.x: input_batch, language.y: label_batch}\n",
    "            cost,_ = sess.run([language.mean_loss,language.train_op], feed_dict=feed)\n",
    "            total_loss += cost\n",
    "            if (k * batch_num + i) % 10 == 0:\n",
    "                rs=sess.run(merged, feed_dict=feed)\n",
    "                writer.add_summary(rs, k * batch_num + i)\n",
    "        print('第', k+1, '个 epoch', ': average loss = ', total_loss/batch_num)\n",
    "    print(\"\\n训练完成，保存模型\")\n",
    "    saver.save(sess, './speech_recognition/language_model/model')\n",
    "    writer.close()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 模型测试\n",
    "\n",
    "准备解码所需的字典，需与训练时保持一致；也可以将字典保存到本地，直接读取。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "metadata": {},
   "outputs": [],
   "source": [
    "data_args = data_hparams()\n",
    "data_args.data_length = 20 # 重新训练需要注释该行\n",
    "train_data = get_data(data_args)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "准备测试所需数据，不必和训练数据一致。\n",
    "\n",
    "在本实践中，由于教学原因演示数据集及模型参数均较小，故不区分训练集和测试集。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "metadata": {},
   "outputs": [],
   "source": [
    "test_data = get_data(data_args)\n",
    "acoustic_model_batch = test_data.get_acoustic_model_batch()\n",
    "language_model_batch = test_data.get_language_model_batch()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "加载训练好的声学模型"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 18,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "__________________________________________________________________________________________________\n",
      "Layer (type)                    Output Shape         Param #     Connected to                     \n",
      "==================================================================================================\n",
      "the_inputs (InputLayer)         (None, None, 200, 1) 0                                            \n",
      "__________________________________________________________________________________________________\n",
      "conv2d_21 (Conv2D)              (None, None, 200, 32 320         the_inputs[0][0]                 \n",
      "__________________________________________________________________________________________________\n",
      "batch_normalization_21 (BatchNo (None, None, 200, 32 128         conv2d_21[0][0]                  \n",
      "__________________________________________________________________________________________________\n",
      "conv2d_22 (Conv2D)              (None, None, 200, 32 9248        batch_normalization_21[0][0]     \n",
      "__________________________________________________________________________________________________\n",
      "batch_normalization_22 (BatchNo (None, None, 200, 32 128         conv2d_22[0][0]                  \n",
      "__________________________________________________________________________________________________\n",
      "max_pooling2d_7 (MaxPooling2D)  (None, None, 100, 32 0           batch_normalization_22[0][0]     \n",
      "__________________________________________________________________________________________________\n",
      "conv2d_23 (Conv2D)              (None, None, 100, 64 18496       max_pooling2d_7[0][0]            \n",
      "__________________________________________________________________________________________________\n",
      "batch_normalization_23 (BatchNo (None, None, 100, 64 256         conv2d_23[0][0]                  \n",
      "__________________________________________________________________________________________________\n",
      "conv2d_24 (Conv2D)              (None, None, 100, 64 36928       batch_normalization_23[0][0]     \n",
      "__________________________________________________________________________________________________\n",
      "batch_normalization_24 (BatchNo (None, None, 100, 64 256         conv2d_24[0][0]                  \n",
      "__________________________________________________________________________________________________\n",
      "max_pooling2d_8 (MaxPooling2D)  (None, None, 50, 64) 0           batch_normalization_24[0][0]     \n",
      "__________________________________________________________________________________________________\n",
      "conv2d_25 (Conv2D)              (None, None, 50, 128 73856       max_pooling2d_8[0][0]            \n",
      "__________________________________________________________________________________________________\n",
      "batch_normalization_25 (BatchNo (None, None, 50, 128 512         conv2d_25[0][0]                  \n",
      "__________________________________________________________________________________________________\n",
      "conv2d_26 (Conv2D)              (None, None, 50, 128 147584      batch_normalization_25[0][0]     \n",
      "__________________________________________________________________________________________________\n",
      "batch_normalization_26 (BatchNo (None, None, 50, 128 512         conv2d_26[0][0]                  \n",
      "__________________________________________________________________________________________________\n",
      "max_pooling2d_9 (MaxPooling2D)  (None, None, 25, 128 0           batch_normalization_26[0][0]     \n",
      "__________________________________________________________________________________________________\n",
      "conv2d_27 (Conv2D)              (None, None, 25, 128 147584      max_pooling2d_9[0][0]            \n",
      "__________________________________________________________________________________________________\n",
      "batch_normalization_27 (BatchNo (None, None, 25, 128 512         conv2d_27[0][0]                  \n",
      "__________________________________________________________________________________________________\n",
      "conv2d_28 (Conv2D)              (None, None, 25, 128 147584      batch_normalization_27[0][0]     \n",
      "__________________________________________________________________________________________________\n",
      "batch_normalization_28 (BatchNo (None, None, 25, 128 512         conv2d_28[0][0]                  \n",
      "__________________________________________________________________________________________________\n",
      "conv2d_29 (Conv2D)              (None, None, 25, 128 147584      batch_normalization_28[0][0]     \n",
      "__________________________________________________________________________________________________\n",
      "batch_normalization_29 (BatchNo (None, None, 25, 128 512         conv2d_29[0][0]                  \n",
      "__________________________________________________________________________________________________\n",
      "conv2d_30 (Conv2D)              (None, None, 25, 128 147584      batch_normalization_29[0][0]     \n",
      "__________________________________________________________________________________________________\n",
      "batch_normalization_30 (BatchNo (None, None, 25, 128 512         conv2d_30[0][0]                  \n",
      "__________________________________________________________________________________________________\n",
      "reshape_3 (Reshape)             (None, None, 3200)   0           batch_normalization_30[0][0]     \n",
      "__________________________________________________________________________________________________\n",
      "dropout_5 (Dropout)             (None, None, 3200)   0           reshape_3[0][0]                  \n",
      "__________________________________________________________________________________________________\n",
      "dense_5 (Dense)                 (None, None, 256)    819456      dropout_5[0][0]                  \n",
      "__________________________________________________________________________________________________\n",
      "dropout_6 (Dropout)             (None, None, 256)    0           dense_5[0][0]                    \n",
      "__________________________________________________________________________________________________\n",
      "the_labels (InputLayer)         (None, None)         0                                            \n",
      "__________________________________________________________________________________________________\n",
      "dense_6 (Dense)                 (None, None, 353)    90721       dropout_6[0][0]                  \n",
      "__________________________________________________________________________________________________\n",
      "input_length (InputLayer)       (None, 1)            0                                            \n",
      "__________________________________________________________________________________________________\n",
      "label_length (InputLayer)       (None, 1)            0                                            \n",
      "__________________________________________________________________________________________________\n",
      "ctc (Lambda)                    (None, 1)            0           the_labels[0][0]                 \n",
      "                                                                 dense_6[0][0]                    \n",
      "                                                                 input_length[0][0]               \n",
      "                                                                 label_length[0][0]               \n",
      "==================================================================================================\n",
      "Total params: 1,790,785\n",
      "Trainable params: 1,788,865\n",
      "Non-trainable params: 1,920\n",
      "__________________________________________________________________________________________________\n",
      "声学模型参数：\n",
      "[('is_training', True), ('learning_rate', 0.0008), ('vocab_size', 353)]\n",
      "\n",
      "加载声学模型完成！\n"
     ]
    }
   ],
   "source": [
    "# 构建声学模型并载入训练好的权重\n",
    "acoustic_model_args = acoustic_model_hparams()\n",
    "acoustic_model_args.vocab_size = len(train_data.acoustic_vocab)\n",
    "acoustic = acoustic_model(acoustic_model_args)\n",
    "acoustic.ctc_model.summary()\n",
    "acoustic.ctc_model.load_weights('./speech_recognition/acoustic_model/model.h5')\n",
    "\n",
    "print('声学模型参数：')\n",
    "print(acoustic_model_args)\n",
    "print('\\n加载声学模型完成！')"
   ]
  },
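  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "上方 summary 中的参数量可以手工验算。以 conv2d_30 为例（这里假设其卷积核为 3×3、输入输出通道均为 128，与 147584 这一数值吻合），下面给出一个简单的验算草稿：\n",
    "\n",
    "```python\n",
    "# 卷积层参数量 = 核高 x 核宽 x 输入通道 x 输出通道 + 偏置\n",
    "k, c_in, c_out = 3, 128, 128\n",
    "conv_params = k * k * c_in * c_out + c_out\n",
    "# BatchNormalization 每通道 4 个参数：gamma、beta、滑动均值、滑动方差\n",
    "bn_params = 4 * c_out\n",
    "# 输出层 dense_6：256 维特征映射到 353 个拼音类别\n",
    "dense_params = 256 * 353 + 353\n",
    "print(conv_params, bn_params, dense_params)  # 147584 512 90721\n",
    "```\n",
    "\n",
    "其中 BatchNormalization 的滑动均值与滑动方差不参与训练，这正是 summary 中 Non-trainable params 的来源。"
   ]
  },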
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "加载训练好的语言模型"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 19,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "WARNING:tensorflow:From /home/ma-user/anaconda3/envs/TensorFlow-1.13.1/lib/python3.6/site-packages/tensorflow/python/training/saver.py:1266: checkpoint_exists (from tensorflow.python.training.checkpoint_management) is deprecated and will be removed in a future version.\n",
      "Instructions for updating:\n",
      "Use standard file APIs to check for files with this prefix.\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "语言模型参数：\n",
      "[('dropout_rate', 0.2), ('hidden_units', 512), ('input_vocab_size', 353), ('is_training', True), ('label_vocab_size', 415), ('learning_rate', 0.0003), ('max_length', 100), ('num_blocks', 6), ('num_heads', 8)]\n",
      "\n",
      "加载语言模型完成！\n"
     ]
    }
   ],
   "source": [
    "# 构建语言模型并恢复训练好的 checkpoint\n",
    "language_model_args = language_model_hparams()\n",
    "language_model_args.input_vocab_size = len(train_data.pin_vocab)\n",
    "language_model_args.label_vocab_size = len(train_data.han_vocab)\n",
    "language = language_model(language_model_args)\n",
    "sess = tf.Session(graph=language.graph)\n",
    "with language.graph.as_default():\n",
    "    saver = tf.train.Saver()\n",
    "with sess.as_default():\n",
    "    saver.restore(sess, './speech_recognition/language_model/model')\n",
    "\n",
    "print('语言模型参数：')\n",
    "print(language_model_args)\n",
    "print('\\n加载语言模型完成！')"
   ]
  },
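  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "上面打印的语言模型参数中，hidden_units=512、num_heads=8，即每个注意力头的维度为 512/8=64。Transformer 的核心是缩放点积注意力，下面用 numpy 给出一个极简示意（仅作原理说明，与本实验 language_model 的具体实现无直接依赖）：\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "def scaled_dot_product_attention(Q, K, V):\n",
    "    # 注意力权重 = softmax(Q @ K^T / sqrt(d_k))\n",
    "    d_k = Q.shape[-1]\n",
    "    scores = Q @ K.T / np.sqrt(d_k)\n",
    "    scores -= scores.max(axis=-1, keepdims=True)  # 数值稳定\n",
    "    weights = np.exp(scores)\n",
    "    weights /= weights.sum(axis=-1, keepdims=True)\n",
    "    return weights @ V\n",
    "\n",
    "d_k = 512 // 8               # 每头维度 64\n",
    "Q = np.random.randn(5, d_k)  # 5 个时间步\n",
    "out = scaled_dot_product_attention(Q, Q, Q)\n",
    "print(out.shape)             # (5, 64)\n",
    "```\n",
    "\n",
    "多头注意力就是将 512 维向量切成 8 份，各头独立做上述运算后再拼接。"
   ]
  },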
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "定义解码器"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 20,
   "metadata": {},
   "outputs": [],
   "source": [
    "def decode_ctc(num_result, num2word):\n",
    "    # num_result: 声学模型输出，形状为 (1, 帧数, 词表大小) 的概率矩阵\n",
    "    in_len = np.array([num_result.shape[1]], dtype=np.int32)\n",
    "    # 贪心 CTC 解码：逐帧取最大概率标签，合并重复并去除空白符\n",
    "    t = K.ctc_decode(num_result, in_len, greedy=True, beam_width=10, top_paths=1)\n",
    "    v = K.get_value(t[0][0])[0]\n",
    "    # 将数字标签映射回拼音\n",
    "    text = [num2word[i] for i in v]\n",
    "    return v, text"
   ]
  },
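  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "K.ctc_decode 在 greedy=True 时的行为可以用纯 numpy 勾勒出来：逐帧取最大概率标签，合并相邻重复标签，再去掉空白符。下面是一个示意草稿（这里假设空白符位于词表最后一位，Keras 的 CTC 默认即采用此约定）：\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "def greedy_ctc(probs, blank):\n",
    "    # 1) 每帧取概率最大的标签，得到一条对齐路径\n",
    "    path = probs.argmax(axis=-1)\n",
    "    # 2) 合并相邻重复标签，并去掉空白符\n",
    "    out, prev = [], None\n",
    "    for p in path:\n",
    "        if p != prev and p != blank:\n",
    "            out.append(int(p))\n",
    "        prev = p\n",
    "    return out\n",
    "\n",
    "# 4 类标签（索引 3 为空白符），6 帧路径 [空,0,0,空,1,1] 应解码为 [0, 1]\n",
    "frames = np.array([3, 0, 0, 3, 1, 1])\n",
    "probs = np.eye(4)[frames]          # 用 one-hot 充当概率矩阵\n",
    "print(greedy_ctc(probs, blank=3))  # [0, 1]\n",
    "```\n",
    "\n",
    "真实场景下 decode_ctc 直接调用 K.ctc_decode，在 TensorFlow 图中完成同样的合并与去空白操作。"
   ]
  },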
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "使用搭建好的语音识别系统进行测试。\n",
    "\n",
    "下面展示 10 条语音示例的原文拼音与识别结果、原文汉字与识别结果，便于直观比较两级模型的识别效果。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 21,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "示例 1\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "WARNING:tensorflow:From /home/ma-user/anaconda3/envs/TensorFlow-1.13.1/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py:4303: sparse_to_dense (from tensorflow.python.ops.sparse_ops) is deprecated and will be removed in a future version.\n",
      "Instructions for updating:\n",
      "Create a `tf.sparse.SparseTensor` and use `tf.sparse.to_dense` instead.\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "原文拼音： lv4 shi4 yang2 chun1 yan1 jing3 da4 kuai4 wen2 zhang1 de di3 se4 si4 yue4 de lin2 luan2 geng4 shi4 lv4 de2 xian1 huo2 xiu4 mei4 shi1 yi4 ang4 ran2\n",
      "识别结果： lv4 shi4 yang2 chun1 yan1 jing3 da4 kuai4 wen2 zhang1 de di3 se4 si4 yue4 de lin2 luan2 geng4 shi4 lv4 de2 xian1 huo2 xiu4 mei4 shi1 yi4 ang4 ran2\n",
      "原文汉字： 绿是阳春烟景大块文章的底色四月的林峦更是绿得鲜活秀媚诗意盎然\n",
      "识别结果： 绿是阳春烟景大块文章的底色四月的林峦更是绿得鲜活秀媚诗意盎然\n",
      "\n",
      "示例 2\n",
      "原文拼音： ta1 jin3 ping2 yao1 bu4 de li4 liang4 zai4 yong3 dao4 shang4 xia4 fan1 teng2 yong3 dong4 she2 xing2 zhuang4 ru2 hai3 tun2 yi4 zhi2 yi3 yi1 tou2 de you1 shi4 ling3 xian1\n",
      "识别结果： ta1 jin3 ping2 yao1 bu4 de li4 liang4 zai4 yong3 dao4 shang4 xia4 fan1 teng2 yong3 dong4 she2 xing2 zhuang4 ru2 hai3 tun2 yi4 zhi2 yi3 yi1 tou2 de you1 shi4 ling3 xian1\n",
      "原文汉字： 他仅凭腰部的力量在泳道上下翻腾蛹动蛇行状如海豚一直以一头的优势领先\n",
      "识别结果： 他仅凭腰部的力量在泳道上下翻腾泳动蛇行状如海豚一直以一头的优势领先\n",
      "\n",
      "示例 3\n",
      "原文拼音： pao4 yan3 da3 hao3 le zha4 yao4 zen3 me zhuang1 yue4 zheng4 cai2 yao3 le yao3 ya2 shu1 di4 tuo1 qu4 yi1 fu2 guang1 bang3 zi chong1 jin4 le shui3 cuan4 dong4\n",
      "识别结果： pao4 yan3 da3 hao3 le zha4 yao4 zen3 me zhuang1 yue4 zheng4 cai2 yao3 le yao3 ya2 shu1 di4 tuo1 qu4 yi1 fu2 guang1 bang3 zi chong1 jin4 le shui3 cuan4 dong4\n",
      "原文汉字： 炮眼打好了炸药怎么装岳正才咬了咬牙倏地脱去衣服光膀子冲进了水窜洞\n",
      "识别结果： 炮眼打好了炸药怎么装岳正才咬了咬牙倏地脱去衣服光膀子冲进了水窜洞\n",
      "\n",
      "示例 4\n",
      "原文拼音： ke3 shei2 zhi1 wen2 wan2 hou4 ta1 yi1 zhao4 jing4 zi zhi1 jian4 zuo3 xia4 yan3 jian3 de xian4 you4 cu1 you4 hei1 yu3 you4 ce4 ming2 xian3 bu2 dui4 cheng1\n",
      "识别结果： ke3 shei2 zhi1 wen2 wan2 hou4 ta1 yi1 zhao4 jing4 zi zhi1 jian4 zuo3 xia4 yan3 jian3 de xian4 you4 cu1 you4 hei1 yu3 you4 ce4 ming2 xian3 bu2 dui4 cheng1\n",
      "原文汉字： 可谁知纹完后她一照镜子只见左下眼睑的线又粗又黑与右侧明显不对称\n",
      "识别结果： 可谁知纹完后她一照镜子知见左下眼睑的线又粗又黑与又侧明显不对称\n",
      "\n",
      "示例 5\n",
      "原文拼音： yi1 jin4 men2 wo3 bei4 jing1 dai1 le zhe4 hu4 ming2 jiao4 pang2 ji2 de lao3 nong2 shi4 kang4 mei3 yuan2 chao2 fu4 shang1 hui2 xiang1 de lao3 bing1 qi1 zi3 chang2 nian2 you3 bing4 jia1 tu2 si4 bi4 yi1 pin2 ru2 xi3\n",
      "识别结果： yi1 jin4 men2 wo3 bei4 jing1 dai1 le zhe4 hu4 ming2 jiao4 pang2 ji2 de lao3 nong2 shi4 kang4 mei3 yuan2 chao2 fu4 shang1 hui2 xiang1 de lao3 bing1 qi1 zi3 chang2 nian2 you3 bing4 jia1 tu2 si4 bi4 yi1 pin2 ru2 xi3\n",
      "原文汉字： 一进门我被惊呆了这户名叫庞吉的老农是抗美援朝负伤回乡的老兵妻子长年有病家徒四壁一贫如洗\n",
      "识别结果： 一进门我被惊呆了这户名叫庞吉的老农是抗美援朝负伤回乡的老兵妻子长年有病家徒四壁一贫如洗\n",
      "\n",
      "示例 6\n",
      "原文拼音： zou3 chu1 cun1 zi lao3 yuan3 lao3 yuan3 wo3 hai2 hui2 tou2 zhang1 wang4 na4 ge4 an1 ning2 tian2 jing4 de xiao3 yuan4 na4 ge4 shi3 wo3 zhong1 shen1 nan2 wang4 de xiao3 yuan4\n",
      "识别结果： zou3 chu1 cun1 zi lao3 yuan3 lao3 yuan3 wo3 hai2 hui2 tou2 zhang1 wang4 na4 ge4 an1 ning2 tian2 jing4 de xiao3 yuan4 na4 ge4 shi3 wo3 zhong1 shen1 nan2 wang4 de xiao3 yuan4\n",
      "原文汉字： 走出村子老远老远我还回头张望那个安宁恬静的小院那个使我终身难忘的小院\n",
      "识别结果： 走出村子老远老远我还回头张望那个安宁恬静的小院那个使我终身难望的小院\n",
      "\n",
      "示例 7\n",
      "原文拼音： er4 yue4 si4 ri4 zhu4 jin4 xin1 xi1 men2 wai4 luo2 jia1 nian3 wang2 jia1 gang1 zhu1 zi4 qing1 wen2 xun4 te4 di4 cong2 dong1 men2 wai4 gan3 lai2 qing4 he4\n",
      "识别结果： er4 yue4 si4 ri4 zhu4 jin4 xin1 xi1 men2 wai4 luo2 jia1 nian3 wang2 jia1 gang1 zhu1 zi4 qing1 wen2 xun4 te4 di4 cong2 dong1 men2 wai4 gan3 lai2 qing4 he4\n",
      "原文汉字： 二月四日住进新西门外罗家碾王家冈朱自清闻讯特地从东门外赶来庆贺\n",
      "识别结果： 二月四日住进新西门外罗家碾王家冈朱自清闻讯特地从东门外赶来庆贺\n",
      "\n",
      "示例 8\n",
      "原文拼音： dan1 wei4 bu2 shi4 wo3 lao3 die1 kai1 de ping2 shen2 me yao4 yi1 ci4 er4 ci4 zhao4 gu4 wo3 wo3 bu4 neng2 ba3 zi4 ji3 de bao1 fu2 wang3 xue2 xiao4 shuai3\n",
      "识别结果： dan1 wei4 bu2 shi4 wo3 lao3 die1 kai1 de ping2 shen2 me yao4 yi1 ci4 er4 ci4 zhao4 gu4 wo3 wo3 bu4 neng2 ba3 zi4 ji3 de bao1 fu2 wang3 xue2 xiao4 shuai3\n",
      "原文汉字： 单位不是我老爹开的凭什么要一次二次照顾我我不能把自己的包袱往学校甩\n",
      "识别结果： 单位不是我老爹开的凭什么要一次二次照顾我我不能把自己的包袱往学校甩\n",
      "\n",
      "示例 9\n",
      "原文拼音： dou1 yong4 cao3 mao4 huo4 ge1 bo zhou3 hu4 zhe wan3 lie4 lie4 qie ju1 chuan1 guo4 lan4 ni2 tang2 ban1 de yuan4 ba4 pao3 hui2 zi4 ji3 de su4 she4 qu4 le\n",
      "识别结果： dou1 yong4 cao3 mao4 huo4 ge1 bo zhou3 hu4 zhe wan3 lie4 lie4 qie ju1 chuan1 guo4 lan4 ni2 tang2 ban1 de yuan4 ba4 pao3 hui2 zi4 ji3 de su4 she4 qu4 le\n",
      "原文汉字： 都用草帽或胳膊肘护着碗趔趔趄趄穿过烂泥塘般的院坝跑回自己的宿舍去了\n",
      "识别结果： 都用草帽或胳膊肘护着碗趔趔趄趄穿过烂泥塘般的院坝跑回自己的宿舍去了\n",
      "\n",
      "示例 10\n",
      "原文拼音： xiang1 gang3 yan3 yi4 quan1 huan1 ying2 mao2 a1 min3 jia1 meng2 wu2 xian4 tai2 yu3 hua2 xing1 yi1 xie1 zhong4 da4 de yan3 chang4 huo2 dong4 dou1 yao1 qing3 ta1 chu1 chang3 you3 ji3 ci4 hai2 te4 yi4 an1 pai2 ya1 zhou4 yan3 chu1\n",
      "识别结果： xiang1 gang3 yan3 yi4 quan1 huan1 ying2 mao2 a1 min3 jia1 meng2 wu2 xian4 tai2 yu3 hua2 xing1 yi1 xie1 zhong4 da4 de yan3 chang4 huo2 dong4 dou1 yao1 qing3 ta1 chu1 chang3 you3 ji3 ci4 hai2 te4 yi4 an1 pai2 ya1 zhou4 yan3 chu1\n",
      "原文汉字： 香港演艺圈欢迎毛阿敏加盟无线台与华星一些重大的演唱活动都邀请她出场有几次还特意安排压轴演出\n",
      "识别结果： 香港演意圈欢迎毛阿敏加盟无线台与华星一些重大的演唱活动都邀请她出场有几次还特意安排压轴演出\n"
     ]
    }
   ],
   "source": [
    "for i in range(10):\n",
    "    print('\\n示例', i+1)\n",
    "    # 声学模型识别：取一批测试数据，预测帧级拼音概率\n",
    "    inputs, outputs = next(acoustic_model_batch)\n",
    "    x = inputs['the_inputs']\n",
    "    y = inputs['the_labels'][0]\n",
    "    result = acoustic.model.predict(x, steps=1)\n",
    "    # 将数字结果转化为拼音文本\n",
    "    _, text = decode_ctc(result, train_data.acoustic_vocab)\n",
    "    text = ' '.join(text)\n",
    "    print('原文拼音：', ' '.join(train_data.acoustic_vocab[int(idx)] for idx in y))\n",
    "    print('识别结果：', text)\n",
    "    # 语言模型识别：将拼音序列翻译为汉字\n",
    "    with sess.as_default():\n",
    "        try:\n",
    "            _, y = next(language_model_batch)\n",
    "            text = text.strip('\\n').split(' ')\n",
    "            x = np.array([train_data.pin_vocab.index(pin) for pin in text]).reshape(1, -1)\n",
    "            preds = sess.run(language.preds, {language.x: x})\n",
    "            got = ''.join(train_data.han_vocab[idx] for idx in preds[0])\n",
    "            print('原文汉字：', ''.join(train_data.han_vocab[idx] for idx in y[0]))\n",
    "            print('识别结果：', got)\n",
    "        except StopIteration:\n",
    "            break\n",
    "sess.close()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "至此，一个基于 DFCNN 声学模型与 Transformer 语言模型的简易中文语音识别系统就搭建完成了。"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "TensorFlow-1.13.1",
   "language": "python",
   "name": "tensorflow-1.13.1"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.6.4"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
