{ "paper_id": "2018", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:26:59.172557Z" }, "title": "A Study on Mandarin Speech Recognition using Long Short-Term Memory Neural Network", "authors": [ { "first": "Chien-Hung", "middle": [], "last": "Lai", "suffix": "", "affiliation": { "institution": "Institute of Communication Engineering, National Chiao Tung University" }, "email": "" }, { "first": "Yih-Ru", "middle": [], "last": "Wang", "suffix": "", "affiliation": { "institution": "Institute of Communication Engineering, National Chiao Tung University" }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Chien-hung Lai and Yih-Ru Wang. Abstract. In recent years, neural networks have been widely applied in the field of speech recognition. In this paper, recurrent neural networks are used to train acoustic models, and a large-vocabulary Mandarin speech recognition system is built. Because a recurrent neural network has cyclic connections, it is better suited to modeling time-series signals than a conventional fully connected deep neural network. However, as a plain recurrent neural network unrolls over time during training, updating the weights by backpropagation suffers from gradient vanishing and gradient exploding, which forces training to stop and prevents the network from effectively capturing long-term memory dependencies. The Long Short-Term Memory (LSTM) model was therefore proposed to solve this problem. Building on this architecture, this study combines a convolutional neural network (CNN) and a deep neural network (DNN) to construct a CLDNN model. For training data, this study uses TCC300 (24 hours), AIShell (162 hours), and NER (111 hours), and adds a language model to build a large-vocabulary speech recognition system. To test the robustness of the system, test data recorded in three different environments are used, namely TCC300 (2.4 hours, read", "pdf_parse": { "paper_id": "2018", "_pdf_hash": "", "abstract": [ { "text": "Chien-hung Lai and Yih-Ru Wang. Abstract. In recent years, neural networks have been widely applied in the field of speech recognition. In this paper, recurrent neural networks are used to train acoustic models, and a large-vocabulary Mandarin speech recognition system is built. Because a recurrent neural network has cyclic connections, it is better suited to modeling time-series signals than a conventional fully connected deep neural network. However, as a plain recurrent neural network unrolls over time during training, updating the weights by backpropagation suffers from gradient vanishing and gradient exploding, which forces training to stop and prevents the network from effectively capturing long-term memory dependencies. The Long Short-Term Memory (LSTM) model was therefore proposed to solve this problem. Building on this architecture, this study combines a convolutional neural network (CNN) and a deep neural network (DNN) to construct a CLDNN model. For training data, this study uses TCC300 (24 hours), AIShell (162 hours), and NER (111 hours), and adds a language model to build a large-vocabulary speech recognition system. To test the robustness of the system, test data recorded in three different environments are used, namely TCC300 (2.4 hours, read", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "In recent years, artificial intelligence (AI) has become a keyword heard everywhere. Looking across history, there have been three waves of AI, and the rise of each wave has been inseparable from advances in speech recognition technology. Early speech recognition technology was developed by linguists who studied the relationship between acoustics and linguistics and organized it into a set of rule-based speech recognition systems; however, the variability between acoustics and linguistics cannot be described by rules alone, so data-driven methods such as machine learning were developed later, letting a machine automatically analyze labeled input data, extract rules from it, and make predictions on unknown data. In recent large vocabulary continuous speech recognition (Large Vocabulary Continuous Speech Recognition, LVCSR) systems, the acoustic model differs from the traditional Gaussian mixture model (Gaussian Mixture Model, GMM) (Reynolds, 2009) and uses deep neural networks (Deep Neural Network, DNN) (Zhang, Trmal, Povey & Khudanpur, 2014) ", "cite_spans": [ { "start": 419, "end": 435, "text": "(Reynolds, 2009)", "ref_id": "BIBREF1" }, { "start": 448, "end": 474, "text": "(Deep Neural Network, DNN)", "ref_id": null }, { "start": 475, "end": 514, "text": "(Zhang, Trmal, Povey & Khudanpur, 2014)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1."
} ], "back_matter": [ { "text": "... used as feature parameters for acoustic model training (Madikeri, Dey, Motlicek & Ferras, 2016), with the goal of learning speaker characteristics and increasing model robustness. Where training data is insufficient, semi-supervised learning (Manohar, Hadian, Povey & Khudanpur, 2018) can be used: untranscribed speech is collected, and the confidence score of the recognition result decides whether an utterance is added to the training data. As for the language model, future work includes solving the OOV problem caused by person names, classifying texts so that language models for different domains can be built, and quickly building and converting adapted language models.", "cite_spans": [ { "start": 15, "end": 55, "text": "(Madikeri, Dey, Motlicek & Ferras, 2016)", "ref_id": null }, { "start": 157, "end": 199, "text": "(Manohar, Hadian, Povey & Khudanpur, 2018)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Chien-Hung Lai and Yih-Ru Wang", "sec_num": null }, { "text": "Abdel-Hamid, O., Mohamed, A., Jiang, H., Deng, L., Penn, G., & Yu, D. (2014). Convolutional neural networks for speech recognition. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 22(10), 1533-1545. Povey, D., Ghoshal, A., Boulianne, G., Burget, L., Glembek, O., Goel, N., \u2026Vesel\u00fd, K. (2011). The Kaldi speech recognition toolkit. In Proceedings of IEEE ASRU 2011.", "cite_spans": [ { "start": 17, "end": 75, "text": "Mohamed, A., Jiang, H., Deng, L., Penn, G., & Yu, D. (2014)", "ref_id": null }, { "start": 76, "end": 169, "text": "Povey, D., Ghoshal, A., Boulianne, G., Burget, L., Glembek, O., Goel, N., \u2026Vesel\u00fd, K. 
(2011)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "References", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Purely sequence-trained neural networks for ASR based on lattice-free MMI", "authors": [ { "first": "D", "middle": [], "last": "Povey", "suffix": "" }, { "first": "V", "middle": [], "last": "Peddinti", "suffix": "" }, { "first": "D", "middle": [], "last": "Galvez", "suffix": "" }, { "first": "P", "middle": [], "last": "Ghahremani", "suffix": "" }, { "first": "V", "middle": [], "last": "Manohar", "suffix": "" }, { "first": "X", "middle": [], "last": "Na", "suffix": "" }, { "first": "S", "middle": [], "last": "Khudanpur", "suffix": "" } ], "year": 2016, "venue": "Proceedings of Interspeech", "volume": "", "issue": "", "pages": "2751--2755", "other_ids": { "DOI": [ "10.21437/Interspeech.2016-595" ] }, "num": null, "urls": [], "raw_text": "Povey, D., Peddinti, V., Galvez, D., Ghahremani, P., Manohar, V., Na, X., \u2026Khudanpur, S. (2016). Purely sequence-trained neural networks for ASR based on lattice-free MMI. In Proceedings of Interspeech 2016, 2751-2755. doi: 10.21437/Interspeech.2016-595", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Gaussian mixture models", "authors": [ { "first": "D", "middle": [ "A" ], "last": "Reynolds", "suffix": "" } ], "year": 2009, "venue": "Encyclopedia of Biometrics", "volume": "", "issue": "", "pages": "659--663", "other_ids": { "DOI": [ "10.1007/978-0-387-73003-5_196" ] }, "num": null, "urls": [], "raw_text": "Reynolds, D. A. (2009). Gaussian mixture models. In S. Z. Li (Ed.), Encyclopedia of Biometrics (pp. 659-663).
doi: 10.1007/978-0-387-73003-5_196", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Convolutional Long Short-Term Memory Fully Connected Deep Neural Networks", "authors": [ { "first": "T", "middle": [ "N" ], "last": "Sainath", "suffix": "" }, { "first": "O", "middle": [], "last": "Vinyals", "suffix": "" }, { "first": "A", "middle": [], "last": "Senior", "suffix": "" }, { "first": "H", "middle": [], "last": "Sak", "suffix": "" } ], "year": 2015, "venue": "Proceedings of 2015 IEEE International Conference on Acoustics Speech and Signal Processing", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.1109/ICASSP.2015.7178838" ] }, "num": null, "urls": [], "raw_text": "Sainath, T. N., Vinyals, O., Senior, A., & Sak, H. (2015). Convolutional Long Short-Term Memory Fully Connected Deep Neural Networks. In Proceedings of 2015 IEEE International Conference on Acoustics Speech and Signal Processing. doi: 10.1109/ICASSP.2015.7178838", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Long Short-Term Memory Based Recurrent Neural Network Architectures for Large Vocabulary Speech Recognition", "authors": [ { "first": "H", "middle": [], "last": "Sak", "suffix": "" }, { "first": "A", "middle": [], "last": "Senior", "suffix": "" }, { "first": "F", "middle": [], "last": "Beaufays", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1402.1128" ] }, "num": null, "urls": [], "raw_text": "Sak, H., Senior, A., & Beaufays, F. (2014). Long Short-Term Memory Based Recurrent Neural Network Architectures for Large Vocabulary Speech Recognition. 
Retrieved from arXiv:1402.1128", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Long short-term memory recurrent neural network architectures for large scale acoustic modeling", "authors": [ { "first": "H", "middle": [], "last": "Sak", "suffix": "" }, { "first": "A", "middle": [], "last": "Senior", "suffix": "" }, { "first": "F", "middle": [], "last": "Beaufays", "suffix": "" } ], "year": 2014, "venue": "Proceedings of INTERSPEECH 2014", "volume": "", "issue": "", "pages": "338--342", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sak, H., Senior, A., & Beaufays, F. (2014). Long short-term memory recurrent neural network architectures for large scale acoustic modeling. In Proceedings of INTERSPEECH 2014, 338-342.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Fast and accurate recurrent neural network acoustic models for speech recognition", "authors": [ { "first": "H", "middle": [], "last": "Sak", "suffix": "" }, { "first": "A", "middle": [], "last": "Senior", "suffix": "" }, { "first": "K", "middle": [], "last": "Rao", "suffix": "" }, { "first": "F", "middle": [], "last": "Beaufays", "suffix": "" } ], "year": 2015, "venue": "Proceedings of Sixteenth Annual Conference of the International Speech Communication Association", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sak, H., Senior, A., Rao, K., & Beaufays, F. (2015). Fast and accurate recurrent neural network acoustic models for speech recognition. 
In Proceedings of Sixteenth Annual Conference of the International Speech Communication Association.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Improving deep neural network acoustic models using generalized maxout networks", "authors": [ { "first": "X", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "J", "middle": [], "last": "Trmal", "suffix": "" }, { "first": "D", "middle": [], "last": "Povey", "suffix": "" }, { "first": "S", "middle": [], "last": "Khudanpur", "suffix": "" } ], "year": 2014, "venue": "Proceedings of ICASSP 2014", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.1109/ICASSP.2014.6853589" ] }, "num": null, "urls": [], "raw_text": "Zhang, X., Trmal, J., Povey, D., & Khudanpur, S. (2014). Improving deep neural network acoustic models using generalized maxout networks. In Proceedings of ICASSP 2014. doi: 10.1109/ICASSP.2014.6853589", "links": null } }, "ref_entries": { "TABREF0": { "num": null, "type_str": "table", "content": "
Because training deep neural networks is very time-consuming, this study uses a graphics processing unit (Graphics Processing Unit, GPU) to train the models containing DNN, CNN (Abdel-Hamid et al., 2014) (Abdel-Hamid, Mohamed, Jiang & Penn, 2012), and LSTM (Sak, Senior & Beaufays, 2014a) (Sak, Senior & Beaufays, 2014b) layers. Training machine: CPU Intel Core i7-8700K @ 3.70GHz; RAM 64 GB DDR4-3000; HDD 4 TB SATA-III 7200RPM; GPU NVIDIA GeForce GTX 1080TI; OS Arch Linux 4.17.5-1 64bit.

Table 2. GPU specification: model NVIDIA GeForce GTX 1080TI; CUDA cores 3584; base clock 1480 MHz; boost clock 1582 MHz; memory clock 11 Gbps; memory size 11264 MB; memory interface GDDR5X; memory bandwidth 484 GB/s.

The sizes of the recurrent (Recurrent) layers are all 256, and during training, to keep the gradient from exploding, a clipping threshold is set in the recurrence.

In decoding we use the Viterbi algorithm, which searches the state sequence output by the neural network for the best path; because the plain Viterbi algorithm is too time-consuming, a beam searching algorithm is added, with a maximum number of surviving states (Max-active states) and a beam value (Beam), to find the hypotheses of the current frame.

The real-time factor, given in Eq. (1), expresses the average time needed to decode one frame; since this study sets the frame interval to 10 ms, it can also be interpreted as the decoding time the recognition system needs per second of speech. If the system being built is a real-time system, the RTF must be smaller than 1.0. The analysis of recognition errors distinguishes three types: substitution, insertion, and deletion errors; the recognition error rate is computed as in Eq. (2).

The test data is likewise TCC300, with 26357 syllables; the recognition results are shown in Table 9. Compared with a conventional neural network, the convolutional neural network helps with feature learning, with a relative improvement of about 5%; the long short-term memory model is very beneficial for building the acoustic model, with a relative improvement as high as about 25%, but a recurrent neural network is relatively time-consuming in decoding: judging from the RTFs of CDNN and CLDNN, nearly two times more time is needed.

Text corpora for the language model include:
- Chinese Wikipedia corpus (Wiki): its content is broad and relatively up to date, which makes the language model more diverse and enlarges the database.
- TCC300: contains words, short sentences, and long sentences, selected from the Academia Sinica five-million-word tagged corpus.

Figure 3. CLDNN model architecture diagram.

Table 3 (continued; NCTU and NCKU portions, long texts): NCTU: male 50 speakers, 75059 syllables, 622 files; female 50, 73555, 616; total 100, 148614, 1238. NCKU: male 50, 63127, 588; female 50, 68749, 582; total 100, 131876, 1170.

1 Mandarin Microphone Speech Corpus-TCC300. http://www.aclclp.org.tw/use_mat_c.php#tcc300edu

Table 4 (continued; Other programs): 不太乖學堂 BG: 9.5 hours, 143138 syllables, 1586 files; 星期講座 WK: 8.4, 113202, 1102; 遇見幸福幼兒園 YX: 5.6, 90419, 826; 收藏人生 SR: 16.5, 280074, 2670; 雙語新聞 SY: 34.5, 434851, 4015.

Table 6. AIShell corpus speaker information: age 16-25: 316 speakers; 26-40: 71; over 40: 13; total 400. Region: north 333; south 56; other 11; total 400.

Figure 1. Internal structure of the long short-term memory.
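The real-time factor of Eq. (1) and the error rate of Eq. (2) described above can be sketched as follows; this is a minimal illustration (the function names and the sample counts are ours, not from the paper), assuming the 10 ms frame shift stated in the text.

```python
# Sketch of the two evaluation measures used above. The helper names and the
# example counts are hypothetical; the 10 ms frame shift follows the text.

def real_time_factor(decode_seconds, n_frames, frame_shift_s=0.01):
    """RTF: average decoding time per frame divided by the frame duration.

    With a 10 ms frame shift this equals (decode_seconds / n_frames) * 100,
    i.e. the decoding time needed per second of speech."""
    return decode_seconds / (n_frames * frame_shift_s)

def error_rate(substitutions, insertions, deletions, n_ref):
    """ER = (S + I + D) / N * 100%, over N reference syllables."""
    return (substitutions + insertions + deletions) / n_ref * 100.0

# Example with made-up counts: 15 s to decode 100 s of speech (10000 frames).
print(real_time_factor(15.0, 10000))   # 0.15, i.e. faster than real time
print(error_rate(300, 50, 50, 26357))  # syllable error rate in percent
```

An RTF below 1.0 is exactly the real-time condition mentioned above: the system decodes each second of speech in less than one second.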
3. Databases (語料庫介紹)

This section introduces the corpora used in the experiments. TCC300, NER, and AIShell are used as training data. To test the recognition system's ability in different environments, TCC300 and the NER corpus are used as test data; NER is broadcast data and is further divided into NER-clean, whose background is clean and noise-free, and NER-other, whose background is mixed with human noise or music.

3.1 TCC300 Corpus

The TCC300 microphone speech corpus used in the experiments was recorded jointly by National Chiao Tung University (NCTU), National Cheng Kung University (NCKU), and National Taiwan University (NTU), and is distributed by The Association for Computational Linguistics and Chinese Language Processing (ACLCLP). It consists of microphone-recorded read speech and is mainly intended for research on Taiwanese-accented Mandarin speech recognition. Details are given in Table 3. The NTU portion mainly contains words and short sentences; its texts were designed with the co-occurrence probabilities of syllables taken into account, and were recorded by 100 speakers. The NCTU and NCKU portions are long texts whose sentences were selected from the five-million-word part-of-speech-tagged corpus provided by Academia Sinica; each article contains several hundred characters and was split into 3 to 4 paragraphs of at most 231 characters each. Each of the two schools recorded 100 speakers, and the articles read by each speaker were all different. All recordings use a 16000 Hz sampling rate and 16-bit samples.

The whole TCC300 corpus is further split into training and test data at a ratio of about 9:1:
- Training data: about 24.4 hours, 284 speakers, 8633 utterances, 304780 syllables.
- Test data: about 2.4 hours, 19 speakers, 225 long-sentence utterances, 26357 syllables.

Table 3. TCC300 corpus information (school / text type / speakers / syllables / files). NTU, short sentences: male 50, 27541, 3425; female 50, 24677, 3084; total 100, 52218, 6590.

3.2 NER Corpus

The NER corpus, whose full name is NER Manual Transcription Vol1, was recorded in cooperation between National Taipei University of Technology (NTUT) and National Education Radio. Its main purpose is the large-scale transcription of the education station's programs into verbatim transcripts, in order to build a large corpus of Taiwanese-accented Mandarin; details are shown in Table 4. Most of the content consists of conversational programs, i.e. spontaneous speech; only a small part is read speech from news reports. According to (1) whether the recording was made in a recording studio or elsewhere and (2) whether there is any background music or non-speech noise, the corpus is divided into two parts: clean data (Clean, about 19.4 hours, 5106 files) and other data (Other, about 107.4 hours, 15983 files), together about 126.8 hours and 21089 files, with a 16000 Hz sampling rate, 16-bit samples, and one (mono) channel.

The verbatim transcripts were first drafted by NTUT's bilingual speech recognizer and then manually corrected and segmented, after music segments with copyright concerns were removed.

This corpus is likewise split into training and test data:
- Training data: about 111.5 hours, 18710 utterances, 1715091 syllables.
- Test data: Clean: about 1.9 hours, 549 utterances, 33660 syllables; Other: about 9.0 hours, 1322 utterances, 133746 syllables.

Table 4. NER corpus information (environment / program / code / hours / syllables / files). Clean: 創設市集 CS, 14.4, 235052, 4028; 技職最前線 JZ, 1.8, 34352, 438; 國際教育心動線 GJ, 3.2, 55057, 640; 多愛自己一點點 DA, 13.6, 212821, 2347; 科學SoEasy KX, 1.8, 23415, 208; 青年故事館 QG, 17.3, 260116, 3202.

3.3 AIShell Corpus

The AIShell corpus (Bu, Du, Na, Wu & Zheng, 2017) is an open-source speech corpus released by Beijing Shell Shell Technology Co., Ltd. The recorded content, listed in Table 5, covers 11 domains including smart home and autonomous driving; all recordings were made in a quiet indoor environment with high-performance microphones at a 44100 Hz sampling rate, later downsampled to 16000 Hz, with 16-bit samples, by 400 participants from regions of China with different accents. Speaker information is given in Tables 6 and 7.

Table 5. AIShell topics and sentence counts: smart home 5; geographic information 30; music playback commands 46; digit strings 29; TV and movie playback commands 10; finance 132; science and technology 85; sports 66; entertainment 27; news 66; English spelling 4.

Table 7. AIShell corpus information (speakers / syllables / files): male 186, 939132, 65205; female 214, 1101080, 76395; total 400, 2040212, 141600.

The gradient clipping threshold (Clipping-threshold) is 30: when a gradient exceeds this threshold, it is set to 30, which solves the problem of overly large gradient values during backpropagation.

In addition, in the DNN part, to solve the difficulty of training networks with too many layers, batch normalization (Batch normalization, BN) has been proposed (Ioffe & Szegedy, 2015): the output of each DNN layer is normalized within each mini-batch, which allows the learning rate to be raised substantially, speeds up model training, and avoids the over-fitting caused by overly deep networks.

In beam pruning, all possible paths (Hypotheses) of the current frame are generated, and paths are pruned against a beam threshold on the score: when a path's score differs from the best score by more than the beam value, that path is deleted; finally the number of states is kept under the maximum number of surviving states. Although this sacrifices a little recognition accuracy, it greatly increases decoding speed. This study sets the maximum number of surviving states to 7000 and the beam value to 15.0.

Table 9. Recognition results (Model / SER (%) / RTF): DNN 21.17, 0.04; CDNN 19.52, 0.05; LDNN 15.72, 0.10; CLDNN 15.23, 0.15.

RTF = (Seconds / Frames) × 100 (1)

ER = (S + I + D) / N × 100% (2)

5. Language Model Establishment (語言模型之建立)

Because of differences in form, sound, and meaning, some word classes in Mandarin speech recognition can be merged; as shown in Table 8, they fall roughly into three classes, namely same form with different sounds, different forms with the same sound, and different forms with different sounds. The substitution principle is based on identical meaning; the aim is to normalize synonymous words in the texts so that more words can be accommodated during word selection. After the language model is built, however, the substituted words must be mapped back.

5.2 Preprocessing of Variant Words

Chinese characters have three major elements: form, sound, and meaning. Meaning is the core of the language; form and sound both exist because of meaning. In the evolution of the writing system, some character forms became inconsistent, some characters were borrowed because none had been created, and some may even be misused; these complicated factors have left Chinese characters in a state of multiple forms, divergent sounds, and varied meanings, so present-day Chinese characters show inconsistent forms, divergent pronunciations, and broad meanings.
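The beam pruning described above (beam value 15.0, maximum of 7000 surviving states) can be sketched as follows. This is our own minimal illustration of the idea, not the actual decoder implementation; the function name and the toy hypotheses are hypothetical.

```python
# Minimal sketch of beam pruning (our illustration, not the real decoder):
# keep only hypotheses within `beam` of the best score for the current frame,
# then cap the survivors at `max_active` states.

def prune_hypotheses(hyps, beam=15.0, max_active=7000):
    """hyps: list of (state, score) pairs for the current frame;
    higher score is better."""
    if not hyps:
        return []
    best = max(score for _, score in hyps)
    # Beam pruning: delete paths whose score falls more than `beam` below best.
    survivors = [(s, sc) for s, sc in hyps if best - sc <= beam]
    # Max-active pruning: keep at most `max_active` of the best survivors.
    survivors.sort(key=lambda p: p[1], reverse=True)
    return survivors[:max_active]

hyps = [("a", 100.0), ("b", 90.0), ("c", 84.0), ("d", 99.5)]
print(prune_hypotheses(hyps, beam=15.0, max_active=2))
# "c" (16 below the best) is outside the beam; of the rest, the two best
# states survive: [('a', 100.0), ('d', 99.5)]
```

As the text notes, tightening either the beam or the active-state cap trades a little accuracy for a large gain in decoding speed.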
\u6df1\u5c64\u985e\u795e\u7d93\u7db2\u8def\u6a21\u578b\u914d\u7f6e (Deep Neural Network Model Configuration) CLDNN \u70ba\u8fd1\u5e74\u4f86\u88ab\u63d0\u51fa(Sainath, Vinyals, Senior & Sak, 2015)\u9069\u5408\u7528\u4f86\u5efa\u7acb\u8072\u5b78\u6a21\u578b\u7684 \u4e00\u7a2e\u67b6\u69cb\uff0c\u5176\u540d\u7a31\u4f86\u6e90\u70ba\u5377\u7a4d\u985e\u795e\u7d93\u7db2\u8def(CNN)\u52a0\u4e0a\u9577\u77ed\u671f\u8a18\u61b6(LSTM)\u5f8c\u518d\u63a5\u4e0a\u6df1\u5c64\u985e \u795e\u7d93\u7db2\u8def(DNN)\uff0c\u666e\u904d\u8a8d\u70ba\uff0cCNN \u80fd\u5920\u5b78\u7fd2\u7279\u5fb5\u53c3\u6578\u5728\u983b\u57df\u4e0a\u7684\u8b8a\u5316\u7a0b\u5ea6\uff0cLSTM \u5247\u64c5 \u9577\u6642\u57df\u4e0a\u7684\u6a21\u578b\u5efa\u7acb\uff0c\u6700\u5f8c DNN \u9069\u5408\u5c07\u7279\u5fb5\u6620\u5c04\u81f3\u66f4\u53ef\u5206\u96e2\u7684\u7a7a\u9593\u4e0a\u3002 \u6b64\u4e3b\u8981\u6a21\u578b\u4ea6\u4f7f\u7528 TCC300 \u4f5c\u70ba\u8a13\u7df4\u8a9e\u6599\uff0c\u7279\u5fb5\u53c3\u6578\u7684\u62bd\u53d6\u4e5f\u662f 40 \u7dad\u4e4b Fbank\uff0c\u672c \u7814\u7a76\u4f7f\u7528\u7684 LSTM \u5e36\u6709\u6620\u5c04\u5c64\u53ca\u7aba\u8996\u5b54\uff0c\u8a73\u7d30\u67b6\u69cb\u53c3\u898b\u5716 1\uff0c\u865b\u7dda\u9023\u7d50\u90e8\u5206\u5373\u70ba\u7aba\u8996\u5b54 \u4f5c\u7528\u4e4b\u9014\u5f91\uff0c\u76ee\u7684\u5728\u65bc\u8b93\u9598\u9580\u505a\u6c7a\u5b9a\u6642\u80fd\u540c\u6642\u8003\u616e\u77ed\u671f\u8a18\u61b6\u8207\u9577\u671f\u8a18\u61b6\uff0c\u800c\u6620\u5c04\u5c64\u4e4b\u76ee \u7684\u5728\u65bc\u964d\u4f4e LSTM \u8f38\u51fa\u6216\u905e\u8ff4\u7684\u795e\u7d93\u5143\u6578\u91cf\uff0c\u964d\u4f4e\u6a21\u578b\u7e3d\u53c3\u6578\u91cf\uff0c\u5e6b\u52a9\u7db2\u8def\u8a13\u7df4\u66f4\u70ba\u5feb \u901f\uff0c\u548c CNN \u5f8c\u9023\u63a5\u7684\u964d\u7dad\u5168\u9023\u63a5\u5c64\u6709\u7570\u66f2\u540c\u5de5\u4e4b\u5999\u3002 Input g 1 t c \uf02d \uf0b4 h \uf0b4 \uf0b4 Recurrent Output t c t i t f t o t m t r t y \uf073 \uf073 \uf073 t x 
\u672c\u7814\u7a76\u76ee\u7684\u65bc\u5efa\u7acb\u4e00\u4e2d\u6587\u5927\u8a5e\u5f59\u8fa8\u8b58\u7cfb\u7d71\uff0c\u56e0\u6b64\u9700\u8981\u5efa\u7acb\u8a9e\u8a00\u6a21\u578b\uff0c\u4e26\u52a0\u5165\u81f3\u7cfb\u7d71\u4e2d\uff0c \u8fa8\u8b58\u51fa\u4e2d\u6587\u8a5e\u5f59\u5e8f\u5217\u3002\u5982\u5716 2 \u6240\u793a\uff0c\u5efa\u7acb\u6d41\u7a0b\u70ba\uff1a\u5c07\u6587\u5b57\u8a9e\u6599\u7d93\u570b\u7acb\u4ea4\u901a\u5927\u5b78\u8a9e\u97f3\u8655\u7406 \u5be6\u9a57\u5ba4\u738b\u9038\u5982\u8001\u5e2b\u64b0\u5beb\u4e4b\u7e41\u9ad4\u4e2d\u6587\u65b7\u8a5e\u5668\u9032\u884c\u65b7\u8a5e\uff0c\u5f8c\u5c07\u6587\u5b57\u9032\u884c\u6b63\u898f\u5316\u3001\u79fb\u9664\u5197\u9918\u8d05 \u5b57\u3001\u53d6\u4ee3\u540c\u7fa9\u7570\u5b57\u8a5e(Variant Word, VW)\u7b49\u524d\u8655\u7406\uff0c\u63a5\u8457\u4f9d\u7167\u8a5e\u983b(Term Frequency, TF) \u53ca\u6a94\u6848\u983b\u7387(Document Frequency, DF)\u9032\u884c\u9078\u8a5e\uff0c\u4e00\u822c\u4f86\u8aaa\uff0c\u8a9e\u97f3\u8fa8\u8b58\u7cfb\u7d71\u4e4b\u8a9e\u8a00\u6a21\u578b\u9700 \u8981 TF \u9ad8\u53ca DF \u4ea6\u9ad8\u4e4b\u8a5e\u5f59\uff0c\u672c\u7814\u7a76\u9078\u64c7\u4e86\u516b\u842c\u8a5e\u3001\u5341\u842c\u8a5e\u53ca\u5341\u4e8c\u842c\u8a5e\u5206\u5225\u5efa\u7acb\u4e09\u500b 3-gram \u8a9e\u8a00\u6a21\u578b\uff0c\u800c\u6700\u5f8c\u9808\u5c07\u524d\u8655\u7406\u7f6e\u63db\u7684\u540c\u7fa9\u7570\u5b57\u8a5e\u7f6e\u63db\u56de\u4f86\uff0c\u8a73\u7d30\u5167\u5bb9\u5728\u4e94\u4e4b(\u4e8c)\u7ae0\u7bc0\u89e3 \u8aaa\uff0c\u6700\u5f8c\u4ee5\u6709\u9650\u72c0\u614b\u8f49\u63db\u6a5f\u8868\u793a\u6b64\u8a9e\u8a00\u6a21\u578b\u3002 \u65b7\u8a5e \u6587\u5b57\u8a9e\uf9be TFIDF\u9078\u8a5e \u5efa\uf9f7 \u8a9e\u8a00\u6a21\u578b VW\u7f6e\u63db G.fst L.fst \u6587\u5b57\u524d\u8655\uf9e4 \u5716 2. 
\u4e2d\u6587\u8a9e\u8a00\u6a21\u578b\u5efa\u7acb\u6d41\u7a0b\u5716 \u8b58\u7aef\u6703\u7522\u751f\u4e00\u500b\u72c0\u6cc1\uff1a\u7570\u97f3\u985e\u7684\u540c\u7fa9\u5b57\u7121\u6cd5\u88ab\u641c\u5c0b\u5230\uff0c\u5982\u7bc4\u4f8b\u4e2d\u7684\u300c\u79ae\u62dc\u4e00\u300d\u88ab\u7f6e\u63db\u6210 \u300c\u9031\u4e00\u300d\uff0c\u56e0\u6b64\u5728\u8a9e\u8a00\u6a21\u578b\u4e2d\u7121\u6cd5\u627e\u5230\u300c\u79ae\u62dc\u4e00\u300d\u9019\u500b\u8a5e\u5f59\uff0c\u56e0\u6b64\u6211\u5011\u5728\u8a9e\u8a00\u6a21\u578b\u5efa\u7f6e \u7684\u6700\u5f8c\u4e00\u6b65\uff0c\u9700\u8981\u8655\u7406\u4e0d\u540c\u767c\u97f3\u4e4b\u540c\u7fa9\u7570\u5b57\u8a5e\u7684\u7f6e\u63db\uff0c\u5c07\u300c\u9031\u4e00\u300d\u5c55\u958b\u6210\u300c\u9031\u4e00\u300d\u3001\u300c\u661f \u671f\u4e00\u300d\u53ca\u300c\u79ae\u62dc\u4e00\u300d\u3002\u672c\u7814\u7a76\u4f7f\u7528\u4e4b\u540c\u7fa9\u7570\u5b57\u8a5e\u8868(variant word table)\u70ba 4261 \u8a5e\u3002 \u8868 8. \u5f62\u97f3\u7fa9\u5206\u5408\u8a5e\u7bc4\u4f8b [Table 8. 
Example of variant words] \u5f62\u97f3\u7fa9\u5206\u5408\u8a5e\u985e\u578b \u7f6e\u63db\u524d\u6587\u5b57 \u7f6e\u63db\u5f8c\u6587\u5b57 \u7238 \u53e6\u5916\uff0c\u672c\u5be6\u9a57\u4ea6\u4f7f\u7528\u93c8\u5f0f\u6a21\u578b(Chain model) (Povey et al., 2016)\u5efa\u69cb\u8072\u5b78\u6a21\u578b\uff0c\u4e00\u822c 6.1 \u5404 \u5f0f \u985e \u795e \u7d93 \u7db2 \u8def \u8072 \u5b78 \u6a21 \u578b \u8fa8 \u8b58 \u7d50 \u679c (Various Types of Neural \u8072\u5b78\u6a21\u578b\u7684\u8a13\u7df4\u4f7f\u7528\u7684\u662f\u6700\u5927\u5316\u76f8\u4f3c\u5ea6(Maximum Likelihood, ML)\uff0c\u7528\u65bc\u6700\u5927\u5316\u6a21\u578b\u53ca\u5176 Network Acoustic Model Recognition Results) \u7279\u5fb5\u53c3\u6578\u4e4b\u76f8\u4f3c\u5ea6\uff1b\u93c8\u5f0f\u6a21\u578b\u5247\u662f\u4f7f\u7528\u6700\u5927\u4ea4\u4e92\u8cc7\u8a0a\u6cd5\u5247(Maximum Mutual Information, \u70ba\u4e86\u63a2\u8a0e\u905e\u8ff4\u5f0f\u985e\u795e\u7d93\u7db2\u8def\u5c0d\u65bc\u8072\u5b78\u6a21\u578b\u4e4b\u5f71\u97ff\uff0c\u672c\u5be6\u9a57\u8a2d\u8a08\u56db\u7d44\u6a21\u578b\uff0c\u4f7f\u7528\u7684\u8a13\u7df4\u8a9e MMI)\u9032\u884c\u8a13\u7df4\u5982\u5f0f(3)\uff0c\u5176\u4e2d ( ) P W \u8868\u793a\u7d66\u5b9a\u9010\u5b57\u6587\u672c(Transcription)\u4e2d\u5e8f\u5217W \u4e4b\u8a9e\u8a00\u6a21\u578b \u6599\u7686\u70ba TCC300\uff0cCDNN \u70ba\u4e00\u822c\u5377\u7a4d\u985e\u795e\u7d93\u7db2\u8def(CNN)\u7d50\u5408\u6df1\u5c64\u985e\u795e\u7d93\u7db2\u8def(DNN)\uff0c\u8f38\u5165 \u6a5f\u7387\uff0c\u800c\u4ea4\u4e92\u8cc7\u8a0a\u53ef\u4ee5\u62c6\u6210\u5169\u9805\u76f8\u6e1b\uff0c num M \u8868\u793a\u53c3\u8003\u6587\u672c\u5e8f\u5217\uff0c den M \u8868\u793a\u6240\u6709\u53ef\u80fd\u4e4b \u4e4b\u9593\u5f7c\u6b64\u7368\u7acb\uff0c\u4e26\u6c92\u6709\u8a18\u61b6\u7279\u6027\uff1bLDNN \u70ba\u9577\u77ed\u671f\u8a18\u61b6(LSTM)\u7d50\u5408\u6df1\u5c64\u985e\u795e\u7d93\u7db2\u8def\uff0c\u591a \u6587\u672c\u5e8f\u5217\uff0c\u6700\u5927\u5316 MMI F \u8868\u793a\u8b93\u53c3\u8003\u6587\u672c\u7684\u8def\u5f91\u6a5f\u7387 ( | ) r 
r P O W \u5728\u6240\u6709\u8def\u5f91\u4e2d\u6700\u70ba\u7a81\u51fa\uff0c \u4e86\u6642\u9593\u8ef8\u4e4b\u8cc7\u8a0a\uff0c\u904e\u53bb\u7684\u96b1\u85cf\u5c64\u72c0\u614b\u88ab\u4fdd\u7559\uff0c\u4e14\u900f\u904e\u9598\u9580\u7be9\u9078\u63a7\u5236\uff0c\u907f\u514d\u767c\u751f\u904e\u64ec\u73fe\u8c61\uff1b \u4f46\u662f\u4e00\u822c\u8a9e\u8a00\u6a21\u578b\u7686\u5efa\u7acb\u5728\u8a5e\u5f59(word)\u4e0a\uff0c\u9019\u6703\u5c0e\u81f4\u8a13\u7df4\u904e\u7a0b\u6548\u7387\u4e0d\u5f70\uff0c\u56e0\u6b64\u93c8\u5f0f\u6a21\u578b \u800c CLDNN \u5247\u7d50\u5408\u4ee5\u4e0a\u4e09\u7a2e\u985e\u795e\u7d93\u7db2\u8def\uff0c\u8a73\u7d30\u67b6\u69cb\u5982\u5716 3 \u6240\u793a\u3002 \u5728\u8a13\u7df4\u4e0a\uff0c\u6703\u5148\u4ee5\u97f3\u7d20(phone)\u70ba\u55ae\u4f4d\uff0c\u5efa\u7acb\u4e00\u500b 4-gram \u4e4b\u8a9e\u8a00\u6a21\u578b\uff0c\u4f5c\u70ba\u8a13\u7df4\u6642\u53c3\u8003\u7528\u3002 \u7238\u7238 \u53e6\u5916\u53c3\u8003\u5716 4 \u53ca\u5716 5\uff0c\u93c8\u5f0f\u6a21\u578b\u4f7f\u7528\u964d\u4f4e 3 \u500d\u4e4b\u97f3\u6846\u901f\u7387 (Sak, Senior, Rao & Beaufays, \u540c\u5f62\u7570\u97f3 \u5abd \u5abd\u5abd \u7570\u5f62\u540c\u97f3 \u624b\u8868 \u624b\u9336 \u74e9 \u5343\u74e6 \u7570\u5f62\u7570\u97f3 \u79ae\u62dc\u4e00 \u9031\u4e00 \u661f\u671f\u4e00 \u9031\u4e00 6. 
\u5be6\u9a57\u7d50\u679c\u5206\u6790\u8207\u8a0e\u8ad6 \u672c\u7ae0\u7bc0\u5c07\u9032\u884c\u5be6\u9a57\u7d50\u679c\u7684\u5206\u6790\u8207\u63a2\u8a0e\uff0c\u5176\u4e2d\u5305\u542b\u4f7f\u7528\u7121\u6587\u6cd5(Free-grammar)\u4e4b\u8a9e\u8a00\u6a21\u578b\u6e2c \u8a66\u97f3\u7bc0\u932f\u8aa4\u7387(Syllable Error Rate, SER)\uff0c\u4ee5\u53ca\u52a0\u5165\u4e0d\u540c\u8a5e\u5178\u5927\u5c0f\u4e4b\u8a9e\u8a00\u6a21\u578b\u6e2c\u8a66\u8a5e\u932f\u8aa4 \u7387(Word Error Rate, WER) \u8207\u5176\u5373\u6642\u4fc2\u6578(Real-Time Factor, RTF)\u3002 CNN1 CNN2 4-1-4 splice Subsampling 2015)\uff0c\u5373\u4e00\u6b21\u89c0\u5bdf 30ms \u4e4b\u97f3\u6846\uff0c\u4ee5\u53ca\u66f4\u70ba\u7c21\u55ae\u7684 HMM \u62d3\u6a38\u5716\uff0c\u4e00\u500b\u97f3\u7d20(phone)\u50c5\u7528 \u4e00\u500b HMM \u63cf\u8ff0\uff0c\u56e0\u6b64\u93c8\u5f0f\u6a21\u578b\u5728\u89e3\u78bc\u6642\u6bd4\u4e00\u822c\u985e\u795e\u7d93\u7db2\u8def\u6a21\u578b\u52a0\u901f\u4e09\u500d\u5de6\u53f3\uff0c\u5be6\u9a57\u7d50 \u679c\u5982\u8868 10 \u6240\u793a\uff0cChain-CLDNN \u5728\u97f3\u7bc0\u932f\u8aa4\u7387\u4ee5\u53ca RTF \u90fd\u8868\u73fe\u8f03 CLDNN \u6a21\u578b\u4f73\u3002 Dim Reduction 1 ( | ) ( ) log ( | ) ( ) log ( | ) log ( | ) R r r r MMI r r W num den P O W P W F P O W P W P O M P O M \uf03d \uf03d \uf03d \uf02d \uf0e5 \uf0e5 \uf020\uf020\uf020\uf020\uf020 (3) 40 Fbank LSTM1 LSTM2 DNN1 LSTM3 DNN2 Output 2-1-2 splice + LDA frame 2
", "html": null, "text": "A Study on Mandarin Speech Recognition Using Long Short-Term Memory Neural Network

(Mohamed, 2014) has taken its place. An acoustic model built with DNNs must be trained on a large amount of speech data before it can recognize different voicings well. Since speech is a time-sequential signal, this study adds recurrent neural networks for acoustic model training and examines the recognition results. The language model also plays a very important role in a speech recognition system. As noted above, the transcription of this corpus has been manually corrected, with an accuracy above 95%. Based on the text corpus owned by our laboratory, lexica of 80K, 100K and 120K words were selected to build three tri-gram language models, which are analyzed and discussed on test data from three different environments: TCC300, NER-clean and NER-other. TCC300 is ordinary read speech without noise, NER-clean is spontaneous speech at a fast speaking rate without noise, and NER-other is spontaneous speech at a fast speaking rate with background noise.
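The lexicon selection that precedes tri-gram training, picking words with both high term frequency and high document frequency (Section 5), can be sketched as follows. The ranking score used here (TF weighted by DF) is an assumption for illustration, not the paper's exact criterion.

```python
from collections import Counter

def select_vocabulary(documents, vocab_size):
    """Keep words that occur often overall (high TF) and in many
    documents (high DF). `documents` is a list of pre-segmented
    word lists, as produced by a Chinese word segmenter."""
    tf = Counter()
    df = Counter()
    for words in documents:
        tf.update(words)        # term frequency over the whole corpus
        df.update(set(words))   # document frequency: one count per document
    score = {w: tf[w] * df[w] for w in tf}
    # rank by descending score, breaking ties deterministically by word
    ranked = sorted(score, key=lambda w: (-score[w], w))
    return ranked[:vocab_size]

docs = [["今天", "天氣", "很", "好"],
        ["天氣", "預報", "說", "好"],
        ["今天", "股市", "上漲"]]
vocab = select_vocabulary(docs, 3)
```

With real data, `vocab_size` would be set to 80000, 100000 or 120000 to produce the three lexica, and each lexicon would then feed an n-gram toolkit to train the corresponding tri-gram model.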
\u672c\u5be6\u9a57\u9032\u4e00\u6b65\u5c07\u6b64\u8a9e\u6599\u5eab\u5206\u70ba\u8a13\u7df4\u8a9e\u6599\u53ca\u6e2c\u8a66\u8a9e\u6599\uff1a \uf0b7 \u8a13\u7df4\u8a9e\u6599\uff1a\u7d04\u70ba 162.4 \u5c0f\u6642\uff0c\u5171 129341 \u53e5\u767c\u97f3\uff0c1862171 \u500b\u97f3\u7bc0\u6578\u3002 \uf0b7 \u6e2c\u8a66\u8a9e\u6599\uff1a\u7d04\u70ba 16.6 \u5c0f\u6642\uff0c\u5171 12259 \u53e5\u767c\u97f3\uff0c178041 \u500b\u97f3\u7bc0\u6578\u3002 \u8868" }, "TABREF1": { "num": null, "type_str": "table", "content": "
\u4f7f\u7528\u9577\u77ed\u671f\u8a18\u61b6\u985e\u795e\u7d93\u7db2\u8def\u5efa\u69cb\u4e2d\u6587\u8a9e\u97f3\u8fa8\u8b58\u5668\u4e4b\u7814\u7a76 \u4f7f\u7528\u9577\u77ed\u671f\u8a18\u61b6\u985e\u795e\u7d93\u7db2\u8def\u5efa\u69cb\u4e2d\u6587\u8a9e\u97f3\u8fa8\u8b58\u5668\u4e4b\u7814\u7a7613 \u8cf4\u5efa\u5b8f\u8207\u738b\u9038\u5982 15
Table 10.

Model          SER (%)   RTF
CLDNN           15.23    0.15
Chain-CLDNN     13.66    0.05

6.2 Impact of Increasing Training Corpus on Recognition Rates

In deep learning, as in machine learning generally, model over-fitting is a frequent problem; if it can be overcome, deeper models can be trained. Besides the batch normalization used in this study, another remedy is simply to add more training data. When data are limited, the training data can be enlarged by data transformation techniques, a concept already realized in image processing (Krizhevsky, Sutskever & Hinton, 2012), where new images are produced by rotation, flipping, zooming, shifting, rescaling and similar operations.

Similar methods can be applied to speech: altering the pitch, tempo or speed of a recording produces artificial data that expand the corpus. Besides the TCC300, NER and AIShell corpora, this study used the method above to generate speed perturbation data at factors 1.1 and 0.9 and added them to the training data.

The results are shown in Table 11. First, for the ordinary CLDNN model, adding the AIShell corpus increases the training data from the original 24 hours to 186.4 hours. Although the AIShell corpus comes from accents all over China, the relative improvement in syllable recognition is still as high as about 15.5%. Analyzed per speaker, as shown in Figure 6, the reduction in syllable error rate comes mainly from speakers whose error rates were originally high, with little improvement for low-error-rate speakers; in other words, the recognition system becomes more robust.

Table 11.

Model    Training data      SER (%)
CLDNN    TCC300              15.23
CLDNN    TCC300+AIShell      12.87

[Figure 6. Syllable error rate of TCC300 female/male testers]

Next, the NER spontaneous speech with Taiwanese accent was added, increasing the training data to 297.9 hours, and with the speed perturbation method above to 900.7 hours; CLDNN chain models were trained on each, with results shown in Table 12 and Figure 7.

Table 12. Comparison of Chain-CLDNN models using different training corpora

Model         Training data             SER (%)
Chain-CLDNN   TCC300                     13.66
Chain-CLDNN   TCC300+AIShell             11.97
Chain-CLDNN   TCC300+AIShell_sp          11.49
Chain-CLDNN   TCC300+AIShell+NER_sp       8.92

[Figure 7. The effect of the amount of training corpus on the syllable recognition rate]

6.3 Adding Language Models and the Impact of Different Environments on Recognition Rates

In this section, the Chain-CLDNN model built from 900.7 hours of training data is chosen as the acoustic model. The test data are 1. TCC300 (read speaking rate), 2. NER-clean (spontaneous speech without background noise) and 3. NER-other (spontaneous speech with noise); the syllable recognition results are shown in Table 13.

Table 13. The syllable error rate of the Chain-CLDNN model for each test set

Model                        Test data    SER (%)
Chain-CLDNN [TCCAINER-sp]    TCC300         8.92
                             NER-clean     16.89
                             NER-other     22.14

Three tri-gram language models were then added, with lexicon sizes of 80K, 100K and 120K words. For each we measure the best attainable recognition result (Oracle), i.e. the word error rate of the language model when the phone sequence output by the acoustic model is entirely correct, and the EDO (Error Due to OOVs), the error rate caused by out-of-vocabulary words; on average, one OOV affects 2.103 words. The RTFs on the three test corpora are 0.27, 0.48 and 0.59. Finally, the word error rate (WER) computed by decoding with the acoustic model is shown in Tables 14 to 16.

Table 14. 80K word LM recognition results

Model                        Test data    WER (%)   Oracle (%)   EDO (%)
Chain-CLDNN [TCCAINER-sp]    TCC300        7.73       6.74        5.85
                             NER-clean    24.95       9.39        2.48
                             NER-other    31.92      11.92        3.91

Table 15. 100K word LM recognition results

Model                        Test data    WER (%)   Oracle (%)   EDO (%)
Chain-CLDNN [TCCAINER-sp]    TCC300        7.12       6.06        5.19
                             NER-clean    24.80       9.27        2.25
                             NER-other    31.69      11.57        3.26

Table 16. 120K word LM recognition results

Model                        Test data    WER (%)   Oracle (%)   EDO (%)
Chain-CLDNN [TCCAINER-sp]    TCC300        6.56       5.34        4.52
                             NER-clean    24.72       9.05        2.02
                             NER-other    31.61      11.42        2.92

However, our laboratory's text corpus is largely drawn from news articles, whose domain leans toward the TCC300 test set, whereas the NER test sets consist mostly of talk-show programs. This experiment therefore adapts the language model with the transcripts of the NER training data, as in Eq. (4); as the results in Table 17 show, the WER improves substantially.

LM_adapt = 0.3 LM_ori + 0.7 LM_ner    (4)

Table 17. 120K word adaptation LM recognition results

Model                        Test data    WER (%)   Oracle (%)   EDO (%)
Chain-CLDNN [TCCAINER-sp]    TCC300        7.79       5.87        4.52
                             NER-clean    15.12       4.00        2.02
                             NER-other    21.66       4.74        2.92

7. Conclusion and Future Prospects

This paper uses the Kaldi speech recognition toolkit to implement an acoustic model (CLDNN) that combines convolutional neural networks, long short-term memory and deep neural networks. Comparison of the various neural network models confirms that long short-term memory contributes greatly to acoustic modeling, and that the convolutional network's feature learning helps the overall model. Adding large amounts of training data from different sources (NER, AIShell), together with data augmentation, improves the robustness of the model. Finally, tri-gram language models trained on our laboratory's 440-million-word text corpus were used to build the Mandarin large-vocabulary speech recognition system. The experimental results show that the system's word recognition rate is very closely tied to the language model; that is, the domain dependence of the test data is quite high.
\u672c\u5be6\u9a57\u5efa\u69cb\u4e4b\u4e2d\u6587\u8fa8\u8b58\u7cfb\u7d71\u96d6\u7136\u5728\u6717\u8b80\u8a9e\u901f\u53ca\u81ea\u767c\u6027\u4e14\u7121\u566a\u97f3\u74b0\u5883\u4e0b\u7684\u8fa8\u8b58\u7387
(6.56%\u300115.12%)\u6709\u4e0d\u932f\u7684\u8868\u73fe\uff0c\u4f46\u662f\u5728\u81ea\u767c\u6027\u4e14\u74b0\u5883\u96dc\u8a0a\u9ad8\u7684\u74b0\u5883\u4e0b\uff0c\u8a5e\u932f\u8aa4\u7387\u4ecd\u9ad8\u9054
21.66%\uff0c\u56e0\u6b64\u5728\u8072\u5b78\u6a21\u578b\u65b9\u9762\u5982\u4f55\u6297\u566a\uff0c\u4ea6\u662f\u4e00\u500b\u7814\u7a76\u8ab2\u984c\uff0c\u53e6\u5916\u8a31\u591a\u7814\u7a76\u52a0\u5165 I-vector
", "html": null, "text": "" } } } }