{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:27:32.667436Z" }, "title": "Computational Linguistics & Chinese Language Processing Aims and Scope", "authors": [ { "first": "Jhih-Jie", "middle": [], "last": "Chen", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Tsing-Hua University", "location": { "settlement": "Hsinchu", "country": "Taiwan" } }, "email": "" }, { "first": "Hai-Lun", "middle": [], "last": "Tu", "suffix": "", "affiliation": { "laboratory": "", "institution": "Fu Jen Catholic University", "location": { "addrLine": "New Taipei", "country": "Taiwan" } }, "email": "" }, { "first": "Ching-Yu", "middle": [], "last": "Yang", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Tsing-Hua University", "location": { "settlement": "Hsinchu", "country": "Taiwan" } }, "email": "" }, { "first": "Chiao-Wen", "middle": [], "last": "Li", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Tsing-Hua University", "location": { "settlement": "Hsinchu", "country": "Taiwan" } }, "email": "chiaowen@nlplab.cc" }, { "first": "Jason", "middle": [ "S" ], "last": "Chang", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Tsing-Hua University", "location": { "settlement": "Hsinchu", "country": "Taiwan" } }, "email": "" }, { "first": "Chia-Cheng", "middle": [], "last": "Lee", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Tsing-Hua University", "location": { "settlement": "Hsinchu", "country": "Taiwan" } }, "email": "" }, { "first": "Li-Mei", "middle": [], "last": "Chen", "suffix": "", "affiliation": { "laboratory": "", "institution": "Fu Jen Catholic University", "location": { "addrLine": "New Taipei", "country": "Taiwan" } }, "email": "" }, { "first": "D", "middle": [], "last": "Kimbrough Oller", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Tsing-Hua University", "location": { "settlement": "Hsinchu", "country": "Taiwan" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We present a method for Chinese spelling check that automatically learns to correct a sentence with potential spelling errors. In our approach, a character-based neural machine translation (NMT) model is trained to translate the potentially misspelled sentence into correct one, using right-and-wrong sentence pairs from newspaper edit logs and artificially generated data. The method involves extracting sentences contain edit of spelling correction from edit logs, using commonly confused right-and-wrong word pairs to generate artificial right-and-wrong sentence pairs in order to expand our training data , and training the NMT model. The evaluation on the United Daily News (UDN) Edit Logs and SIGHAN-7 Shared Task shows that adding artificial error data can significantly improve the performance of Chinese spelling check system.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "We present a method for Chinese spelling check that automatically learns to correct a sentence with potential spelling errors. In our approach, a character-based neural machine translation (NMT) model is trained to translate the potentially misspelled sentence into correct one, using right-and-wrong sentence pairs from newspaper edit logs and artificially generated data. 
The method involves extracting sentences containing spelling corrections from edit logs, using commonly confused right-and-wrong word pairs to generate artificial right-and-wrong sentence pairs that expand our training data, and training the NMT model. The evaluation on the United Daily News (UDN) Edit Logs and the SIGHAN-7 Shared Task shows that adding artificial error data can significantly improve the performance of a Chinese spelling check system.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Spelling check is a common yet important task in natural language processing. It plays an important role in a wide range of applications such as word processors, assisted writing systems, and search engines. For example, a search engine without spelling check is not user-friendly, while an assisted writing system must perform spelling check as a minimal requirement. Web search engines such as Google (www.google.com) and Bing (www.bing.com) routinely suggest corrections for misspelled queries. However, training an effective spelling check model requires a large amount of error-annotated data, which is difficult to obtain. One solution to the lack of training data is to create artificial data for training. Research on artificial error generation for English has shown great potential for improving the underlying models for writing error correction (Felice & Yuan, 2014; Rei, Felice, Yuan, & Briscoe, 2017) . In other words, by generating artificial errors to expand the data, we might have a chance to make spelling check models better and stronger. However, very few works have focused on generating artificial errors for Chinese.", "cite_spans": [ { "start": 651, "end": 672, "text": "(Felice & Yuan, 2014;", "ref_id": null }, { "start": 673, "end": 708, "text": "Rei, Felice, Yuan, & Briscoe, 2017)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "In this paper, we present AccuSpell, a system that automatically learns to generate the corrected sentence for a potentially misspelled sentence using a neural machine translation (NMT) model. The system is built on a new dataset consisting of the edit logs of journalists at the United Daily News (UDN). Moreover, we collected a number of confusion sets for generating artificial errors to augment the training data. The evaluation on the UDN Edit Logs and the SIGHAN-7 Shared Task shows that adding artificial error data can significantly improve the performance of a Chinese spelling check system. The model is deployed on the Web; an example in which AccuSpell checks the sentence \"今晚月色很美，我想小灼一杯。\" ('The moon is so beautiful tonight, and I want a drink.') is shown in Figure 1 . AccuSpell determines that \"今晚月色很美，我想小酌一杯。\" is the most probable corrected sentence. AccuSpell learns how to effectively correct a given sentence during training by using more data, including real edit logs and artificially generated data. We describe how we create the artificial data and the training process in detail in Section 3.", "cite_spans": [], "ref_spans": [ { "start": 759, "end": 767, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1." 
}, { "text": "\u5f88\u7f8e\uff0c\u6211\u60f3\u5c0f\u707c\u4e00\u676f\u3002\" ('The moon is so beautiful tonight, and I want a drink.') At run-time, AccuSpell starts with a sentence or paragraph submitted by the user (e.g., \"\u4eca\u665a\u6708\u8272\u5f88\u7f8e\uff0c\u6211\u60f3\u5c0f\u707c\u4e00\u676f\u3002\"), which was first divided into clauses. Each clause then is splitted into Chinese characters before being fed to the NMT model. Finally, the model outputs an n-best list of sentences. In our prototype, AccuSpell returns the best sentence to the user directly (see Figure 1) ; alternatively, the best sentence returned by AccuSpell can be passed on to other applications such as automatic essay rater and assisted writing systems.", "cite_spans": [], "ref_spans": [ { "start": 439, "end": 448, "text": "Figure 1)", "ref_id": null } ], "eq_spans": [], "section": "Figure 1. An example the Web version of AccuSpell searches for input \"\u4eca\u665a\u6708\u8272", "sec_num": null }, { "text": "The rest of the article is organized as follows. We review the related work in the next section. Then we describe how to extract the misspelled sentences from newspaper edit logs and how to generate artificial sentences with typos in Section 3. We also present our method for automatically learning to correct typos in a given sentence. Section 4 describes the resources and datasets we used in the experiment. In our evaluation, over two set of test data, we compare the performance of several models trained on both real and artificial data with the model trained on only real data in Section 5. Finally, we summarize and point out the future work in Section 6.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 1. An example the Web version of AccuSpell searches for input \"\u4eca\u665a\u6708\u8272", "sec_num": null }, { "text": "Error Correction has been an area of active research, which involves Grammatical Error Correction (GEC) and Spelling Error Correction (SEC). Recently, researchers have begun applying neural machine translation models to both GEC and SEC, and gained significant improvement (e.g., Yuan & Briscoe, 2016; Xie, Avati, Arivazhagan, Jurafsky, & Ng, 2016) . However, compared to English, relatively little work has been done on Chinese error correction. In our work, we address the spelling error correction task, that focuses on generating corrections related to typos in Chinese text written by native speakers.", "cite_spans": [ { "start": 280, "end": 301, "text": "Yuan & Briscoe, 2016;", "ref_id": null }, { "start": 302, "end": 348, "text": "Xie, Avati, Arivazhagan, Jurafsky, & Ng, 2016)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2." }, { "text": "Early work on Chinese spelling check typically uses rule-based and statistical approaches. Rule-based approaches usually use dictionary to identify typos and confusion set to find possible corrections, while statistical methods use the noisy channel model to find candidates of correction for a typo and language model to calculate the likelihood of the corrected sentences. Chang (1995) proposed an approach that combines rule-based method and statistical method to automatically correct Chinese spelling errors. The approach involves confusing character substitution mechanism and bigram language model. 
They used a confusion set to replace each character in the given sentence with its corresponding confusing characters one by one, and used a bigram language model built from a newspaper corpus to score all the modified sentences in an attempt to find the best corrected sentence. Zhang, Huang, Zhou, and Pan (2000) pointed out that Chang (1995) 's method can only address character substitution errors; other kinds of errors, such as character deletion and insertion, cannot be handled. They proposed an approach using confusing word substitution and a trigram language model to extend the method proposed by Chang (1995) .", "cite_spans": [ { "start": 375, "end": 387, "text": "Chang (1995)", "ref_id": "BIBREF1" }, { "start": 881, "end": 915, "text": "Zhang, Huang, Zhou, and Pan (2000)", "ref_id": null }, { "start": 933, "end": 945, "text": "Chang (1995)", "ref_id": "BIBREF1" }, { "start": 1206, "end": 1218, "text": "Chang (1995)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2." }, { "text": "In recent years, Statistical Machine Translation (SMT) has been applied to Chinese spelling check. Wu, Chen, Yang, Ku and Liu (2010) presented a system using a new error model and a common error template generation method to detect and correct Chinese character errors, which can reduce the false alarm rate significantly. The idea of the error model is adopted from the noisy channel model, a framework of SMT that is used in many NLP tasks such as spelling check and machine translation. Chiu, Wu and Chang (2013) proposed a data-driven method that detects and corrects Chinese errors based on the phrasal statistical machine translation framework. They used word segmentation and a dictionary to detect possible spelling errors, and corrected the errors using an SMT model built from a large corpus.", "cite_spans": [ { "start": 103, "end": 132, "text": "Chen, Yang, Ku and Liu (2010)", "ref_id": null }, { "start": 495, "end": 507, "text": "Chang (2013)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2." }, { "text": "More recently, Neural Machine Translation (NMT) has been adopted for the error correction task and has achieved state-of-the-art performance. Yuan and Briscoe (2016) presented the first NMT model for grammatical error correction of English sentences and proposed a two-step approach to handle the rare word problem in NMT, from which word-based NMT models usually suffer. A neural network-based approach using a character-based model for language correction was then proposed by Xie et al. (2016) to avoid the problem of out-of-vocabulary words. Chollampatt and Ng (2018) proposed a multilayer convolutional encoder-decoder neural network to correct grammatical, orthographic, and collocation errors. Until now, most work on error correction with NMT models has aimed at grammatical errors in English text. In contrast, we focus on correcting Chinese spelling errors.", "cite_spans": [ { "start": 137, "end": 160, "text": "Yuan and Briscoe (2016)", "ref_id": null }, { "start": 494, "end": 511, "text": "Xie et al. (2016)", "ref_id": null }, { "start": 561, "end": 586, "text": "Chollampatt and Ng (2018)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2." }, { "text": "Building an error correction system using machine learning techniques typically requires a considerable amount of error-annotated data. 
Unfortunately, the limited availability of error-annotated data is holding back progress in the area of automatic error correction. Felice and Yuan (2014) presented a method that generates artificial errors for correcting grammatical mistakes made by learners of English as a second language. They were the first to use linguistic information such as part-of-speech tags to refine the contexts in which errors occur and to replicate those errors in native error-free text, although the method was restricted to five error types. Rei et al. (2017) investigated two alternative approaches for artificially generating all types of writing errors. They extracted error patterns from an annotated corpus and transplanted them into error-free text. In addition, they built a phrase-based SMT error generator to translate grammatically correct text into incorrect text.", "cite_spans": [ { "start": 638, "end": 655, "text": "Rei et al. (2017)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2." }, { "text": "In a study closer to our work, Gu and Lang (2017) applied a sequence-to-sequence (seq2seq) model to construct a word-based Chinese spelling error corrector. They established their own error corpus for training and evaluation by transplanting errors into an error-free news corpus. Compared with traditional methods, their model can correct errors more effectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2." }, { "text": "In contrast to previous research on Chinese spelling check, we present a system that uses newspaper edit logs to train an NMT model for correcting typos in Chinese text. We also propose a method to generate artificial error data to enhance the NMT model. Additionally, to avoid the rare word problem, our NMT model is trained at the character level. The experimental results show that our model achieves significantly better performance, especially at an extremely low false alarm rate.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2." }, { "text": "Submitting a misspelled sentence (e.g., \"今晚月色很美，我想小灼一杯。\") to a spelling check system with limited training data often does not work very well. Spelling check systems are typically trained on data of limited size and scope. Unfortunately, it is difficult to obtain a sufficiently large training set that covers the most common errors, corrections, and contexts. When encountering new and unseen errors and contexts, these systems might not be able to correct such errors. To develop a more effective spelling check system, a promising approach is to automatically generate artificial errors in presumably correct sentences to expand the training data, enabling the system to cope with a wider variety of errors and contexts. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "3." }, { "text": "We focus on correcting spelling errors in a given sentence by formulating Chinese spelling check as a machine translation problem. A sentence with typos is treated as the source sentence, which is translated into a target sentence with the errors corrected. The most plausible target sentence predicted by a neural machine translation model is then returned as the output of the system. 
The returned sentence can be viewed by users directly as a suggestion for correcting a misspelled sentence, or passed on to other applications such as automatic essay raters and assisted writing systems. Thus, it is important that as many of the misspelled characters in a given sentence as possible be corrected. At the same time, the system should avoid making false corrections. Therefore, our goal is to return a sentence with most spelling errors corrected, while keeping false alarms reasonably low. We now formally state the problem that we are addressing.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem Statement", "sec_num": "3.1" }, { "text": "We are given a possibly misspelled sentence X with n characters x_1, x_2, ..., x_n. Our goal is to return the correctly spelled sentence Y with m characters y_1, y_2, ..., y_m. For this, we prepare a dataset of right-and-wrong sentence pairs in order to train a neural machine translation (NMT) model. The sentences come from real edit logs and artificially generated data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem Statement:", "sec_num": null }, { "text": "In the rest of this section, we describe our solution to this problem. First, we describe the process of automatically learning to correct misspelled sentences in Section 3.2. More specifically, we describe the preprocessing of the edit logs in Section 3.2.1 and how to artificially generate similar sentences with edits in Section 3.2.2. We then describe the process of training the NMT model in Section 3.2.3. Finally, we show how AccuSpell corrects a given sentence at run-time by applying the NMT model in Section 3.3.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem Statement:", "sec_num": null }, { "text": "We train a neural machine translation (NMT) model that translates a misspelled sentence into a correct one, using right-and-wrong sentence pairs from edit logs and artificial data. In this training process, we first extract the sentences with spelling errors from the edit logs (Section 3.2.1) and generate artificial misspelled sentences from a set of error-free sentences (Section 3.2.2). We then use these data to train the NMT model (Section 3.2.3).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Learning to Correct Misspelled Sentence", "sec_num": "3.2" }, { "text": "In the first stage of the training process, we extract a set of sentences with spelling errors annotated with simple edit tags (i.e., [- -] for deletion and {+ +} for insertion). For example, the sentence \"希望未來主要島嶼都有完善的[-馬-]{+碼+}頭，\" ('Hope that the main islands will have perfect docks in the future.') 
contains the edit tags \"[-馬-]{+碼+}\", which indicate that the original character \"馬\" (pronounced 'ma') was replaced with \"碼\" (pronounced 'ma').", "cite_spans": [ { "start": 130, "end": 136, "text": "[-, -]", "ref_id": null }, { "start": 223, "end": 228, "text": "[-馬-]", "ref_id": null }, { "start": 331, "end": 336, "text": "[-馬-]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Extracting Misspelled Sentences from Edit Logs", "sec_num": "3.2.1" }, { "text": "The input to this stage is a set of edit logs in HTML format, containing the name of the editor, the edit action (1 for insertion and 3 for deletion), the target content, and some CSS attributes, as shown in Figure 2 . We first convert the HTML files to simple text files by removing the HTML tags and using the simple edit tags \"{+ +}\" and \"[- -]\" to represent the edit actions of insertion and deletion respectively. For example, the sentence in HTML format \"外資也不急著佈布局明年，\" is converted to \"外資也不急著[-佈-]{+布+}局明年，\" (\"Foreign investment is not in a hurry to layout next year,\").", "cite_spans": [], "ref_spans": [ { "start": 205, "end": 213, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Figure 2. An example of edit logs in HTML format Figure 3. Examples of different edit types in edit logs", "sec_num": null }, { "text": "After that, we attempt to extract the sentences that contain at least one typo. As shown in Figure 3 , the edit logs can contain many kinds of edits, including spelling corrections, content changes, and style modifications (such as synonym replacement). Among these edits, we are only concerned with spelling corrections. However, the lack of edit type annotations makes it difficult to directly identify spelling errors. Thus, we consider consecutive single-character edit pairs of deletion and insertion (e.g., \"[-佈-]{+布+}\" or \"{+布+}[-佈-]\") as spelling corrections and extract the sentences containing such edit pairs. Furthermore, we use a set of rules to filter out certain kinds of edits, such as time-related and digit-related ones. Figure 3 shows some edited sentences; the fifth, sixth, seventh, eighth and eleventh sentences are regarded as sentences with spelling errors according to these simple rules. The output of this stage is a set of sentences with spelling errors annotated using simple edit tags, as shown in Figure 4 .", "cite_spans": [], "ref_spans": [ { "start": 92, "end": 100, "text": "Figure 3", "ref_id": null }, { "start": 727, "end": 735, "text": "Figure 3", "ref_id": null }, { "start": 1013, "end": 1021, "text": "Figure 4", "ref_id": null } ], "eq_spans": [], "section": "Figure 2. An example of edit logs in HTML format Figure 3. Examples of different edit types in edit logs", "sec_num": null }, { "text": "Although this approach to extracting edited sentences involving spelling corrections obtains quite a few results, there is still room for improvement. For example, the edited sentence \"價值上百萬的好禮[-通通-]{+統統+}帶回家。\" ('Bring millions of good gifts home') contains a consecutive two-character edit pair \"[-通通-]{+統統+}\" (both pronounced 'tong tong'), which is also a spelling error correction. However, it is not extracted, because we only consider consecutive single-character edit pairs. In some cases, an edited sentence might be wrongly regarded as a misspelled sentence. For example, the sentence \"這項計畫將持續募款到今年[-聖-]{+耶+}誕節，\" ('This project will continue to raise funds until this Christmas,') contains the edit pair \"[-聖-]{+耶+}\", which is a style modification. Considering the context of the edited character, the words \"聖誕節\" (pronounced 'sheng dan jie', the birthday of the holy child Jesus) and \"耶誕節\" (pronounced 'ye dan jie', the birthday of Jesus) are both correct, and they mean almost the same thing. For such cases, using word segmentation and a meaning similarity measure between the two words may be helpful.", "cite_spans": [ { "start": 619, "end": 624, "text": "[-聖-]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Figure 4. Example outputs for the step of extracting misspelled sentences", "sec_num": null }
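To make the extraction step concrete, the following is a minimal sketch (ours, not the authors' released code) of how a clause carrying the simple edit tags can be turned into a right-and-wrong sentence pair while discarding clauses with other edit types. The regular expression and helper names are illustrative assumptions.

```python
import re

# Consecutive single-character deletion/insertion pairs, in either order,
# e.g. "[-佈-]{+布+}" or "{+布+}[-佈-]".
EDIT_PAIR = re.compile(r'\[-(.)-\]\{\+(.)\+\}|\{\+(.)\+\}\[-(.)-\]')

def to_sentence_pair(tagged_clause):
    """Turn an edit-tagged clause into a (source, target) pair, or return
    None if it carries no single-character spelling edit."""
    if not EDIT_PAIR.search(tagged_clause):
        return None
    # Source keeps the deleted (wrong) character, target the inserted one.
    source = EDIT_PAIR.sub(lambda m: m.group(1) or m.group(4), tagged_clause)
    target = EDIT_PAIR.sub(lambda m: m.group(2) or m.group(3), tagged_clause)
    # Discard clauses that still contain other (non-spelling) edits.
    if re.search(r'\[-|\{\+', source + target):
        return None
    return source, target

print(to_sentence_pair('外資也不急著[-佈-]{+布+}局明年，'))
# -> ('外資也不急著佈局明年，', '外資也不急著布局明年，')
```

A filter of this kind captures single-character substitutions only; the two-character cases such as "[-通通-]{+統統+}" discussed above would require widening the pattern.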
, { "text": "In the second stage of the training process, we create a set of artificial misspelled sentences to expand our training data. These generated data are expected to make the Chinese spelling checker more effective. The input to this stage is a set of presumably error-free sentences from published texts, with word segmentation performed using the tool provided by the CKIP Project (Ma & Chen, 2003) . Artificial misspelled sentences are generated by injecting errors into these error-free sentences. Although a correct word could in principle be misspelled as any other Chinese word, some right-and-wrong word pairs are much more likely to occur than others. In order to generate realistic spelling errors, we use a confusion set consisting of commonly confused right-and-wrong word pairs (see Table 1 ). The wrong words in the confusion set are used to replace their correct counterparts in the sentences. For example, we use the error-free sentence \"也跟患者賠罪了十分鐘\" ('also apologized to the patient for ten minutes') to generate three misspelled sentences, as shown in Table 2 . Figure 5 shows the procedure for generating artificial misspelled sentences, which uses the MapReduce framework to speed up the process. •Map procedure: In Step (1), for each word in the given (presumably) error-free sentence of length no greater than 20 words, we obtain the corresponding confused words. For example, the confusion set of the word \"賠罪\" contains two confused wrong words:", "cite_spans": [ { "start": 389, "end": 406, "text": "(Ma & Chen, 2003)", "ref_id": null } ], "ref_spans": [ { "start": 787, "end": 794, "text": "Table 1", "ref_id": "TABREF0" }, { "start": 1051, "end": 1058, "text": "Table 2", "ref_id": "TABREF1" }, { "start": 1061, "end": 1069, "text": "Figure 5", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Generating Artificially Misspelled Sentences", "sec_num": "3.2.2" }, { "text": "\"培罪\" and \"陪罪\". The original word is then replaced with its corresponding confused words in Steps (2a) and (2b). 
To work with the MapReduce framework, we then format the output data as key-value pairs in Steps (3a) and (3b). In order to group the generated misspelled sentences by replacement (e.g., \"賠罪\" replaced with \"培罪\"), we use a right-and-wrong word pair (e.g., \"賠罪|||培罪\") as the key and a right-and-wrong sentence pair (e.g., \"也跟患者賠罪了十分鐘|||也跟患者培罪了十分鐘\") as the value. Finally, the key-value pair is output in Step (4).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generating Artificially Misspelled Sentences", "sec_num": "3.2.2" }, { "text": "•Reduce procedure: In this procedure, the inputs are the key-value pairs output by the Mapper. For a given word pair, there might be too many sentence pairs. Thus, in Step (1), we set a threshold N to limit the number of sentences generated. In order to randomly sample a set of sentences, we redistribute the sentence pairs by shuffling them in Step (2), and output the first N sentence pairs in Step (3).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generating Artificially Misspelled Sentences", "sec_num": "3.2.2" }, { "text": "The output of this stage is a set of right-and-wrong sentence pairs, as shown in Table 3 .", "cite_spans": [], "ref_spans": [ { "start": 81, "end": 88, "text": "Table 3", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Generating Artificially Misspelled Sentences", "sec_num": "3.2.2" }, { "text": "The confusion set plays an important role in this stage, so it is critical to decide what kind of confusion set to use. There are several available word-level and character-level confusion sets. However, compared with a word, a Chinese character can be confused with many more characters based on shape and sound similarity. For example, the character \"賠\" is confused with 23 characters with similar shape and 21 characters with similar sound in a character-level confusion set, while the word \"賠罪\" is confused with only two words in a word-level confusion set. Moreover, a typo might involve not only the character itself but also its context. If we used a character-level confusion set, an error-free sentence would produce numerous, and probably unrealistic, artificial misspelled sentences. Therefore, we decided to use word-level confusion sets.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generating Artificially Misspelled Sentences", "sec_num": "3.2.2" }
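As a rough illustration of the Map and Reduce procedures in Figure 5, the sketch below mimics them in memory with plain Python functions; the MapReduce plumbing, the toy confusion set, and the function names are our own simplifications, not the authors' implementation.

```python
import random
from collections import defaultdict

# Toy confusion set: a correct word maps to its commonly confused wrong words.
CONFUSION = {'賠罪': ['培罪', '陪罪']}

def map_sentence(segmented, confusion, max_len=20):
    """Map step: for a word-segmented sentence, emit
    (right|||wrong word pair, right|||wrong sentence pair)."""
    if len(segmented) > max_len:
        return
    for i, word in enumerate(segmented):
        for wrong in confusion.get(word, []):
            bad = segmented[:i] + [wrong] + segmented[i + 1:]
            yield (f'{word}|||{wrong}',
                   f"{''.join(segmented)}|||{''.join(bad)}")

def reduce_pairs(grouped, n=3):
    """Reduce step: shuffle and keep at most N sentence pairs per word pair."""
    for key, pairs in grouped.items():
        random.shuffle(pairs)
        yield key, pairs[:n]

grouped = defaultdict(list)
for key, value in map_sentence(['也', '跟', '患者', '賠罪', '了', '十分鐘'], CONFUSION):
    grouped[key].append(value)
for key, sampled in reduce_pairs(grouped):
    print(key, sampled)
```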
, { "text": "In the third and final stage of the training process, we train the character-based neural machine translation (NMT) model that serves as the Chinese spelling checker, translating a potentially misspelled sentence into a correct one.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Neural Machine Translation Model", "sec_num": "3.2.3" }, { "text": "The architecture of an NMT model typically consists of an encoder and a decoder. The encoder consumes the source sentence X = [x_1, x_2, ..., x_I] and the decoder generates the translated target sentence Y = [y_1, y_2, ..., y_J]. For the task of correcting spelling errors, a potentially misspelled sentence is treated as the source sentence X, which is translated into the target sentence Y with the errors corrected. To train the NMT model, we use the right-and-wrong sentence pairs from the edit logs (Section 3.2.1) and the artificially generated data (Section 3.2.2) as target-and-source training sentence pairs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Neural Machine Translation Model", "sec_num": "3.2.3" }, { "text": "In the training phase, the model is given (X, Y) pairs. At encoding time, the encoder reads a source sentence X, projects it to a sequence of embedding vectors e = [e_1, e_2, ..., e_I], and transforms it into a context vector c:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Neural Machine Translation Model", "sec_num": "3.2.3" }, { "text": "c = q(h_1, h_2, ..., h_I) (1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Neural Machine Translation Model", "sec_num": "3.2.3" }, { "text": "where q is some nonlinear function.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Neural Machine Translation Model", "sec_num": "3.2.3" }, { "text": "We use a bidirectional recurrent neural network (RNN) encoder to compute a sequence of hidden state vectors h = [h_1, h_2, ..., h_I]. The bidirectional RNN encoder consists of two independent encoders: a forward and a backward RNN. The forward RNN encodes the normal sequence, and the backward RNN encodes the reversed sequence. The hidden state vector h_i at time i is defined as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Neural Machine Translation Model", "sec_num": "3.2.3" }, { "text": "fh_i = ForwardRNN(h_{i-1}, e_i) (2) bh_i = BackwardRNN(h_{i+1}, e_i) (3) h_i = [fh_i || bh_i] (4)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Neural Machine Translation Model", "sec_num": "3.2.3" }, { "text": "where || denotes the vector concatenation operator.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Neural Machine Translation Model", "sec_num": "3.2.3" }, { "text": "At decoding time, the decoder is trained to output a target sentence Y by predicting the next character y_j based on the context vector c and all the previously predicted characters {y_1, y_2, ..., y_{j-1}}:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Neural Machine Translation Model", "sec_num": "3.2.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "p(Y | X) = ∏_{j=1}^{J} p(y_j | y_1, y_2, ..., y_{j-1}; c)", "eq_num": "(5)" } ], "section": "Neural Machine Translation Model", "sec_num": "3.2.3" }, { "text": "The conditional probability is modeled as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Neural Machine Translation Model", "sec_num": "3.2.3" }, { "text": "p(y_j | y_1, y_2, ..., y_{j-1}; c) = g(y_{j-1}, h'_j, c) (6)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Neural Machine Translation Model", "sec_num": "3.2.3" }, { "text": "where g is a nonlinear function, and h'_j is the hidden state vector of the RNN decoder at time j.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Neural Machine Translation Model", "sec_num": "3.2.3" }, { "text": "We use an attention-based RNN decoder that focuses on the most relevant information in the source sentence rather than on the entire source sentence. 
Thus, the conditional probability in Equation (5) is redefined as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Neural Machine Translation Model", "sec_num": "3.2.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "p(y_j | y_1, y_2, ..., y_{j-1}; e) = g(y_{j-1}, h'_j, c_j)", "eq_num": "(7)" } ], "section": "Neural Machine Translation Model", "sec_num": "3.2.3" }, { "text": "where the hidden state vector h'_j is computed as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Neural Machine Translation Model", "sec_num": "3.2.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "h'_j = f(y_{j-1}, h'_{j-1}, c_j) (8); c_j = Σ_{i=1}^{I} a_{ji} h_i (9); a_{ji} = exp(score(h'_j, h_i)) / Σ_{i'=1}^{I} exp(score(h'_j, h_{i'}))", "eq_num": "(10)" } ], "section": "Neural Machine Translation Model", "sec_num": "3.2.3" }, { "text": "Unlike Equation (6), here the probability is conditioned on a distinct context vector c_j for each target character y_j. The context vector c_j follows the same computation as in Bahdanau, Cho, and Bengio (2014) . We use the global attention approach (Luong, Pham & Manning, 2015) with the general score function to compute the attention weight a_{ji}:", "cite_spans": [ { "start": 179, "end": 211, "text": "Bahdanau, Cho, and Bengio (2014)", "ref_id": "BIBREF0" }, { "start": 251, "end": 280, "text": "(Luong, Pham & Manning, 2015)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Neural Machine Translation Model", "sec_num": "3.2.3" }, { "text": "score(h'_j, h_i) = h'_j^T W_a h_i (11)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Neural Machine Translation Model", "sec_num": "3.2.3" }, { "text": "Instead of implementing an NMT model from scratch, we use OpenNMT (Klein, Kim, Deng, Senellart, & Rush, 2017) , an open source toolkit for neural machine translation and sequence modeling, to train the model. The training details and hyper-parameters of our model are described in Section 4.2.", "cite_spans": [ { "start": 74, "end": 109, "text": "Kim, Deng, Senellart, & Rush, 2017)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Neural Machine Translation Model", "sec_num": "3.2.3" }
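The attention computation above is standard global attention; the following sketch (ours, for illustration only) reproduces Equations (9)-(11) with NumPy for a single decoding step, assuming a decoder state h'_j, encoder states h_1..h_I, and a learned matrix W_a.

```python
import numpy as np

def general_attention(h_dec, H_enc, W_a):
    """Equations (9)-(11): general score, softmax attention weights a_ji,
    and the resulting context vector c_j for one decoding step."""
    scores = H_enc @ (W_a @ h_dec)           # score(h'_j, h_i) per source position
    weights = np.exp(scores - scores.max())  # numerically stable softmax (Eq. 10)
    weights /= weights.sum()
    context = weights @ H_enc                # context vector c_j (Eq. 9)
    return weights, context

rng = np.random.default_rng(0)
I, d = 6, 4                                  # source length, hidden size
weights, context = general_attention(rng.normal(size=d),
                                     rng.normal(size=(I, d)),
                                     rng.normal(size=(d, d)))
print(weights.shape, context.shape)          # -> (6,) (4,)
```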
, { "text": "Once the NMT model has been trained for correcting spelling errors, we apply it at run time. AccuSpell corrects a given potentially misspelled sentence with the character-based NMT model using the procedure in Figure 6 .", "cite_spans": [], "ref_spans": [ { "start": 230, "end": 238, "text": "Figure 6", "ref_id": null } ], "eq_spans": [], "section": "Run-time Error Correction", "sec_num": "3.3" }, { "text": "With a character-based NMT model, the input sentence is expected to follow a format in which tokens are space-separated. Thus, in Step (1), the characters in the given sentence are separated with spaces. For example, \"今晚月色很美，我想小灼一杯。\" is transformed into \"今 晚 月 色 很 美 ， 我 想 小 灼 一 杯 。\". In Step (2), the source sentence is fed to our NMT model. During processing, the encoder first transforms the source sentence into a sequence of vectors. The decoder then computes the probabilities of the predicted target sentences given the vectors of the source sentence. Finally, a beam search is used to find a target sentence that approximately maximizes the conditional probability. Table 4 shows the top three target sentences predicted by our NMT model for the source sentence \"今晚月色很美，我想小灼一杯。\"; the highest-scoring one, \"今晚月色很美，我想小酌一杯。\", is returned as the correction.", "cite_spans": [], "ref_spans": [ { "start": 650, "end": 657, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Figure 6. Correcting spelling errors in a sentence", "sec_num": null }, { "text": "Table 4. Top three target sentences for the source sentence \"今晚月色很美，我想小灼一杯。\" predicted by the NMT model (Target Sentence / Predicted Score / Rank): 今晚月色很美，我想小酌一杯。 / -0.0047 / 1; 今晚月色也美，我想小酌一杯。 / -6.93 / 2; 今晚月色很美，我想小灼一耶。 / -7.36 / 3", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Table 4. Top three target sentences predicted by the NMT model", "sec_num": null }, { "text": "To give useful and clear feedback, we convert the correction result into a more informative presentation instead of presenting users with the raw output of the NMT model. Therefore, in Steps (3a) and (3b), we compare the source sentence with the target sentence to find the differences between them, and we use simple edit tags to mark these differences. Finally, in Step (4), the converted result (e.g., \"今晚月色很美，我想小[-灼-]{+酌+}一杯。\") is returned by AccuSpell. As shown in Figure 1 , the characters to be deleted (e.g., \"[-灼-]\") are colored in red, while the inserted characters (e.g., \"{+酌+}\") are colored in green.", "cite_spans": [], "ref_spans": [ { "start": 458, "end": 466, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Run-time Error Correction", "sec_num": "3.3" }
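Steps (3a), (3b), and (4) amount to a character-level diff between the source and the predicted target. A minimal sketch using Python's standard difflib (an assumption on our part; the paper does not say how the comparison is implemented) is:

```python
import difflib

def tag_edits(source, target):
    """Mark differences between the input and the corrected sentence
    with the simple edit tags used throughout this paper."""
    out = []
    matcher = difflib.SequenceMatcher(a=source, b=target)
    for op, i1, i2, j1, j2 in matcher.get_opcodes():
        if op in ('replace', 'delete'):
            out.append(f'[-{source[i1:i2]}-]')   # deleted characters
        if op in ('replace', 'insert'):
            out.append(f'{{+{target[j1:j2]}+}}') # inserted characters
        if op == 'equal':
            out.append(source[i1:i2])
    return ''.join(out)

print(tag_edits('今晚月色很美，我想小灼一杯。', '今晚月色很美，我想小酌一杯。'))
# -> 今晚月色很美，我想小[-灼-]{+酌+}一杯。
```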
, { "text": "AccuSpell was designed to correct spelling errors in Chinese texts written by native speakers. As such, AccuSpell is trained and evaluated mainly on real edit logs and a newspaper corpus. In this section, we first give a brief description of the datasets used in the experiments in Section 4.1 and describe the hyper-parameters of the NMT model in Section 4.2. The NMT models trained with different experimental settings for performance comparison are then described in Section 4.3. Finally, in Section 4.4, we introduce the evaluation metrics used to assess the performance of these models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setting", "sec_num": "4." }, { "text": "United Daily News (UDN) Edit Logs: The UDN Edit Logs were provided to us by UDN Digital. This dataset records the editing actions on daily UDN news from June 2016 to January 2017. There are 1.07 million HTML files with more than 30 million edits of various types: approximately 11 million insertions and 20 million deletions. However, the lack of edit type annotations makes it difficult to directly identify spelling errors. Thus, we extracted a set of annotated sentences involving spelling error corrections from these edit logs using the approach described in Section 3.2.1. To train the NMT model, we transformed every annotated sentence into a source-and-target parallel sentence pair. For example, \"外資也不急著[-佈-]{+布+}局明年，\" is transformed into the source sentence \"外資也不急著佈局明年，\" and the target sentence \"外資也不急著布局明年，\". In total, 238,585 sentences were extracted from the UDN Edit Logs, and each sentence contains only edits related to spelling errors. We divided these extracted sentences into two parts: one (226,913 sentences) for training the NMT models, and the other (11,943 sentences) for evaluation in our experiments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dataset", "sec_num": "4.1" }, { "text": "United Daily News (UDN): The UDN news dataset was also provided by UDN Digital. It consists of published newswire data from 2004 to 2017, containing approximately 1.8 million news articles with over 530 million words. Unlike the UDN Edit Logs, UDN is composed of news articles that had already been edited and published. We used the presumably error-free sentences in this dataset to generate artificial misspelled sentences, as described in Section 3.2.2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "United Daily News (UDN):", "sec_num": null }, { "text": "Table 5. Examples from the Uniform Words List of UDN (recommended word / unrecommended word): 啞巴 ('dumb', pronounced 'ba') / 啞吧; 背著 ('carrying', pronounced 'bei') / 揹著; 背黑鍋 ('take the blame') / 揹黑鍋; 刨冰 ('shaved ice', pronounced 'bao') / 鉋冰; 市長杯 ('mayor cup', pronounced 'bei') / 市長盃; 慘澹 ('miserable', pronounced 'dan') / 慘淡; 淡泊 ('indifferent') / 澹泊; 老闆 ('boss', pronounced 'ban') / 老板", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Table 5. Examples from the Uniform Words List of UDN", "sec_num": null }, { "text": "Confusion Sets: We used five distinct confusion sets collected from different sources:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dataset", "sec_num": null }, { "text": "• 聯合報統一用字 (Uniform Words List of UDN): This dataset, provided by UDN Digital, contains 1,056 easily confused word pairs. As shown in Table 5 , each confused word pair indicates which word is recommended and which one should not be used in UDN news articles. However, not all of the unrecommended words are wrong, because the suggestions are merely writing-preference rules for UDN journalists. For example, in the confused word pair [\"市長杯\", \"市長盃\"] ('mayor cup') in Table 5 , the former is recommended and the latter is not, but both are correct and in common use. 
In our work, we collect all of these word pairs and treat them as right-and-wrong word pairs.", "cite_spans": [], "ref_spans": [ { "start": 140, "end": 147, "text": "Table 5", "ref_id": null }, { "start": 491, "end": 498, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Dataset", "sec_num": null }, { "text": "• 東東錯別字 (Kwuntung Typos Dictionary): This dataset was collected from the Web (www.kwuntung.net/check/) and contains a set of commonly confused right-and-wrong word pairs. In each word pair, there is one distinct character with a similar pronunciation or shape between the right and the wrong word. We obtained 38,125 different right-and-wrong word pairs in total, which constitute the main part of our confusion set.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dataset", "sec_num": null }, { "text": "• 新編常用錯別字門診 (New Common Typos Diagnosis): This dataset comes from the print publication 新編錯別字門診 (蔡有秩, 2003) and contains 492 right-and-wrong word pairs.", "cite_spans": [ { "start": 96, "end": 107, "text": "(蔡有秩, 2003)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Dataset", "sec_num": null }, { "text": "• 常見錯別字辨正辭典 (Dictionary of Common Typos): This dataset is from the print publication 常見錯別字辨正辭典 (蔡榮圳, 2012). There are 601 right-and-wrong word pairs in total.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dataset", "sec_num": null }, { "text": "• 國中錯字表 (The Typos List for Middle School): This dataset contains a set of commonly misused right-and-wrong word pairs for middle school students. It originally contains 1,720 word pairs. However, some pairs are composed of phrases (e.g., \"觀念不佳\" and \"為自己的未來鋪路\") instead of words. To ensure that all pairs are at the word level, we used some rules to transform the phrase pairs into word pairs. For example, the right-and-wrong phrase pair [\"為自己的未來鋪路\", \"為自己的未來捕路\"] ('Pave the way for your own future') is transformed into the word pair [\"鋪路\", \"捕路\"] (pronounced 'pu lu' and 'bu lu'). Moreover, we discarded the pairs that cannot be transformed, such as [\"十來枝的掃具\", \"十來隻的掃具\"] ('A dozen brooms.'). After that, 1,551 word pairs remained.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dataset", "sec_num": null }, { "text": "The confused word pairs of the five confusion sets are combined into a collection of over 40,000 word pairs. However, for a given confused word pair, the judgments in different confusion sets might be inconsistent. Consider the confused word pair [\"鐘錶\", \"鐘表\"] ('clock', pronounced 'zhong biao'): \"鐘錶\" is right and \"鐘表\" is wrong in the Kwuntung Typos Dictionary, while \"鐘表\" is adopted and \"鐘錶\" is not recommended in the Uniform Words List of UDN. Furthermore, the confusion sets are not guaranteed to be absolutely correct. To resolve these problems, we used the Chinese dictionary published by the Ministry of Education of Taiwan as the gold standard. 
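The filtering itself can be as simple as a dictionary-membership test. The sketch below illustrates the idea under the assumption that the dictionary is available as a set of valid words; the function name and the toy entries are ours, not the authors'.

```python
def build_cfset(confusion_pairs, dictionary):
    """Keep only pairs whose 'right' word is a valid dictionary entry and
    whose 'wrong' word is not; `dictionary` is a set of valid words."""
    cfset = set()
    for right, wrong in confusion_pairs:
        if right in dictionary and wrong not in dictionary:
            cfset.add((right, wrong))
        elif wrong in dictionary and right not in dictionary:
            cfset.add((wrong, right))  # the source sets disagreed; flip the pair
        # otherwise both (or neither) are valid words: drop the pair
    return cfset

dictionary = {'鐘錶', '布置', '布告欄'}          # toy stand-in for the MoE dictionary
pairs = [('鐘錶', '鐘表'), ('鐘表', '鐘錶'), ('佈置', '布置')]
print(build_cfset(pairs, dictionary))
# -> {('鐘錶', '鐘表'), ('布置', '佈置')}  (set order may vary)
```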
After filtering out the invalid word pairs, we obtained the new confusion set, CFset, with 33,551 distinct commonly confused word pairs. Table 6 shows the number of word pairs in each confusion set. Test Data: We used two test sets for evaluation; Table 7 shows their detailed statistics:", "cite_spans": [], "ref_spans": [ { "start": 778, "end": 785, "text": "Table 6", "ref_id": "TABREF3" }, { "start": 893, "end": 900, "text": "Table 7", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Dataset", "sec_num": null }, { "text": "• UDN Edit Logs: As mentioned earlier, the UDN Edit Logs were partitioned into two independent parts, for training and testing respectively. The test part contains 11,943 sentences, of which we used 1,175 for evaluation; 919 of these contain at least one error.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dataset", "sec_num": null }, { "text": "• SIGHAN-7: We also used the dataset provided by the SIGHAN 7 Bake-off 2013 (Wu, Liu & Lee, 2013) . This dataset contains two subtasks: Subtask 1 is for error detection and Subtask 2 is for error correction. In our work, we focus on evaluating error correction, so we used Subtask 2 as an additional test set. There are 1,000 sentences with spelling errors in Subtask 2, and the average sentence length is approximately 70 characters. To be consistent with the UDN Edit Logs, we segmented these sentences into 6,101 clauses, 1,222 of which contain at least one error.", "cite_spans": [ { "start": 77, "end": 93, "text": "Liu & Lee, 2013)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Dataset", "sec_num": null }, { "text": "We trained several models using the same hyper-parameters in our experiments. For all models, the source and target vocabulary sizes are limited to 10K, since the models are trained at the character level. For the source and target characters, the character embedding vector size is set to 500. We trained the models with sequence lengths of up to 50 characters for both source and target sentences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hyper-parameters of NMT Model", "sec_num": "4.2" }, { "text": "The encoder is a 2-layer bidirectional long short-term memory (LSTM) network, which consists of a forward LSTM and a backward LSTM, and the decoder is also a 2-layer LSTM. Both the encoder and the decoder have 500 hidden units. We use the Adam algorithm (Kingma & Ba, 2014) as the optimization method to train our models, with a learning rate of 0.001, and the maximum gradient norm is set to 5. Once a model is trained, beam search with a beam size of 5 is used to find a translation that approximately maximizes the probability.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hyper-parameters of NMT Model", "sec_num": "4.2" }
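For completeness, the sketch below shows the kind of beam search used at this step, decoupled from any particular model: `step` stands in for the NMT decoder's next-character distribution, which is our own assumption for illustration.

```python
import heapq
import math

def beam_search(step, start, beam_size=5, max_len=50, eos='</s>'):
    """Generic beam search: `step(prefix)` returns [(char, logprob), ...]
    for the next position; keeps the `beam_size` best partial outputs."""
    beams = [(0.0, [start])]
    for _ in range(max_len):
        candidates = []
        for logp, seq in beams:
            if seq[-1] == eos:            # finished hypotheses are kept as-is
                candidates.append((logp, seq))
                continue
            for ch, lp in step(seq):
                candidates.append((logp + lp, seq + [ch]))
        beams = heapq.nlargest(beam_size, candidates, key=lambda b: b[0])
        if all(seq[-1] == eos for _, seq in beams):
            break
    return beams

# Toy next-character distribution: two continuations, then end-of-sentence.
toy = lambda seq: ([('酌', math.log(0.9)), ('灼', math.log(0.1))]
                   if len(seq) < 3 else [('</s>', 0.0)])
for logp, seq in beam_search(toy, '<s>', beam_size=2):
    print(round(logp, 3), ''.join(seq))
```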
, { "text": "Our experimental evaluation focuses on the writing of native speakers. Therefore, we used the UDN Edit Logs and the artificially generated misspelled sentences as the training data. To investigate whether adding artificially generated data improves the performance of our Chinese spelling check system, we compared the results produced by several models trained on different combinations of the datasets.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Models Compared", "sec_num": "4.3" }, { "text": "In addition, we use additional features on the source and target characters, in the form of discrete labels, to train the NMT model. As Liu et al. (2011) stated, around 75% of typos are related to phonological similarity between the correct and the incorrect characters, and about 45% are due to visual similarity. Thus, we use the pronunciation and the shape of a character, obtained from the Unihan Database, as additional features of the source and target characters. For example, for the character \"詣\", the pronunciation feature is \"ㄧ\" (without considering the tone) and the shape features are \"言\" and \"旨\". On the other hand, a spelling error might involve not only the character itself but also its context, so we use the context (with window size 1) of a character as additional features to train another model. Table 8 gives an example illustrating the pronunciation, shape, and context features.", "cite_spans": [ { "start": 133, "end": 150, "text": "Liu et al. (2011)", "ref_id": null } ], "ref_spans": [ { "start": 813, "end": 820, "text": "Table 8", "ref_id": null } ], "eq_spans": [], "section": "Models Compared", "sec_num": "4.3" }, { "text": "Table 8. Features for the sentence \"我想小酌一杯。\": Feature 我 想 小 酌 一 杯 。 / Sound ㄨㄛ (wo), ㄒㄧㄤ (xiang), ㄒㄧㄠ (xiao), ㄓㄨㄛ (zhuo), ㄧ (yi), ㄅㄟ (bei), N / Shape (戈,我), (心,相), (小,小), (酉,勺), (一,一), (木,不), (N,N) / Context (BEG,想), (我,小), (想,酌), (小,一), (酌,杯), (一,。), (杯,END)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Table 8. Features for the sentence 我想小酌一杯。", "sec_num": null }, { "text": "In total, eight models were trained for comparison; only the last two were trained with features. The eight models evaluated and compared are as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Models Compared", "sec_num": "4.3" }, { "text": "• UDN-only: This model was trained on the 226,913 sentence pairs from the training part of the UDN Edit Logs. • Artificial-only: This model was trained on 899,385 artificially generated sentence pairs. • UDN + Artificial (1:1), (1:2), (1:3), and (1:4): These models were trained on the UDN training pairs combined with artificially generated sentence pairs, where the ratio of edit-log data to artificial data is 1:1, 1:2, 1:3, and 1:4 respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Models Compared", "sec_num": "4.3" }, { "text": "• FEAT-Sound & Shape: This model was trained on the same data as the UDN + Artificial (1:3) model, with the pronunciation and shape features of each character.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Models Compared", "sec_num": "4.3" }, { "text": "• FEAT-Context: This model was trained on the same data as the UDN + Artificial (1:3) model, with the context features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Models Compared", "sec_num": "4.3" }
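To illustrate how the FEAT models' inputs can be prepared, the sketch below attaches sound and shape features to each character in the word-feature format of OpenNMT (token and features joined with the '￨' separator, as we understand that format); the per-character lookup tables are toy excerpts of Table 8, and the function name is ours.

```python
# Toy per-character lookups built from the Unihan Database (excerpts only).
SOUND = {'我': 'ㄨㄛ', '想': 'ㄒㄧㄤ', '小': 'ㄒㄧㄠ', '酌': 'ㄓㄨㄛ',
         '一': 'ㄧ', '杯': 'ㄅㄟ', '。': 'N'}
SHAPE = {'我': ('戈', '我'), '想': ('心', '相'), '小': ('小', '小'),
         '酌': ('酉', '勺'), '一': ('一', '一'), '杯': ('木', '不'),
         '。': ('N', 'N')}

def annotate(sentence):
    """Return the space-separated, feature-annotated source line."""
    tokens = []
    for ch in sentence:
        left, right = SHAPE.get(ch, ('N', 'N'))
        tokens.append(f'{ch}￨{SOUND.get(ch, "N")}￨{left}￨{right}')
    return ' '.join(tokens)

print(annotate('我想小酌一杯。'))
# -> 我￨ㄨㄛ￨戈￨我 想￨ㄒㄧㄤ￨心￨相 ... 。￨N￨N￨N
```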
, { "text": "Chinese spelling check systems are usually compared based on two main metrics, precision and recall. We use the metrics provided by the SIGHAN-8 Bake-off 2015 Chinese spelling check shared task (Tseng, Lee, Chang, & Chen, 2015) , which include the false positive rate, accuracy, precision, recall, and F1, to evaluate our systems.", "cite_spans": [ { "start": 202, "end": 227, "text": "Lee, Chang, & Chen, 2015)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Evaluation Metrics", "sec_num": "4.4" }, { "text": "The confusion matrix is used for calculating these evaluation metrics. In the matrix, TP (True Positive) is the number of sentences with spelling errors that are correctly identified by the system; FP (False Positive) is the number of sentences in which non-existent errors are identified; TN (True Negative) is the number of sentences without spelling errors that are correctly identified as such; FN (False Negative) is the number of sentences with spelling errors that are not correctly identified.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Metrics", "sec_num": "4.4" }, { "text": "Table 9. Test sentences and their gold standard (error position, correct character): S1 希望藉此鼓勵自己和他人要積極樂觀實現夢想。 0; S2 PM2.5 對人體健康危害大， 11, 危; S3 因為難以達到連數門檻， 8, 署; S4 他還記得自己當年還是學校棒球隊員， 2, 還, 6, 己; S5 剛推動的社會住宅也要設一定比例的大陽光電。 8, 宅; S6 美麗的勇士山頭將被掏空了嗎？ 0; S7 未來發展需要新的能力、新的動能， 15, 力; S8 學生因宗教、種族、國籍而遭羞辱者大幅增加。 7, 種", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Metrics", "sec_num": "4.4" }, { "text": "For example, consider the 8 test sentences with the gold standard shown in Table 9 . Assuming that our system outputs the results shown in Table 10 , the evaluation metrics are measured as follows:", "cite_spans": [], "ref_spans": [ { "start": 65, "end": 72, "text": "Table 9", "ref_id": null }, { "start": 130, "end": 138, "text": "Table 10", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Evaluation Metrics", "sec_num": "4.4" }, { "text": "• FPR = 0.5 (= 1/2) Notes: {S7}/{S1, S7}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Metrics", "sec_num": "4.4" }
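The remaining metrics follow directly from the confusion-matrix counts defined above. A small sketch of the sentence-level computation, with an assumed system output of our own (Table 10 is not reproduced here) chosen so that the FPR also comes out to 1/2:

```python
def csc_metrics(gold, predicted):
    """Sentence-level FPR, accuracy, precision, recall and F1, where each
    entry is True if that sentence is flagged as containing an error."""
    tp = sum(g and p for g, p in zip(gold, predicted))
    fp = sum((not g) and p for g, p in zip(gold, predicted))
    tn = sum((not g) and (not p) for g, p in zip(gold, predicted))
    fn = sum(g and (not p) for g, p in zip(gold, predicted))
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return {'FPR': fp / (fp + tn),
            'Accuracy': (tp + tn) / len(gold),
            'Precision': precision,
            'Recall': recall,
            'F1': 2 * precision * recall / (precision + recall)}

# S1 and S6 carry no gold error in Table 9; suppose the system flags every
# sentence except S4 and S6 (an illustrative assumption).
gold = [False, True, True, True, True, False, True, True]
pred = [True, True, True, False, True, False, True, True]
print(csc_metrics(gold, pred))
# -> FPR = 1/2, Accuracy = 6/8, Precision = 5/6, Recall = 5/6, F1 = 5/6
```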
Interestingly, in contrast to the results on the UDN Edit Logs, the model trained on edit logs only performs significantly worse than the others, while the model trained on artificially generated data only performs reasonably well. We note that the additional features, whether sound and shape or context, bring no obvious improvement here.", "cite_spans": [], "ref_spans": [ { "start": 480, "end": 488, "text": "Table 11", "ref_id": "TABREF0" }, { "start": 1080, "end": 1088, "text": "Table 12", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Results and Discussion", "sec_num": "5." }, { "text": "In general, we obtain extremely low average FPR on the two test sets. There are three obvious differences between the results on the two test sets. First, the model trained on edit logs only (UDN-only) and the model trained on artificially generated data only (Artificial-only) show opposite results on the UDN Edit Logs and SIGHAN-7: UDN-only performs well on the UDN Edit Logs but very poorly on SIGHAN-7, whereas Artificial-only has the worst performance on the UDN Edit Logs but acceptable performance on SIGHAN-7. Second, we obtain relatively high precision compared with recall on the UDN Edit Logs, but higher recall than precision on SIGHAN-7. Third, as shown in Table 13, the model trained with sound and shape features has significantly better accuracy, recall, and F1 score on the UDN Edit Logs; on SIGHAN-7, however, only its recall is slightly better than that of the model trained without features.", "cite_spans": [], "ref_spans": [ { "start": 674, "end": 682, "text": "Table 13", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Evaluation Results", "sec_num": "5.1" }, { "text": "The two test sets are different in nature: the UDN Edit Logs were produced by newspaper editors, while SIGHAN-7 was collected from essays written by junior high school students. We therefore analyze and discuss the details of the two test sets in this section.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Error Analysis", "sec_num": "5.2" }, { "text": "We use the confusion sets provided by the SIGHAN 7 Bake-off 2013 (Wu et al., 2013), which contain sets of characters with similar pronunciations and shapes, to analyze the relations between typos and the corresponding corrections in our test data. There are 919 typos in the UDN Edit Logs and 1,266 typos in SIGHAN-7. As shown in Table 14, the analysis results for the UDN Edit Logs and SIGHAN-7 are similar: most typos are related to similar pronunciation, over 35% of typos are due to similar shape, and around 30% of typos involve both similar pronunciation and similar shape. Tables 15 and 16 show analyses of the evaluation results for the UDN Edit Logs and SIGHAN-7, respectively. According to the analysis of the errors that were not corrected, there is no significant difference among the models: in both test sets, around half of the uncorrected spelling errors are related to similar pronunciation, no matter which model is used. Some special cases in the test sets are worth discussing. For example, the error character \"怖\" (pronounced 'bu'), occurring in words such as \"怖告欄\" (pronounced 'bu gao lan') and \"怖置\" (pronounced 'bu zhi'), should be corrected to \"佈\" (pronounced 'bu') according to the SIGHAN-7 gold standard. 
However, the correction predicted by our models is \"布\", since we used the Chinese dictionary published by the Ministry of Education of Taiwan as the gold standard for our training data. According to the dictionary, \"佈置\" and \"佈告欄\" are invalid, while \"布置\" ('decorate') and \"布告欄\" ('bulletin board') are legal. Another kind of case is related to grammatical errors. Our models aim to correct spelling errors, but some sentences in SIGHAN-7 contain grammatical errors, such as \"要如何在站起來呢？\" ('How to stand up again?') and \"哪激的起美麗的浪花？\" ('How can it stir up the beautiful spray?'), where \"在\" (pronounced 'zai') and \"的\" (pronounced 'de') should be \"再\" (pronounced 'zai') and \"得\" (pronounced 'de') respectively. These kinds of errors involve the dependency structure of the sentence. In the predicted results of our models, we found that the model trained on artificially generated data only cannot correct such errors; the models using edit logs perform slightly better on them, but the difference is small.", "cite_spans": [ { "start": 38, "end": 60, "text": "SIGHAN 7 Bake-off 2013", "ref_id": null }, { "start": 61, "end": 77, "text": "(Wu et al., 2013", "ref_id": null } ], "ref_spans": [ { "start": 322, "end": 330, "text": "Table 14", "ref_id": "TABREF0" }, { "start": 587, "end": 595, "text": "Table 15", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Error Analysis", "sec_num": "5.2" }, { "text": "Besides the test data, we also found that the model trained with additional features could correct some new and unseen errors. For example, the sentence \"他在文學方面有很高的造酯。\" contains the typo \"酯\" (pronounced 'zhi'), which is not corrected by the model trained without features because our training data does not cover this typo. However, the sentence is correctly translated into \"他在文學方面有很高的造詣。\" by the model trained with sound and shape features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Table 15. Distribution of the relations between uncorrected typos and corrections in the evaluation results on the UDN Edit Logs", "sec_num": null }, { "text": "Many avenues exist for future research and improvement of our system. For example, the method for extracting misspelled sentences from newspaper edit logs could be improved: we currently only consider sentences containing consecutive single-character edit pairs, but two-character edit pairs could also involve spelling correction. Moreover, we could investigate how to use character-level confusion sets to expand the scale of confused word pairs; with more confusable word pairs, we could generate more comprehensive artificial error data. Additionally, an interesting direction to explore is expanding the scope of error correction to include grammatical errors. Yet another direction of research would be to further improve the neural machine translation model for Chinese spelling check.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "6." }, { "text": "In our work, we pay particular attention to the data and to methods of augmenting data for Chinese spelling check (CSC). 
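Before the summary of our augmentation resources below, here is a minimal sketch of confusion-set-based error injection; the toy entries and the uniform sampling policy are illustrative assumptions, not the exact procedure or data used in this work:

```python
import random

# A toy confusion set: each correct word maps to commonly confused wrong forms.
# (Illustrative entries; the real sets are collected from typo dictionaries.)
CONFUSION = {"小酌": ["小灼"], "布置": ["佈置", "怖置"]}

def inject_error(sentence, confusion=CONFUSION, rng=random):
    """Return a (wrong, correct) sentence pair, or None if nothing matches."""
    candidates = [w for w in confusion if w in sentence]
    if not candidates:
        return None
    word = rng.choice(candidates)                      # pick one confusable word
    typo = rng.choice(confusion[word])                 # pick one wrong form for it
    return sentence.replace(word, typo, 1), sentence   # (source, target) for NMT

print(inject_error("他想下班後小酌一杯。"))
```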
We collect a series of confusion sets from the Web, including 東東錯別字 (Kwuntung Typos Dictionary), 新編常用錯別字門診 (New Common Typos Diagnosis), 常用錯別字 (Dictionary of Common Typos), and 國中錯字表 (The Typos List for Middle School). To augment the training data for the NMT model, we develop a way of injecting artificial errors into error-free sentences with the confusion sets. In addition, we compare different mixing ratios of real and artificial data and find that more artificial data improves the performance. Finally, we conduct experiments on models with additional features (e.g., pronunciation, shape components, and context words) to show that phonological, visual, and contextual information can improve recall and help the model generalize to common typos.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "6." }, { "text": "In summary, we have proposed a novel method for learning to correct typos in Chinese text. The method combines real edit logs and artificially generated errors to train a neural machine translation model that translates a potentially erroneous sentence into a correct one. The results show that adding artificially generated data successfully improves the overall performance of error correction.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "6." }, { "text": "This thesis sets out to explore novel and effective end-to-end extractive methods for spoken document summarization. To this end, we propose a neural summarization approach leveraging a hierarchical modeling structure with an attention mechanism to understand a document deeply, and in turn to select representative sentences as its summary. Meanwhile, to alleviate the negative effect of speech recognition errors, we make use of acoustic features and subword-level input representations in the proposed approach. Finally, we conduct a series of experiments on the Mandarin Broadcast News (MATBN) Corpus. The experimental results confirm the utility of our approach, which improves upon the performance of state-of-the-art methods. 
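The selection step that this abstract describes, scoring every sentence with the model and keeping the most representative ones, can be pictured in a few lines (a hypothetical illustration, not the thesis implementation):

```python
def extract_summary(sentences, scores, m=3):
    """Rank sentences by their summary probability p(y=1 | s, D) and
    return the top-m in original document order."""
    ranked = sorted(range(len(sentences)), key=lambda i: scores[i], reverse=True)
    chosen = sorted(ranked[:m])               # restore document order
    return [sentences[i] for i in chosen]
```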
(Narayan, Papasarantopoulos, Cohen & Lapata, 2017; Narayan et al., 2018a; Narayan et al., 2018b; ", "cite_spans": [ { "start": 746, "end": 796, "text": "(Narayan, Papasarantopoulos, Cohen & Lapata, 2017;", "ref_id": "BIBREF21" }, { "start": 797, "end": 819, "text": "Narayan et al., 2018a;", "ref_id": null }, { "start": 820, "end": 842, "text": "Narayan et al., 2018b;", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "We first formulate spoken document summarization as a sequence labeling problem, in which each sentence of a document is labeled as summary or non-summary (1 or 0, respectively). The task objective is therefore to maximize the class probabilities, i.e., the likelihood, and the objective function can be defined as: $p(\mathbf{y} \mid D, \theta) = \prod_{t=1}^{M} p(y_t \mid s_t, D, \theta)$", "eq_num": "(1)" } ], "section": "Abstract", "sec_num": null }, { "text": "Given a document $D$, a sequence of sentences $(s_1, \dots, s_M)$, our method selects $m$ sentences from $D$, ranks them, and takes them as its summary. For each sentence $s_t \in D$, we predict a score $p(y_t \mid s_t, D, \theta)$, with $y_t \in \{0, 1\}$, as the basis for judging whether the sentence belongs to the summary. All sentences are then ranked by the score of being a summary sentence, $p(y_t = 1 \mid s_t, D, \theta)$, and the top $m$ sentences are taken as the summary of the document.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "Since our training corpus consists mainly of news stories, whose gist usually lies in the opening few sentences, the article is fed to the document encoder in reverse order, which lets the RNN retain the important information better. The document encoder can therefore be defined as: $\mathbf{h}_t = \mathrm{RNN}(\mathbf{h}_{t-1}, s_{M-t+1})$", "eq_num": "(2)" } ], "section": "Abstract", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "Our sentence extractor labels each sentence in the document as 1 (summary) or 0 (non-summary). Here we use another RNN, whose inputs are again the sentence vectors produced by the sentence encoder. Unlike the document encoder, the extractor reads the document in its original order, so it can be defined as: $\bar{\mathbf{h}}_t = \mathrm{RNN}(\bar{\mathbf{h}}_{t-1}, s_t)$", "eq_num": "(4)" } ], "section": "Abstract", "sec_num": null }, { "text": "$p(y_t \mid s_t, D, \theta) = \mathrm{softmax}(\mathrm{MLP}(\bar{\mathbf{h}}_t))$ (6) [Figure 5. Basic architecture with attention mechanism]", "cite_spans": [], "ref_spans": [ { "start": 16, "end": 24, "text": "Figure 5", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Abstract", "sec_num": null }, { "text": "where $\bar{\mathbf{h}}_t$ is the hidden state and $\mathrm{RNN}(\cdot)$ is an RNN architecture whose inputs are the previous hidden state $\bar{\mathbf{h}}_{t-1}$ and the current sentence $s_t$. So that the gist of the whole article can be consulted while selecting summary sentences, the initial hidden state $\bar{\mathbf{h}}_0$ is set to the document vector $\mathbf{d}$. In this way both local (single-sentence) and global (document-level) information is taken into account, which discriminates sentences better. Finally, the class $y_t$ of each sentence is computed via (6), where $\mathrm{MLP}(\cdot)$ is a simple feed-forward neural network followed by a softmax function that yields the class probability $p(y_t \mid s_t, D, \theta)$; sentences are ranked according to $p(y_t = 1 \mid s_t, D, \theta)$.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "$e_{t,i} = \mathbf{v}^{\top} \tanh(\mathbf{W}_h \bar{\mathbf{h}}_t + \mathbf{W}_s \mathbf{h}_i)$ (7); $\alpha_{t,i} = \exp(e_{t,i}) / \sum_{j=1}^{M} \exp(e_{t,j})$ (8); $\mathbf{c}_t = \sum_{i=1}^{M} \alpha_{t,i} \mathbf{h}_i$ (9); $\bar{\mathbf{h}}'_t = \mathbf{c}_t \odot \bar{\mathbf{h}}_t$", "eq_num": "(10)" } ], "section": "Abstract", "sec_num": null }, { "text": "For the document summarization task, it is worth noting that the summary should cover as much of the important information in the original document as possible. Hence, if we want the summary to contain more important information, we should extract the sentences that bear some relation to every other sentence in the document, so we incorporate an attention mechanism (Bahdanau et al., 2015) into our architecture. The attention mechanism can measure the relevance of each sentence to the other sentences, so the model can be improved into the architecture shown in Figure 5. [Figure 5. Hierarchical neural summarization model with attention mechanism]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "To incorporate the attention mechanism, our summarization task can first be defined simply as: $p(y_t \mid s_t, \mathbf{c}_t, D, \theta)$", "eq_num": "(13)" } ], "section": "Abstract", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "where $\mathbf{c}_t$ is the context vector computed by the attention mechanism, $\bar{\mathbf{h}}_t$ is the hidden-state information of the sentence extractor, and $g(\cdot)$ denotes the whole sentence extractor; this expresses the extractor's goal of predicting the summary class probability $p(y_t \mid s_t, D, \theta)$ for each sentence. Since the attention mechanism is incorporated during extraction, (4) can be redefined as: $\bar{\mathbf{h}}_t = \mathrm{RNN}(\bar{\mathbf{h}}_{t-1}, s_t, \mathbf{c}_t)$", "eq_num": "(14)" } ], "section": "Abstract", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "Traditional summarization models are usually trained with Maximum Likelihood Estimation (MLE), i.e., maximizing $p(\mathbf{y} \mid D, \theta) = \prod_{t} p(y_t \mid s_t, D, \theta)$, so cross entropy is chosen to compute the loss, and the objective function can be defined as: $L(\theta) = -\sum_{t=1}^{M} \log p(y_t \mid s_t, D, \theta)$", "eq_num": "(18)" } ], "section": "Abstract", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "Therefore, we use reinforcement learning (Sutton & Barto, 1998) to assist model training. Basic reinforcement learning requires a reward function, which judges whether the model's current prediction is correct, encouraging the update if it is and penalizing it otherwise. Since the reward function need not be as strict as the loss function, we use the summarization evaluation metric ROUGE as the reward, and the training objective becomes minimizing the negative expected reward: $L(\theta) = -\mathbb{E}_{\hat{\mathbf{y}} \sim p(\cdot \mid D, \theta)}[r(\hat{\mathbf{y}})]$ (19), where $r(\cdot)$ is the reward function and $\hat{\mathbf{y}}$ is a predicted summary obtained by sampling. However, the number of possible predicted summaries $\hat{\mathbf{y}}$ is effectively unbounded, and enumerating all of them at every training step to compute the expectation and adjust the parameters would be prohibitively expensive. We therefore replace (19) with (20), taking only one sample per training step to speed up training, and rewrite the gradient as (21), which makes training easier: $L(\theta) \approx -r(\hat{\mathbf{y}})$ (20); $\nabla_{\theta} L(\theta) \approx -r(\hat{\mathbf{y}}) \, \nabla_{\theta} \log p(\hat{\mathbf{y}} \mid D, \theta)$", "eq_num": "(21)" } ], "section": "Abstract", "sec_num": null }, { "text": "With the rapid growth of information, browsing social media on the Internet is becoming a part of people's daily lives. Social platforms give us the latest information in real time, for example, sharing personal life and commenting on social events. However, with the vigorous development of social platforms, lots of rumors and fake messages are appearing on the Internet. Most social platforms use manual reporting or statistics to distinguish rumors, which is very inefficient. In this paper, we propose a multimodal feature fusion approach to rumor detection by combining an image captioning model with deep attention networks. First, for images extracted from tweets, we apply an Image Caption model to generate captions with Convolutional Neural Networks (CNNs) and a Sequence-to-Sequence (Seq2Seq) model. Second, words in captions and text contents from tweets are represented as vectors by word embedding models and combined with social features in tweets using early and late fusion strategies. Finally, we design multi-layer and multi-cell Bi-directional Recurrent Neural Networks (BRNNs) with an attention mechanism to find word dependencies and learn the most important features for classification. From the experimental results, the best F-measure of 0.89 can be obtained for our proposed multi-cell BRNN based on Gated Recurrent Units (GRUs) with attention, using early fusion of all features except for user features. This shows the potential of our proposed approach to rumor detection. Further investigation is needed for data at larger scales. [Figure 3. The architecture of the Seq2Seq model, which outputs \"WXYZ\" for input \"ABC\" (Sutskever et al., 2014)] [Figure 4. The architecture of the attention mechanism combining bidirectional RNNs (Bahdanau et al., 2015)] [Figure 11. Multi-head attention (Vaswani et al., 2017)]", "cite_spans": [ { "start": 1919, "end": 1941, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF59" } ], "ref_spans": [ { "start": 1589, "end": 1613, "text": "(Sutskever et al., 2014)", "ref_id": null }, { "start": 1616, "end": 1624, "text": "Figure 3", "ref_id": null }, { "start": 1734, "end": 1742, "text": "Figure 4", "ref_id": null }, { "start": 1814, "end": 1837, "text": "(Bahdanau et al., 2015)", "ref_id": "FIGREF1" }, { "start": 1886, "end": 1895, "text": "Figure 11", "ref_id": null } ], "eq_spans": [], "section": "Abstract", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "As shown in Figure 4, given an input document $(x_1, x_2, \dots, x_T)$, a bidirectional RNN (BRNN) first produces the hidden states $(\mathbf{h}_1, \mathbf{h}_2, \dots, \mathbf{h}_T)$, where $\mathbf{h}_i = [\overrightarrow{\mathbf{h}}_i; \overleftarrow{\mathbf{h}}_i]$. Let the current decoder state be $\mathbf{s}_t$; the relation between input and output can then be expressed as: $p(y_t \mid y_1, \dots, y_{t-1}, \mathbf{x}) = g(y_{t-1}, \mathbf{s}_t, \mathbf{c}_t)$", "eq_num": "(1)" } ], "section": "Keywords: rumor detection, recurrent neural networks, attention mechanism, image captioning, feature fusion", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "$\mathrm{Attention}(Q, K, V) = \mathrm{softmax}(QK^{\top} / \sqrt{d_k})\,V$, where $d_k$ is the dimension of the keys. The larger the dimension, the larger the inner product of $Q$ and $K$ becomes, so it is divided by the scaling factor $\sqrt{d_k}$ to keep the values from growing too large; a softmax function then normalizes the result, and the resulting weights are multiplied with $V$ to update the vectors. To speed up computation, we adopt the Multi-Head Attention architecture, as shown in Figure 11.", "eq_num": "(4)" } ], "section": "Keywords: rumor detection, recurrent neural networks, attention mechanism, image captioning, feature fusion", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "After $h$ different linear projections, $h$ scaled dot-product attention networks can be computed in parallel; the results of the heads are concatenated and passed through one more linear transformation to obtain the multi-head attention output: $\mathrm{MultiHead}(Q, K, V) = \mathrm{Concat}(\mathrm{head}_1, \dots, \mathrm{head}_h) W^{O}$, with $\mathrm{head}_i = \mathrm{Attention}(Q W_i^{Q}, K W_i^{K}, V W_i^{V})$", "eq_num": "(5)" } ], "section": "Keywords: rumor detection, recurrent neural networks, attention mechanism, image captioning, feature fusion", "sec_num": null },
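As a sanity check of equations (4) and (5) above, here is a minimal NumPy sketch of single-head scaled dot-product attention (shapes and names are illustrative, not the authors' code):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Eq. (4): softmax(Q K^T / sqrt(d_k)) V."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                        # scale to keep logits moderate
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)         # row-wise softmax
    return weights @ V

# Toy shapes: 4 query positions, 6 key/value positions, d_k = d_v = 8.
# Multi-head attention (eq. 5) runs h projected copies of this in parallel.
Q, K, V = np.random.randn(4, 8), np.random.randn(6, 8), np.random.randn(6, 8)
out = scaled_dot_product_attention(Q, K, V)                # shape (4, 8)
```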
"\u95dc\u9375\u8a5e\uff1a\u8b20\u8a00\u6aa2\u6e2c\u3001\u905e\u8ff4\u5f0f\u795e\u7d93\u7db2\u8def\u3001\u6ce8\u610f\u529b\u6a5f\u5236\u3001\u5716\u50cf\u63cf\u8ff0\u3001\u7279\u5fb5\u878d\u5408", "sec_num": null }, { "text": "Child speech samples have traditionally been collected by visiting children's homes or inviting families into a research laboratory. LENA (Language Environment Analysis) software, a system that collects audio data without research assistants' presence and parses out audio data into several categories automatically, was developed in 2004 in the United States (LENA Research Foundation, 2020). The software has been used for observing English-speaking individuals (Gilkerson & Richards, 2008; Greenwood, Thiemann-Bourque, Walker, Buzhardt & Gilkerson, 2011; Suskind et al., 2013) , Chinese-speaking families (Gilkerson et al., 2015; Lee, Jhang, Relyea, Chen & Oller, 2018; , preterm infants (Caskey, Stephens, Tucker & Vohr, 2011 , multilingual speakers (Liu & Kager, 2017; Oller, 2010; Orena, Polka & Srouji, 2018) , individuals with disorders (Ambrose, VanDam & Moeller, 2014; Charron et al., 2016; Oller et al., 2010; Thiemann-Bourque, Warren, Brady, Gilkerson & Richards, 2014; VanDam, Ambrose & Moeller, 2012; Warren et al., 2010) , and older adults (Li, Vikani, Harris & Lin, 2014) . The number of studies on the quantity of linguistic input, conversational turns, and child vocalizations in Chinese-speaking home environments have been limited. The present study observed changes in the quantity of linguistic input, conversational turns, and child vocalizations which occur between 5 and 30 months of age in Chinese-speaking families using LENA.", "cite_spans": [ { "start": 464, "end": 492, "text": "(Gilkerson & Richards, 2008;", "ref_id": "BIBREF81" }, { "start": 493, "end": 557, "text": "Greenwood, Thiemann-Bourque, Walker, Buzhardt & Gilkerson, 2011;", "ref_id": "BIBREF86" }, { "start": 558, "end": 579, "text": "Suskind et al., 2013)", "ref_id": "BIBREF114" }, { "start": 608, "end": 632, "text": "(Gilkerson et al., 2015;", "ref_id": "BIBREF83" }, { "start": 633, "end": 672, "text": "Lee, Jhang, Relyea, Chen & Oller, 2018;", "ref_id": "BIBREF92" }, { "start": 691, "end": 729, "text": "(Caskey, Stephens, Tucker & Vohr, 2011", "ref_id": "BIBREF72" }, { "start": 754, "end": 773, "text": "(Liu & Kager, 2017;", "ref_id": "BIBREF96" }, { "start": 774, "end": 786, "text": "Oller, 2010;", "ref_id": "BIBREF98" }, { "start": 787, "end": 815, "text": "Orena, Polka & Srouji, 2018)", "ref_id": "BIBREF101" }, { "start": 845, "end": 878, "text": "(Ambrose, VanDam & Moeller, 2014;", "ref_id": "BIBREF66" }, { "start": 879, "end": 900, "text": "Charron et al., 2016;", "ref_id": "BIBREF74" }, { "start": 901, "end": 920, "text": "Oller et al., 2010;", "ref_id": "BIBREF99" }, { "start": 921, "end": 981, "text": "Thiemann-Bourque, Warren, Brady, Gilkerson & Richards, 2014;", "ref_id": "BIBREF115" }, { "start": 982, "end": 1014, "text": "VanDam, Ambrose & Moeller, 2012;", "ref_id": "BIBREF116" }, { "start": 1015, "end": 1035, "text": "Warren et al., 2010)", "ref_id": "BIBREF118" }, { "start": 1055, "end": 1087, "text": "(Li, Vikani, Harris & Lin, 2014)", "ref_id": "BIBREF94" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." 
}, { "text": "Research has shown that linguistic input, including the quantity and quality of caregiver speech and turn taking sequences, plays an important role in the child's vocal development (Caskey et al., 2011; Hart & Risley, 1995; Rowe, 2012; Suskind et al., 2013) . This in turn serves as a strong predictor of their later vocabulary growth (Hart & Risley, 1995; Ram\u00edrez Esparza, Garc\u00edaSierra & Kuhl, 2014) . Studies have also found that early vocal production is associated with future speech and language development. Rescorla et al. (2000) indicated that some children who were identified as late talkers at two years of age continued to exhibit language delay and were identified as children with Specific Language Impairment at three years of age. Gilkerson et al. (2018) also showed that school-age language and cognitive outcomes (9-13 years old) and quantity of adult talk and adult-child interaction during 18 to 24 months of age are related. ", "cite_spans": [ { "start": 181, "end": 202, "text": "(Caskey et al., 2011;", "ref_id": "BIBREF72" }, { "start": 203, "end": 223, "text": "Hart & Risley, 1995;", "ref_id": "BIBREF88" }, { "start": 224, "end": 235, "text": "Rowe, 2012;", "ref_id": "BIBREF109" }, { "start": 236, "end": 257, "text": "Suskind et al., 2013)", "ref_id": "BIBREF114" }, { "start": 335, "end": 356, "text": "(Hart & Risley, 1995;", "ref_id": "BIBREF88" }, { "start": 357, "end": 400, "text": "Ram\u00edrez Esparza, Garc\u00edaSierra & Kuhl, 2014)", "ref_id": null }, { "start": 514, "end": 536, "text": "Rescorla et al. (2000)", "ref_id": "BIBREF107" }, { "start": 747, "end": 770, "text": "Gilkerson et al. (2018)", "ref_id": "BIBREF82" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Linguistic input from adults or siblings is identified as one of the largest influences on children's verbal performances, including that of preterm infants (Caskey et al., 2011) . Children understand five times more words than the words they produce (Ingram, 1989 ), suggesting that a substantial number of words need to be heard before a child speaks. Roy et al. (2009) reported that adult word input frequencies and age of acquisition of words is highly correlated. Adult word input between 10 and 36 months of age has been found to be related to a child's IQ at 3 years (Hart & Risley, 1995) . Gilkerson and Richards (2009) also found that children who scored higher on language assessments tended to have talkative parents. The number of words parents spoke to children between two and six months of age predicted language ability at two years of age. Parents who earned at least a bachelor's degree talked more to their children than less educated parents. Also, first-born children were spoken to more than later born children.", "cite_spans": [ { "start": 157, "end": 178, "text": "(Caskey et al., 2011)", "ref_id": "BIBREF72" }, { "start": 251, "end": 264, "text": "(Ingram, 1989", "ref_id": "BIBREF89" }, { "start": 354, "end": 371, "text": "Roy et al. (2009)", "ref_id": "BIBREF110" }, { "start": 574, "end": 595, "text": "(Hart & Risley, 1995)", "ref_id": "BIBREF88" }, { "start": 598, "end": 627, "text": "Gilkerson and Richards (2009)", "ref_id": "BIBREF80" } ], "ref_spans": [], "eq_spans": [], "section": "Linguistic Input and Conversational Turn", "sec_num": "1.1" }, { "text": "Children may be at risk of learning languages if they do not have sufficient language exposure (Velleman & Vihman, 2002) . 
Many scholars have claimed that language acquisition takes place even when the linguistic input that children are exposed to is addressed to them only indirectly (Akhtar, Jipson & Callanan, 2001; Oshima-Takane, 1988; Oshima-Takane, Goodz & Derevensky, 1996). Other scholars have argued that speech addressed directly to children has a stronger effect on children's language learning (Oller, 2010; Pearson, Fernandez, Lewedeg & Oller, 1997; Shneidman, Arroyo, Levine & Goldin-Meadow, 2013; Shneidman & Goldin-Meadow, 2012; Weisleder & Fernald, 2013). For example, Shneidman et al. (2013) and Shneidman and Goldin-Meadow (2012) found that direct speech plays a more important role in early word learning than indirect speech, even for children who grew up in communities where indirect speech was the major linguistic input.", "cite_spans": [ { "start": 95, "end": 120, "text": "(Velleman & Vihman, 2002)", "ref_id": "BIBREF117" }, { "start": 280, "end": 313, "text": "(Akhtar, Jipson & Callanan, 2001;", "ref_id": "BIBREF65" }, { "start": 314, "end": 334, "text": "Oshima-Takane, 1988;", "ref_id": "BIBREF102" }, { "start": 335, "end": 375, "text": "Oshima-Takane, Goodz & Derevensky, 1996)", "ref_id": "BIBREF103" }, { "start": 497, "end": 510, "text": "(Oller, 2010;", "ref_id": "BIBREF98" }, { "start": 511, "end": 553, "text": "Pearson, Fernandez, Lewedeg & Oller, 1997;", "ref_id": "BIBREF105" }, { "start": 554, "end": 602, "text": "Shneidman, Arroyo, Levine & Goldin-Meadow, 2013;", "ref_id": "BIBREF112" }, { "start": 603, "end": 635, "text": "Shneidman & Goldin-Meadow, 2012;", "ref_id": null }, { "start": 636, "end": 662, "text": "Weisleder & Fernald, 2013)", "ref_id": "BIBREF119" }, { "start": 705, "end": 728, "text": "Shneidman et al. (2013)", "ref_id": "BIBREF112" }, { "start": 733, "end": 766, "text": "Shneidman and Goldin-Meadow (2012)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Linguistic Input and Conversational Turn", "sec_num": "1.1" }, { "text": "In addition to receiving speech and language input, children also respond to that input (Hart & Risley, 1995). Mother-child vocal interactions have been discussed in several studies (Gratier et al., 2015; Gros-Louis, West, Goldstein & King, 2006; Jaffe et al., 2001). From 3 to 4 months of age, infants start to use pragmatic, semantic, and syntactic factors to predict when a conversational turn will end and begin (Gratier et al., 2015). However, studies on linguistic input and conversational turn-taking in Chinese-speaking environments, especially on vocalizations produced in home environments, are as yet few in number. 
Studies investigating the relationship among linguistic input, conversational turns, and children's vocalizations should shed some light on our understanding of the relationship between different types of linguistic input and language development.", "cite_spans": [ { "start": 87, "end": 108, "text": "(Hart & Risley, 1995)", "ref_id": "BIBREF88" }, { "start": 182, "end": 204, "text": "(Gratier et al., 2015;", "ref_id": "BIBREF84" }, { "start": 205, "end": 246, "text": "Gros-Louis, West, Goldstein & King, 2006;", "ref_id": "BIBREF87" }, { "start": 247, "end": 266, "text": "Jaffe et al., 2001)", "ref_id": "BIBREF91" }, { "start": 417, "end": 439, "text": "(Gratier et al., 2015)", "ref_id": "BIBREF84" } ], "ref_spans": [], "eq_spans": [], "section": "Linguistic Input and Conversational Turn", "sec_num": "1.1" }, { "text": "Although the LENA system has mostly been utilized in American-English environments, it has yielded valid and reliable speech and language estimates in other languages (French: Canault, Le Normand, Foudil, Loundon & Thai-Van, 2016; Spanish: Weisleder & Fernald, 2013; Chinese (Mandarin and Shanghai dialect): Gilkerson et al., 2015; Korean: Pae et al., 2016; Dutch: Busch, Sangen, Vanpoucke & van Wieringen, 2018; Vietnamese: Ganek & Eriks-Brophy, 2018). After comparing Chinese speech samples analyzed by the LENA system with the same samples transcribed by a native Chinese transcriber, Gilkerson et al. (2015) indicated that the LENA system identifies and estimates adult words, child vocalizations, and conversational turns with reasonable accuracy. Zhang et al. (2015) observed 22 Chinese-speaking families and their typically developing children between 3 and 23 months of age in Shanghai for a period of 6 months; a total of 19 recordings were made by each family. The 22 families were divided into two groups based on the speech output of the first three recordings: one group of families had fewer adult words (Group A), while the other group had a higher rate of adult words (Group B) in their first three recordings. The authors provided monthly feedback to the families regarding strategies to increase their linguistic input to and interaction with their children. The results overall showed that adult words and conversational turns increased during the first three months but decreased during the last three months; however, Group A showed an increased number of adult words in the last few recordings, which was not observed in Group B. The study indicates that the LENA system can be used to track children's vocal, speech, and language development and/or treatment progress. The authors also found that the number of conversational turns correlated positively with the MacArthur-Bates Communicative Development Inventories (Verbal) (Fensen et al., 2007) and the Minnesota Child Development Inventory (Expressive Language) (Ireton, 1992) scores for the change from baseline to 3 months. LENA estimates have also shown reliable and valid results when compared with scores of standardized assessments (Richards et al., 2017), including the Preschool Language Scale (4th Edition) (Zimmerman, Steiner & Pond, 2002) and the Receptive-Expressive Emergent Language Test (3rd Edition) (Bzoch, League & Brown, 2003). Table 1 shows adult word count (AWC), conversational turn count (CT), and child vocalization count (CV) per hour across various ages, settings, and populations. 
Depending on the children's age and the recording environment, children received different amounts of linguistic input and produced different numbers of words: AWC ranged from 889 to 1,966, CT from 17 to 75, and CV from 73 to 188 per hour. Gilkerson and Richards (2008) examined a corpus of spontaneous speech data in English-speaking families and created normative estimates of CV and CT for each month when children were between 2 and 48 months of age. Here, only studies that reported AWC, CT, and CV per hour in families with 0- to 3-year-old children are included (Table 1).", "cite_spans": [ { "start": 170, "end": 233, "text": "(French: Canault, Le Normand, Foudil, Loundon & Thai-Van, 2016;", "ref_id": null }, { "start": 234, "end": 309, "text": "Spanish: Weisleder & Fernald, 2013; Chinese (Mandarin and Shanghai dialect)", "ref_id": null }, { "start": 312, "end": 335, "text": "Gilkerson et al., 2015;", "ref_id": "BIBREF83" }, { "start": 336, "end": 361, "text": "Korean: Pae et al., 2016;", "ref_id": null }, { "start": 362, "end": 416, "text": "Dutch: Busch, Sangen, Vanpoucke & van Wieringen, 2018;", "ref_id": null }, { "start": 417, "end": 456, "text": "Vietnamese: Ganek & Eriks-Brophy, 2018)", "ref_id": null }, { "start": 593, "end": 616, "text": "Gilkerson et al. (2015)", "ref_id": "BIBREF83" }, { "start": 1952, "end": 1973, "text": "(Fensen et al., 2007)", "ref_id": null }, { "start": 2038, "end": 2052, "text": "(Ireton, 1992)", "ref_id": "BIBREF90" }, { "start": 2214, "end": 2237, "text": "(Richards et al., 2017)", "ref_id": "BIBREF108" }, { "start": 2289, "end": 2322, "text": "(Zimmerman, Steiner & Pond, 2002)", "ref_id": "BIBREF121" }, { "start": 2388, "end": 2417, "text": "(Bzoch, League & Brown, 2003)", "ref_id": "BIBREF68" }, { "start": 2815, "end": 2844, "text": "Gilkerson and Richards (2008)", "ref_id": "BIBREF81" } ], "ref_spans": [ { "start": 2420, "end": 2427, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Assessing Vocal Development Using an Automated Approach", "sec_num": "1.2" }, { "text": "Because of the laborious coding required for estimating linguistic input from the ambient environment, studies focusing on child speech development are usually based on a limited set of recordings. To our knowledge, only three studies (Gilkerson et al., 2015; Lee et al., 2018; Zhang et al., 2015) reported observations in Chinese-learning children's natural environments using LENA. In view of this, LENA was adopted for data collection and processing in the present study. This paper explores the relationship among children's vocalizations, the linguistic input children received, and the amount of adult-child interaction, each measured per hour (e.g., total AWC / total length of a recording). However, the recordings included times when families were asleep. Thus, the present study also investigated the research questions using the total length of each recording without LENA-determined silence time (i.e., quiet and sleep time) to calculate another set of average numbers of AWC, CT, and CV per hour (e.g., total AWC / (total length of a recording minus silence time in the recording)). Periods of silence were removed to ensure that the analysis only included times when children were most likely to be awake. Analyzing results after removing periods of silence from LENA recordings has also been reported in several other studies (Marchman, Martínez, Hurtado, Grüter & Fernald, 2017; Sacks et al., 2013). 
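The two per-hour measures reduce to a single formula; a small sketch under the stated assumptions (counts and durations as reported by LENA; the example numbers are illustrative):

```python
def per_hour(count, recording_hours, silence_hours=0.0):
    """Average count per hour, optionally excluding LENA-determined silence."""
    awake_hours = recording_hours - silence_hours
    return count / awake_hours

# E.g., 10,144 adult words in a 16-hour recording with 11.7 hours of silence:
awc_with_silence = per_hour(10144, 16)           # = 634 per hour
awc_without_silence = per_hour(10144, 16, 11.7)  # ~ 2,359 per hour
```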
Since children at 0-2 years of age sleep an average of 12.7 hours a day and children at 2-3 years of age sleep an average of 12 hours a day (Galland, Taylor, Elder, & Herbison, 2012), the results of the present study could have been influenced by long sleeping times. Therefore, the present study aimed to compare the results when silence time was included with the results when silence time was removed from the analyses.", "cite_spans": [ { "start": 235, "end": 259, "text": "(Gilkerson et al., 2015;", "ref_id": "BIBREF83" }, { "start": 260, "end": 277, "text": "Lee et al., 2018;", "ref_id": "BIBREF92" }, { "start": 1324, "end": 1377, "text": "(Marchman, Martínez, Hurtado, Grüter & Fernald, 2017;", "ref_id": "BIBREF97" }, { "start": 1378, "end": 1397, "text": "Sacks et al., 2013)", "ref_id": "BIBREF111" }, { "start": 1534, "end": 1576, "text": "(Galland, Taylor, Elder, & Herbison, 2012)", "ref_id": "BIBREF77" } ], "ref_spans": [], "eq_spans": [], "section": "The Present Study", "sec_num": "1.3" }, { "text": "The present study investigated the following questions:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Present Study", "sec_num": "1.3" }, { "text": "1. Do adult word count (AWC), conversational turn count (CT), and child vocalization count (CV) increase as children grow older?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Present Study", "sec_num": "1.3" }, { "text": "2. Are there different patterns in AWC, CT, and CV when LENA-determined silence time is removed?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Present Study", "sec_num": "1.3" }, { "text": "3. Are both AWC and CT effective contributors to the number of CV at 5, 10, 14, 21, and 30 months?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Present Study", "sec_num": "1.3" }, { "text": "4. Do AWC, CT, and CV show cross-language differences?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Present Study", "sec_num": "1.3" }, { "text": "2. Methods", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Present Study", "sec_num": "1.3" }, { "text": "Seven Chinese-speaking families and their children (two males and five females) participated in the study. The families lived in Tainan, Taiwan, an environment where Mandarin Chinese and Southern Min (Taiwanese) were mostly spoken. All the children were born full-term without hearing or neurodevelopmental disorders.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Participants", "sec_num": "2.1" }, { "text": "The digital language processor (DLP), a recording device developed along with the LENA Pro system (LENA Research Foundation, 2020), was used to collect data. Before each recording session started, the child wore a specially designed vest holding a DLP (Figure 1). The caregiver turned the DLP on to start a recording session and switched it off after 16 hours of recording. The recording file was automatically uploaded and processed (Figure 2) once the DLP was connected to a computer with the LENA Pro software. The LENA Pro software identified speech and other sounds in each recording and generated counts at 5-minute, hour, day, and month intervals. The authors retrieved the counts/reports (Figure 3) from the software for further analysis. A set of two recordings was made at each age: 5, 10, 14, 21, and 30 months old. A total of 70 recordings were analyzed (i.e., 7 children x 5 ages x 2 recordings). 
All the recordings were 16 hours in length, except for six recordings that, owing to insufficient battery power of the device on the recording day, were between 11 and 14 hours long.", "cite_spans": [], "ref_spans": [ { "start": 247, "end": 257, "text": "(Figure 1)", "ref_id": null }, { "start": 435, "end": 444, "text": "(Figure 2", "ref_id": null }, { "start": 700, "end": 709, "text": "(Figure 3", "ref_id": null } ], "eq_spans": [], "section": "Recording Procedure", "sec_num": "2.2" }, { "text": "The audio data was processed and categorized by the LENA Pro software into eight sound categories:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data Processing by the LENA Software", "sec_num": "2.3" }, { "text": "(1) the key child who wore the vest with the DLP, (2) other child, (3) adult male, (4) adult female, (5) overlapping sounds, (6) noise, (7) electronic sounds (e.g., TV), and (8) silence (i.e., silence, quiet, or vegetative sounds such as sneezes, coughs, or snores). Each category was further divided into clear and unclear (i.e., quiet and distant) subcategories. After the eight sound categories were identified, the LENA system determined adult word count (AWC), conversational turn count (CT), and child vocalization count (CV).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data Processing by the LENA Software", "sec_num": "2.3" }, { "text": "AWC measured the total number of words spoken around the key child. Using acoustic features of the speech signal (e.g., formants, pitch, segment duration, silence duration), adult sounds were identified as phones using American-English phone parsing models. Speech segments were identified based on differential acoustic energy patterns, and no specific adult words were identified. AWC included both speech directed to the key child and speech directed to others. In Mandarin Chinese, one character represents one spoken syllable, whereas one word may contain one or more syllables. For example, 窗戶 chuang hu (window) has two spoken syllables but counts as one word. Gilkerson et al. (2015) compared syllable count (e.g., 窗戶 chuang hu = two syllables) and word count (窗戶 chuang hu = one word) transcribed by a trained native Chinese human transcriber with AWC and found that both comparisons showed valid and reliable estimates of adult word count. The authors suggested that since both comparisons were reliable, researchers can use LENA-determined AWC (syllable count) in future studies. The authors also indicated that since all languages have phonemes and syllables, and the acoustic features of consonants and vowels are similar across languages, using acoustic information to estimate adult word count should not be affected by language differences.", "cite_spans": [ { "start": 670, "end": 693, "text": "Gilkerson et al. (2015)", "ref_id": "BIBREF83" } ], "ref_spans": [], "eq_spans": [], "section": "Adult Word Count (AWC)", "sec_num": "2.3.1" }, { "text": "Conversational turn count (CT) refers to the total number of conversational turns the child engaged in with other speakers. A conversational turn is defined as the key child speaking and an adult or another child responding, or an adult or another child speaking and the key child responding, within 5 seconds. Both intentional and unintentional vocal productions and responses can be counted as turns.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conversational Turn Count (CT)", "sec_num": "2.3.2" },
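A toy approximation of this 5-second turn rule, assuming time-stamped, diarized speaker segments (our illustration, not the LENA algorithm itself), looks as follows:

```python
KEY, ADULT = "key_child", "adult"

def count_turns(segments, window=5.0):
    """segments: list of (speaker, start_sec, end_sec), sorted by start time.
    A turn is counted when the key child and another speaker respond to each
    other within `window` seconds."""
    turns = 0
    for (spk1, _, end1), (spk2, start2, _) in zip(segments, segments[1:]):
        responded_in_time = (start2 - end1) <= window
        child_involved = KEY in (spk1, spk2) and spk1 != spk2
        if responded_in_time and child_involved:
            turns += 1
    return turns

timeline = [(ADULT, 0.0, 2.0), (KEY, 3.5, 4.0), (ADULT, 5.0, 6.0)]
print(count_turns(timeline))  # 2 exchanges within 5 s of each other
```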
{ "text": "Child vocalization count (CV) is the total number of speech-related vocalizations the child produces. A CV was identified whenever there was a vocal break of 300 milliseconds or longer between the key child's vocalizations. Cries, laughs, and vegetative sounds such as sneezes and coughs were excluded from the child vocalization count. As with AWC, the LENA system did not identify specific words or syllables in utterances: whether a child says \"ma\" or \"I want that I want that I want that\" without pauses between words, each utterance is counted as one CV.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Child Vocalization Count (CV)", "sec_num": "2.3.3" }, { "text": "Five categories retrieved from the LENA reports were used for further analyses in the present study: (1) the length of each recording, (2) adult word count (AWC), (3) conversational turn count (CT), (4) child vocalization count (CV), and (5) the length of silence in each recording. Since the length of each recording differed, and the total number of words or sounds in a recording depends on its length, the average numbers of AWC, CT, and CV per hour were first calculated for each recording. Next, the average numbers of AWC, CT, and CV per hour were calculated for each recording with silence excluded. Two sets of statistical measures were then analyzed. First, six one-way repeated-measures ANOVAs were performed to explore whether there were any changes in the three variables (the average numbers of AWC, CT, and CV per hour) across time, both with silence included and with silence excluded. Next, ten multiple regressions were performed at the ages of 5, 10, 14, 21, and 30 months to examine how much AWC and CT contributed to CV at each age, again with silence included and excluded.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data Analyses", "sec_num": "2.4" }, { "text": "Figure 4A shows the average numbers of adult words (AWC), conversational turns (CT), and child vocalizations (CV) per hour and their standard deviations from the recordings made at 5, 10, 14, 21, and 30 months. The average number of AWC per hour shows an increase from 5 to 10 months and a gradual decrease from 10 to 30 months. However, the differences among the five ages are not statistically significant, which is similar to the finding of Gilkerson and Richards (2008), who stated that AWC and chronological age in English-speaking families were not significantly correlated. The results of the present study also showed that the number of child vocalizations increased slowly with age, even though the children received a fair amount of linguistic input from the environment: children heard an average of 412 to 752 adult words per hour from 5 to 30 months of age, yet the average number of child vocalizations only increased from 27 to 90 vocalizations per hour over the same period.", "cite_spans": [ { "start": 1574, "end": 1603, "text": "Gilkerson and Richards (2008)", "ref_id": "BIBREF81" } ], "ref_spans": [ { "start": 1128, "end": 1137, "text": "Figure 4A", "ref_id": null } ], "eq_spans": [], "section": "Data Analyses", "sec_num": "2.4" }, { "text": "The average number of CT per hour also shows a gradual increase from 5 months (5 per hour) to 30 months (23 per hour). The differences among the five ages are statistically significant [F(4, 24) = 3.318, p < .05]. 
A post hoc analysis indicates that the average number of CT per hour at 21 months (18 per hour) is significantly higher than at 5 months (5 per hour) [t(6) = 3.716, p < .05]. The increased number of CT indicates that the adults became more and more responsive to their children's utterances, and vice versa. The adults may have initiated conversations when they thought that their children were ready to talk, or responded to their children's utterances right away; the children, in turn, may have learned to gain other people's attention by producing sounds, or to respond to adults' speech right away as they grew older.", "cite_spans": [], "ref_spans": [ { "start": 846, "end": 860, "text": "5, 10, 14, 21,", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Changes of AWC, CT, and CV over Time", "sec_num": "3.1" }, { "text": "Periods of silence were removed from the recordings to ensure that only the times when children were most likely to be awake were included in the analysis. Figure 4B shows the average numbers of AWC, CT, and CV per hour and their standard deviations after removing the periods of LENA-determined silence from the recordings. The standard deviations of the average AWC per hour were high at all five ages, as shown in both Figures 4A and 4B; the variability across families is even higher after the periods of silence were removed. The percentage of silence (i.e., (silence time/total length of recording) x 100) decreased with age (5 mo: 73%, 10 mo: 66%, 14 mo: 62%, 21 mo: 59%, 30 mo: 48%), which is in line with Galland et al.'s (2012) finding that children's sleep time decreases with age.", "cite_spans": [ { "start": 724, "end": 747, "text": "Galland et al. (2012)", "ref_id": "BIBREF77" } ], "ref_spans": [ { "start": 152, "end": 161, "text": "Figure 4B", "ref_id": null }, { "start": 415, "end": 432, "text": "Figures 4A and 4B", "ref_id": null } ], "eq_spans": [], "section": "Changes of AWC, CT, and CV over Time after Removing Silence", "sec_num": "3.2" }, { "text": "As expected, the mean values of the three variables were at least twice as high without silence as with silence. Without silence time, the average numbers of CT and CV per hour also gradually increased from 5 months (CT: 23; CV: 120 per hour) to 30 months (CT: 48; CV: 190 per hour), but the differences among the five ages were not statistically significant. The average number of AWC per hour showed an increase from 5 months (1,733 per hour) to 10 months (1,945 per hour) and a gradual decrease from 10 to 30 months (1,252 per hour); these differences among the five ages were not statistically significant either. Also, the average number of CT per hour was significantly different across ages before silence was removed, but not after silence was removed; the average number of CT (which increased with age) and the periods of silence (which decreased with age) may account for this change.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Changes of AWC, CT, and CV over Time after Removing Silence", "sec_num": "3.2" }, { "text": "In addition, the AWC and CT in the present study computed across the five ages with silence removed (AWC: 1,734; CT: 39 per hour) were more similar to the Chinese-speaking data from Zhang et al. (2015) (AWC baseline: 1,758; CT baseline: 63 per hour) than the results with silence included (AWC: 634; CT: 14 per hour). 
Zhang et al.'s (2015) results were more similar to the present study's results with silence excluded because those authors instructed their Chinese-speaking families to record for 12 hours during the daytime. This finding also suggests that LENA-determined silence is identified reasonably accurately.", "cite_spans": [ { "start": 297, "end": 318, "text": "Zhang et al.'s (2015)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Changes of AWC, CT, and CV over Time after Removing Silence", "sec_num": "3.2" }, { "text": "Multiple regressions were performed at each age to explore the relationship among AWC, CT, and CV. The results showed that AWC and CT together could predict the number of CV at 10 months and at 30 months. At 10 months, the regression model explained 88.1% of the variance and was a significant predictor of the number of CV, F(2,4) = 23.306, p = .006. While the number of CT contributed significantly to the model (B = 3.677, p = .003), the number of AWC did not (B = -.008, p = .222); that is, an increase of one unit of CT corresponded to an increase of 3.677 units of CV. At 30 months, the regression model explained 95% of the variance and was a significant predictor of the number of CV, F(2,4) = 57.9, p = .001. While the number of CT contributed significantly to the model (B = 3.899, p = .002), the number of AWC did not (B = -.044, p = .266); that is, an increase of one unit of CT corresponded to an increase of 3.899 units of CV.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Relationships among AWC, CT, and CV", "sec_num": "3.3" }, { "text": "Multiple regressions were also performed at each age to explore the relationship among AWC, CT, and CV after the removal of silence. The results showed that AWC and CT together could predict the number of CV at 10 months, 21 months, and 30 months. At 10 months, the regression model explained 85.4% of the variance and was a significant predictor of the number of CV, F(2,4) = 18.614, p = .009; the number of CT contributed significantly to the model (B = 4.194, p = .004), while the number of AWC did not (B = -.017, p = .168), so an increase of one unit of CT corresponded to an increase of 4.194 units of CV. At 21 months, the regression model explained 91.3% of the variance and was a significant predictor of the number of CV, F(2,4) = 32.397, p = .003; the number of CT contributed significantly to the model (B = 3.656, p = .001), while the number of AWC did not (B = -.054, p = .058), so an increase of one unit of CT corresponded to an increase of 3.656 units of CV. At 30 months, the regression model explained 93.9% of the variance and was a significant predictor of the number of CV, F(2,4) = 47.429, p = .002; the number of CT contributed significantly to the model (B = 4.077, p = .01), while the number of AWC did not (B = -.028, p = .664), so an increase of one unit of CT corresponded to an increase of 4.077 units of CV. Both sets of analyses indicated that speech directed to children or spoken right before or after child vocalizations (i.e., 
{ "text": "With silence time included, the average numbers of AWC, CT, and CV across the five ages were 634, 14, and 52 per hour (i.e., 634 x 12 hr = 7608, 14 x 12 hr = 168, and 52 x 12 hr = 624 per 12-hour day), respectively. Compared with the English normative percentile estimates for AWC, CT, and CV in Gilkerson and Richards (2009), the Chinese-speaking families' AWC in the present study was at the 10th-20th percentile, and CT and CV were below the 10th percentile. With silence excluded, the average numbers of AWC, CT, and CV across the five ages were 1734, 39, and 150 per hour (20808, 468, and 1800 per 12-hour day), respectively. Compared with the English normative percentile estimates for AWC, CT, and CV in Gilkerson and Richards (2009), the Chinese-speaking families' AWC in the present study was at the 80th-90th percentile, and CV and CT were at the 40th-50th percentile, which were much higher than when silence was included. As discussed earlier, the results with silence excluded were more similar to Zhang et al.'s (2015) AWC and CT baseline values; the results with silence excluded can therefore be compared to the results in Gilkerson and Richards (2009). These results showed that the Chinese-speaking caregivers in the present study were on the talkative end of the English normative estimates. However, the Chinese-speaking adults and children were not vocally engaged at rates comparable to the AWC, because the percentiles of CT and CV were much lower than the percentile of AWC. Gilkerson and Richards (2009) found that children who were first-born, were girls, or had parents with higher education tended to receive more adult talk each day. Three factors might also have contributed to the high AWC in the present study: 1) all seven mothers were highly educated, having received at least a bachelor's degree, 2) five of the seven children were first-born, and 3) five of the seven children were girls. However, unlike the results reported in Gilkerson and Richards (2009), the talkative caregivers in the present study did not have talkative children. Figure 5 shows longitudinal CT and CV changes in the English-speaking families from Gilkerson and Richards (2008) and the Chinese-speaking families from the present study. Both groups of families showed a gradual increase with age. When silence was included, the Chinese-speaking families showed overall lower CT and CV than the English-speaking families. However, when silence was removed, the Chinese-speaking families showed higher values than the English-speaking families. The group differences could be explained by the fact that the LENA-determined silence included not only times when families were sleeping but also times when families were awake but quiet. The results of the two sets of data would be more comparable if the English samples also excluded LENA-determined silence. Another possible reason for the group differences is sample size. More participants and detailed analyses are needed to explore possible cultural differences or confirm the results.", "cite_spans": [ { "start": 277, "end": 306, "text": "Gilkerson and Richards (2009)", "ref_id": "BIBREF80" }, { "start": 691, "end": 720, "text": "Gilkerson and Richards (2009)", "ref_id": "BIBREF80" }, { "start": 999, "end": 1020, "text": "Zhang et al.'s (2015)", "ref_id": null }, { "start": 1117, "end": 1146, "text": "Gilkerson and Richards (2009)", "ref_id": "BIBREF80" }, { "start": 1465, "end": 1494, "text": "Gilkerson and Richards (2009)", "ref_id": "BIBREF80" }, { "start": 1957, "end": 1986, "text": "Gilkerson and Richards (2009)", "ref_id": "BIBREF80" }, { "start": 2152, "end": 2181, "text": "Gilkerson and Richards (2008)", "ref_id": "BIBREF81" } ], "ref_spans": [ { "start": 2068, "end": 2076, "text": "Figure 5", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Cross-language Comparison", "sec_num": "3.5" },
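The cross-language comparison rests on two small computations: scaling the hourly rates to a 12-hour day and locating the results in a normative percentile table. A sketch under stated assumptions; the percentile band edges below are hypothetical stand-ins, since the Gilkerson and Richards (2009) tables are not reproduced here:

```python
# Sketch of the comparison arithmetic: hourly LENA rates scaled to a
# 12-hour day, then mapped onto normative percentile bands. The band
# edges are hypothetical stand-ins for the published normative tables.
hourly = {"AWC": 634, "CT": 14, "CV": 52}   # with silence included
daily = {k: v * 12 for k, v in hourly.items()}
print(daily)                                 # {'AWC': 7608, 'CT': 168, 'CV': 624}

# Hypothetical daily-AWC percentile band edges (illustrative only).
awc_band_edges = [(10, 6000), (20, 8500), (50, 12500), (80, 17000), (90, 20000)]
band = next((p for p, edge in awc_band_edges if daily["AWC"] <= edge), 99)
print(f"daily AWC falls at or below the {band}th percentile band")
```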
{ "text": "Limitations were identified in the present study and can be addressed in future research. First, differentiating the number of child-initiated conversational turns from adult-initiated conversational turns would help examine parent-child interaction patterns and identify the relationship between CT and CV. Currently, CT counts both cases: when a child speaks and an adult responds, and when an adult speaks and the child responds. The LENA Advanced Data Extractor (ADEX, LENA Research Foundation, 2020) would be useful in future research because it provides a more detailed output, including utterances or words of male adults, female adults, the key child, and other children.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Limitations and Future Directions", "sec_num": "3.6" }, { "text": "Second, to ensure that the key child is actually taking turns with another speaker or vice versa, the content of the adult words and child vocalizations requires human coding because the LENA system does not identify the content of the speech sample. For example, it is possible that a parent was holding the key child while talking to another person, but the LENA system may count this parent's utterances as if she or he were talking to the key child. Third, regarding the unit of speech samples, the LENA system categorizes adult and child speech samples in different units. AWC refers to the number of individual words adults speak, while CV means the number of speech-related utterances produced by the children. When a child produces prelinguistic sounds in a sequence or in one breath, the LENA system may count these sounds as one CV. However, when the child starts to produce words or a mixture of babbling and words, the LENA system may still recognize those word strings/vocalizations as one CV. Again, human coding of the recording would be able to identify children's utterances in word or syllable units once the child starts to produce words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Limitations and Future Directions", "sec_num": "3.6" }, { "text": "Furthermore, the results of the present study were compared only with the English normative estimates because Chinese normative estimates using LENA are not available. Developing a Chinese version of the LENA normative estimates would enhance people's understanding of the effects of early vocal development and adult-child interactions on later development in Chinese-learning children.
Including a larger cohort of participants (e.g., families with different socio-economic statuses, later-born children, and male children) to collect a corpus would better represent Chinese-learning children's speech capacity at each age.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Limitations and Future Directions", "sec_num": "3.6" }, { "text": "The LENA automated approach has provided researchers with a new recording method that has automatic parsing capacities. The researchers investigated longitudinal changes in the average AWC, CT, and CV with and without silence time, the relationships among the three variables, and cross-language comparisons in Chinese-learning families with children ranging in age from 5 to 30 months. The percentage of LENA-determined silence decreased with age, indicating that the children's awake time increased as they aged. The results also showed that a typically developing Chinese-learning child in the present study listened to an average of 1734 adult words, engaged in 39 conversational turns, and produced 150 vocalizations per hour from 5 to 30 months of age when he or she was awake. Child vocalizations and conversational turns increased over time, but adult word count did not show a clear pattern. When the periods of silence were included, the numbers of AWC and CT predicted the numbers of CV at 10 months and 30 months. After the periods of silence were removed, the results showed that the numbers of AWC and CT predicted the numbers of CV at 10, 21, and 30 months. This result suggests that the speech produced in temporal proximity to children's vocalizations or directed to children exerted a stronger influence on the number of child vocalizations than the quantity of adult words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "4." }, { "text": "This paper proposes a model based on memory networks that includes a multi-hop mechanism to process a small set of sentences, with the question-answering task used as the verification application. The model first saves the knowledge in memory, then finds the relevant memories through an attention mechanism, and an output module infers the final answer. All experiments used the bAbI dataset provided by Facebook, which contains 20 different kinds of Q&A tasks that can be used to evaluate the model from different aspects. This approach reduces the number of memory associations by calculating the associations between memories. In addition to reducing the computational load by 26.8%, it also improves the accuracy of the model by about 9.2% in the experiments. The experiments also used a smaller amount of data to verify the system's behavior in the case of an insufficient dataset. (Sukhbaatar et al., 2015) (Henaff et al., 2017) (Seo et al., 2017)", "cite_spans": [ { "start": 929, "end": 954, "text": "(Sukhbaatar et al., 2015)", "ref_id": "BIBREF142" }, { "start": 955, "end": 976, "text": "(Henaff et al., 2017)", "ref_id": "BIBREF131" }, { "start": 977, "end": 995, "text": "(Seo et al., 2017)", "ref_id": "BIBREF141" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "4." }
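The multi-hop attention described in the paragraph above follows the general end-to-end memory network recipe of Sukhbaatar et al. (2015). A minimal sketch of one such model in NumPy; the dimensions, the additive query update, and the softmax readout are generic memory-network machinery, not the authors' exact architecture:

```python
# Sketch of a multi-hop memory-network step of the kind described above:
# sentences are stored as memory vectors, an attention step weights them by
# relevance to the query, and an output projection scores candidate answers.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(0)
d, n_mem, vocab = 32, 5, 100
memory = rng.normal(size=(n_mem, d))   # embedded story sentences
query = rng.normal(size=d)             # embedded question
W_out = rng.normal(size=(vocab, d))    # output (answer) projection

for _ in range(3):                     # three attention hops
    attn = softmax(memory @ query)     # relevance of each memory to the query
    query = query + attn @ memory      # fold the attended memories into the query

answer_scores = W_out @ query          # output module scores the answer vocabulary
print(int(answer_scores.argmax()))
```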
}, { "text": "https://opennmt.net/OpenNMT/data/word_features/ 2 http://www.unicode.org/charts/unihan.html", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Jhih-Jie Chen et al", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This research was supported by Chiang Ching-Kuo Foundation for International Scholarly Exchange in Taiwan to Li-mei Chen for international collaboration with Dr. Kim Oller at the University of Memphis. A special thank you is extended to the families of the children in this longitudinal study for their support of this project.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgement", "sec_num": null }, { "text": "With the rapid advancement of machine learning and deep learning, a great breakthrough has been achieved in many areas of natural language processing in recent years. Complex language tasks, such as article classification, abstract extraction, question answering, machine translation, and image description generation, have been solved by neural networks. In this paper, we propose a new ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null }, { "text": "Please send application to:The ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "To Register\uff1a", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Neural machine translation by jointly learning to align and translate", "authors": [ { "first": "D", "middle": [], "last": "Bahdanau", "suffix": "" }, { "first": "K", "middle": [], "last": "Cho", "suffix": "" }, { "first": "Y", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXivpreprintarXiv:1409.0473" ] }, "num": null, "urls": [], "raw_text": "Bahdanau, D., Cho, K., & Bengio, Y. (2014). Neural machine translation by jointly learning to align and translate. In arXiv preprint arXiv:1409.0473.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "A new approach for automatic chinese spelling correction", "authors": [ { "first": "C.-H", "middle": [], "last": "Chang", "suffix": "" } ], "year": 1995, "venue": "Proceedings of Natural Language Processing Pacific Rim Symposium", "volume": "95", "issue": "", "pages": "278--283", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chang, C.-H. (1995). A new approach for automatic chinese spelling correction. In Proceedings of Natural Language Processing Pacific Rim Symposium, 95, 278-283.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Distraction-based neural networks for modeling documents", "authors": [ { "first": "Q", "middle": [], "last": "Chen", "suffix": "" }, { "first": "X", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Z", "middle": [], "last": "Ling", "suffix": "" }, { "first": "S", "middle": [], "last": "Wei", "suffix": "" }, { "first": "H", "middle": [], "last": "Jiang", "suffix": "" } ], "year": 2016, "venue": "Proc. of IJCAI", "volume": "", "issue": "", "pages": "2754--2760", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chen, Q., Zhu, X., Ling, Z., Wei, S., & Jiang, H. (2016). Distraction-based neural networks for modeling documents. In Proc. 
of IJCAI 2016, 2754-2760.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Neural summarization by extracting sentences and words", "authors": [ { "first": "J", "middle": [], "last": "Cheng", "suffix": "" }, { "first": "M", "middle": [], "last": "Lapata", "suffix": "" } ], "year": 2016, "venue": "Proc. of ACL", "volume": "", "issue": "", "pages": "484--494", "other_ids": { "DOI": [ "10.18653/v1/P16-1046" ] }, "num": null, "urls": [], "raw_text": "Cheng, J. & Lapata, M. (2016). Neural summarization by extracting sentences and words. In Proc. of ACL, 484-494. doi: 10.18653/v1/P16-1046", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Learning phrase representations using RNN encoder-decoder for statistical machine translation", "authors": [ { "first": "K", "middle": [], "last": "Cho", "suffix": "" }, { "first": "B", "middle": [], "last": "Van Merri\u00ebnboer", "suffix": "" }, { "first": "C", "middle": [], "last": "Gulcehre", "suffix": "" }, { "first": "D", "middle": [], "last": "Bahdanau", "suffix": "" }, { "first": "F", "middle": [], "last": "Bougares", "suffix": "" }, { "first": "H", "middle": [], "last": "Schwenk", "suffix": "" }, { "first": "Y", "middle": [], "last": "\u2026bengio", "suffix": "" } ], "year": 2014, "venue": "Proc. of EMNLP", "volume": "", "issue": "", "pages": "1724--1734", "other_ids": { "DOI": [ "10.3115/v1/D14-1179" ] }, "num": null, "urls": [], "raw_text": "Cho, K., van Merri\u00ebnboer, B., Gulcehre, C., Bahdanau, D., Bougares, F., Schwenk, H., \u2026Bengio, Y. (2014). Learning phrase representations using RNN encoder-decoder for statistical machine translation. In Proc. of EMNLP 2014, 1724-1734. doi: 10.3115/v1/D14-1179", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Abstractive Sentence Summarization with Attentive Recurrent Neural Networks", "authors": [ { "first": "S", "middle": [], "last": "Chopra", "suffix": "" }, { "first": "M", "middle": [], "last": "Auli", "suffix": "" }, { "first": "A", "middle": [ "M" ], "last": "Rush", "suffix": "" } ], "year": 2016, "venue": "Proc. of NAACL-HLT 2016", "volume": "", "issue": "", "pages": "93--98", "other_ids": { "DOI": [ "10.18653/v1/N16-1012" ] }, "num": null, "urls": [], "raw_text": "Chopra, S., Auli, M., & Rush, A. M. (2016). Abstractive Sentence Summarization with Attentive Recurrent Neural Networks. In Proc. of NAACL-HLT 2016, 93-98. doi: 10.18653/v1/N16-1012", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Hierarchical Pitman-Yor-Dirichlet language model", "authors": [ { "first": "J.-T", "middle": [], "last": "Chien", "suffix": "" } ], "year": 2015, "venue": "Speech, and Language Processing", "volume": "23", "issue": "", "pages": "1259--1272", "other_ids": { "DOI": [ "10.1109/TASLP.2015.2428632" ] }, "num": null, "urls": [], "raw_text": "Chien, J.-T. (2015). Hierarchical Pitman-Yor-Dirichlet language model. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 23(8), 1259-1272.
doi: 10.1109/TASLP.2015.2428632", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Natural language processing (almost) from scratch", "authors": [ { "first": "R", "middle": [], "last": "Collobert", "suffix": "" }, { "first": "J", "middle": [], "last": "Weston", "suffix": "" }, { "first": "L", "middle": [], "last": "Bottou", "suffix": "" }, { "first": "M", "middle": [], "last": "Karlen", "suffix": "" }, { "first": "K", "middle": [], "last": "Kavukcuoglu", "suffix": "" }, { "first": "P", "middle": [], "last": "Kuksa", "suffix": "" } ], "year": 2011, "venue": "Journal of Machine Learning Research", "volume": "12", "issue": "", "pages": "2493--2537", "other_ids": {}, "num": null, "urls": [], "raw_text": "Collobert, R., Weston, J., Bottou, L., Karlen, M., Kavukcuoglu, K., & Kuksa, P. (2011). Natural language processing (almost) from scratch. Journal of Machine Learning Research, 12, 2493-2537.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Long Short-Term Memory", "authors": [ { "first": "S", "middle": [], "last": "Hochreiter", "suffix": "" }, { "first": "J", "middle": [], "last": "Schmidhuber", "suffix": "" } ], "year": 1997, "venue": "Neural Computation", "volume": "9", "issue": "8", "pages": "1735--1780", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hochreiter, S. & Schmidhuber, J. (1997). Long Short-Term Memory. Neural Computation, 9(8), 1735-1780.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Spoken Document Retrieval Using Multilevel Knowledge and Semantic Verification", "authors": [ { "first": "C.-L", "middle": [], "last": "Huang", "suffix": "" }, { "first": "C.-H", "middle": [], "last": "Wu", "suffix": "" } ], "year": 2007, "venue": "Speech, and Language Processing", "volume": "15", "issue": "", "pages": "2551--2590", "other_ids": { "DOI": [ "10.1109/TASL.2007.907429" ] }, "num": null, "urls": [], "raw_text": "Huang, C.-L. & Wu, C.-H. (2007). Spoken Document Retrieval Using Multilevel Knowledge and Semantic Verification. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 15(8), 2551-2590. doi: 10.1109/TASL.2007.907429", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "On using very large target vocabulary for neural machine translation", "authors": [ { "first": "S", "middle": [], "last": "Jean", "suffix": "" }, { "first": "K", "middle": [], "last": "Cho", "suffix": "" }, { "first": "R", "middle": [], "last": "Memisevic", "suffix": "" }, { "first": "Y", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1412.2007" ] }, "num": null, "urls": [], "raw_text": "Jean, S., Cho, K., Memisevic, R., & Bengio, Y. (2014). On using very large target vocabulary for neural machine translation. In arXiv preprint arXiv: 1412.2007.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "A convolutional neural network for modeling sentences", "authors": [ { "first": "N", "middle": [], "last": "Kalchbrenner", "suffix": "" }, { "first": "E", "middle": [], "last": "Grefenstette", "suffix": "" }, { "first": "P", "middle": [], "last": "Blunsom", "suffix": "" } ], "year": 2014, "venue": "Proc. of ACL", "volume": "", "issue": "", "pages": "655--665", "other_ids": { "DOI": [ "10.3115/v1/P14-1062" ] }, "num": null, "urls": [], "raw_text": "Kalchbrenner, N., Grefenstette, E., & Blunsom, P. (2014). A convolutional neural network for modeling sentences. In Proc. of ACL 2014, 655-665.
doi: 10.3115/v1/P14-1062", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Convolutional neural networks for sentence classification", "authors": [ { "first": "Y", "middle": [], "last": "Kim", "suffix": "" } ], "year": 2014, "venue": "Proc. of EMNLP", "volume": "", "issue": "", "pages": "1746--1751", "other_ids": { "DOI": [ "10.3115/v1/D14-1181" ] }, "num": null, "urls": [], "raw_text": "Kim, Y. (2014). Convolutional neural networks for sentence classification. In Proc. of EMNLP 2014, 1746-1751. doi: 10.3115/v1/D14-1181", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Character-aware neural language models", "authors": [ { "first": "Y", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Y", "middle": [], "last": "Jernite", "suffix": "" }, { "first": "D", "middle": [], "last": "Sontag", "suffix": "" }, { "first": "A", "middle": [ "M" ], "last": "Rush", "suffix": "" } ], "year": 2016, "venue": "Proc. of AAAI 2016", "volume": "", "issue": "", "pages": "2741--2749", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kim, Y., Jernite, Y., Sontag, D., & Rush, A. M. (2016). Character-aware neural language models. In Proc. of AAAI 2016, 2741-2749.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Molding CNNs for text: Non-linear, non-consecutive convolutions", "authors": [ { "first": "T", "middle": [], "last": "Lei", "suffix": "" }, { "first": "R", "middle": [], "last": "Barzilay", "suffix": "" }, { "first": "T", "middle": [], "last": "Jaakkola", "suffix": "" } ], "year": 2015, "venue": "Proc. of EMNLP 2015", "volume": "", "issue": "", "pages": "1565--1575", "other_ids": { "DOI": [ "10.18653/v1/D15-1180" ] }, "num": null, "urls": [], "raw_text": "Lei, T., Barzilay, R., & Jaakkola, T. (2015). Molding CNNs for text: Non-linear, non-consecutive convolutions. In Proc. of EMNLP 2015, 1565-1575. doi: 10.18653/v1/D15-1180", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Combining Relevance Language Modeling and Clarity Measure for Extractive Speech Summarization", "authors": [ { "first": "S.-H", "middle": [], "last": "Liu", "suffix": "" }, { "first": "K.-Y", "middle": [], "last": "Chen", "suffix": "" }, { "first": "B", "middle": [], "last": "Chen", "suffix": "" }, { "first": "H.-M", "middle": [], "last": "Wang", "suffix": "" }, { "first": "H.-C", "middle": [], "last": "Yen", "suffix": "" }, { "first": "W.-L", "middle": [], "last": "Hsu", "suffix": "" } ], "year": 2015, "venue": "Speech, and Language Processing", "volume": "23", "issue": "", "pages": "957--969", "other_ids": { "DOI": [ "10.1109/TASLP.2015.2414820" ] }, "num": null, "urls": [], "raw_text": "Liu, S.-H., Chen, K.-Y., Chen, B., Wang, H.-M., Yen, H.-C., & Hsu, W.-L. (2015). Combining Relevance Language Modeling and Clarity Measure for Extractive Speech Summarization. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 23(6), 957-969. doi: 10.1109/TASLP.2015.2414820", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "SummaRuNNer: A recurrent neural network based sequence model for extractive summarization of documents", "authors": [ { "first": "R", "middle": [], "last": "Nallapati", "suffix": "" }, { "first": "F", "middle": [], "last": "Zhai", "suffix": "" }, { "first": "B", "middle": [], "last": "Zhou", "suffix": "" } ], "year": 2017, "venue": "Proc. of AAAI 2017", "volume": "", "issue": "", "pages": "3075--3081", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nallapati, R., Zhai, F., & Zhou, B. (2017). 
SummaRuNNer: A recurrent neural network based sequence model for extractive summarization of documents. In Proc. of AAAI 2017, 3075-3081.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Abstractive text summarization using sequence-to-sequence RNNs and beyond", "authors": [ { "first": "R", "middle": [], "last": "Nallapati", "suffix": "" }, { "first": "B", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "C", "middle": [], "last": "Santos", "suffix": "" }, { "first": "C", "middle": [], "last": "G\u00fcl\u00e7ehre", "suffix": "" }, { "first": "B", "middle": [], "last": "Xiang", "suffix": "" } ], "year": 2016, "venue": "Proc. of CoNLL", "volume": "", "issue": "", "pages": "280--290", "other_ids": { "DOI": [ "10.18653/v1/K16-1028" ] }, "num": null, "urls": [], "raw_text": "Nallapati, R., Zhou, B., dos Santos, C., G\u00fcl\u00e7ehre, C., & Xiang, B. (2016). Abstractive text summarization using sequence-to-sequence RNNs and beyond. In Proc. of CoNLL 2016, 280-290. doi: 10.18653/v1/K16-1028", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Ranking Sentences for Extractive Summarization with Reinforcement Learning", "authors": [ { "first": "S", "middle": [], "last": "Narayan", "suffix": "" }, { "first": "S", "middle": [ "B" ], "last": "Cohen", "suffix": "" }, { "first": "M", "middle": [], "last": "Lapata", "suffix": "" } ], "year": 2018, "venue": "Proc. of NAACL 2018", "volume": "", "issue": "", "pages": "1747--1759", "other_ids": { "DOI": [ "10.18653/v1/N18-1158" ] }, "num": null, "urls": [], "raw_text": "Narayan, S., Cohen, S. B., & Lapata, M. (2018). Ranking Sentences for Extractive Summarization with Reinforcement Learning. In Proc. of NAACL 2018, 1747-1759. doi: 10.18653/v1/N18-1158", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Document Modeling with External Attention for Sentence Extraction", "authors": [ { "first": "S", "middle": [], "last": "Narayan", "suffix": "" }, { "first": "R", "middle": [], "last": "Cardenas", "suffix": "" }, { "first": "N", "middle": [], "last": "Papasarantopoulos", "suffix": "" }, { "first": "S", "middle": [ "B" ], "last": "Cohen", "suffix": "" }, { "first": "M", "middle": [], "last": "Lapata", "suffix": "" }, { "first": "J", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Y", "middle": [], "last": "\u2026chang", "suffix": "" } ], "year": 2018, "venue": "Proc. of ACL 2018", "volume": "", "issue": "", "pages": "2020--2030", "other_ids": { "DOI": [ "10.18653/v1/P18-1188" ] }, "num": null, "urls": [], "raw_text": "Narayan, S., Cardenas, R., Papasarantopoulos, N., Cohen, S. B., Lapata, M., Yu, J., \u2026Chang, Y. (2018). Document Modeling with External Attention for Sentence Extraction. In Proc. of ACL 2018, 2020-2030. doi: 10.18653/v1/P18-1188", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Neural extractive summarization with side information", "authors": [ { "first": "S", "middle": [], "last": "Narayan", "suffix": "" }, { "first": "N", "middle": [], "last": "Papasarantopoulos", "suffix": "" }, { "first": "S", "middle": [ "B" ], "last": "Cohen", "suffix": "" }, { "first": "M", "middle": [], "last": "Lapata", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1704.04530" ] }, "num": null, "urls": [], "raw_text": "Narayan, S., Papasarantopoulos, N., Cohen, S. B., & Lapata, M. (2017). Neural extractive summarization with side information.
In arXiv preprint arXiv: 1704.04530.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "A deep reinforced model for abstractive summarization", "authors": [ { "first": "R", "middle": [], "last": "Paulus", "suffix": "" }, { "first": "C", "middle": [], "last": "Xiong", "suffix": "" }, { "first": "R", "middle": [], "last": "Socher", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXivpreprintarXiv:1705.04304" ] }, "num": null, "urls": [], "raw_text": "Paulus, R., Xiong, C., & Socher, R. (2017). A deep reinforced model for abstractive summarization. In arXiv preprint arXiv:1705.04304.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "A neural attention model for abstractive sentence summarization", "authors": [ { "first": "A", "middle": [ "M" ], "last": "Rush", "suffix": "" }, { "first": "S", "middle": [], "last": "Chopra", "suffix": "" }, { "first": "J", "middle": [], "last": "Weston", "suffix": "" } ], "year": 2015, "venue": "Proc. of EMNLP 2015", "volume": "", "issue": "", "pages": "379--389", "other_ids": { "DOI": [ "10.18653/v1/D15-1044" ] }, "num": null, "urls": [], "raw_text": "Rush, A. M., Chopra, S., & Weston, J. (2015). A neural attention model for abstractive sentence summarization. In Proc. of EMNLP 2015, 379-389. doi: 10.18653/v1/D15-1044", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Leveraging Contextual Sentence Relations for Extractive Summarization Using a Neural Attention Model", "authors": [ { "first": "P", "middle": [], "last": "Ren", "suffix": "" }, { "first": "Z", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Z", "middle": [], "last": "Ren", "suffix": "" }, { "first": "F", "middle": [], "last": "Wei", "suffix": "" }, { "first": "J", "middle": [], "last": "Ma", "suffix": "" }, { "first": "M", "middle": [], "last": "De Rijke", "suffix": "" } ], "year": 2017, "venue": "Proc. of SIGIR 2017", "volume": "", "issue": "", "pages": "95--104", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ren, P., Chen, Z., Ren, Z., Wei, F., Ma, J., & de Rijke, M. (2017). Leveraging Contextual Sentence Relations for Extractive Summarization Using a Neural Attention Model. In Proc. of SIGIR 2017, 95-104.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Get to the point: Summarization with pointer-generator networks", "authors": [ { "first": "A", "middle": [], "last": "See", "suffix": "" }, { "first": "P", "middle": [], "last": "Liu", "suffix": "" }, { "first": "C", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2017, "venue": "Proc. of ACL 2017", "volume": "", "issue": "", "pages": "1073--1083", "other_ids": { "DOI": [ "10.18653/v1/P17-1099" ] }, "num": null, "urls": [], "raw_text": "See, A., Liu, P., & Manning, C. (2017). Get to the point: Summarization with pointer-generator networks. In Proc. of ACL 2017, 1073-1083. doi: 10.18653/v1/P17-1099", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Sequence to sequence learning with neural networks", "authors": [ { "first": "I", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "O", "middle": [], "last": "Vinyals", "suffix": "" }, { "first": "Q", "middle": [ "V" ], "last": "Le", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 27 th Advances in Neural Information Processing Systems", "volume": "", "issue": "", "pages": "3104--3112", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sutskever, I., Vinyals, O., & Le, Q. V. (2014). 
Sequence to sequence learning with neural networks. In Proceedings of the 27th Advances in Neural Information Processing Systems, 3104-3112.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Reinforcement Learning: An Introduction", "authors": [ { "first": "R", "middle": [ "S" ], "last": "Sutton", "suffix": "" }, { "first": "A", "middle": [ "G" ], "last": "Barto", "suffix": "" } ], "year": 1998, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sutton, R. S. & Barto, A. G. (1998). Reinforcement Learning: An Introduction. Cambridge, MA: MIT Press.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Abstractive document summarization with a graph-based attentional neural model", "authors": [ { "first": "J", "middle": [], "last": "Tan", "suffix": "" }, { "first": "X", "middle": [], "last": "Wan", "suffix": "" }, { "first": "J", "middle": [], "last": "Xiao", "suffix": "" } ], "year": 2017, "venue": "Proc. of ACL 2017", "volume": "", "issue": "", "pages": "1171--1181", "other_ids": { "DOI": [ "10.18653/v1/P17-1108" ] }, "num": null, "urls": [], "raw_text": "Tan, J., Wan, X., & Xiao, J. (2017). Abstractive document summarization with a graph-based attentional neural model. In Proc. of ACL 2017, 1171-1181. doi: 10.18653/v1/P17-1108", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "One-class classification: Concept learning in the absence of counter-examples. Unpublished doctoral dissertation", "authors": [ { "first": "D", "middle": [ "M J" ], "last": "Tax", "suffix": "" } ], "year": 2001, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tax, D. M. J. (2001). One-class classification: Concept learning in the absence of counter-examples. Unpublished doctoral dissertation, Technische Universiteit Delft.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Automatic text summarization", "authors": [ { "first": "J", "middle": [ "M" ], "last": "Torres-Moreno", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.1002/9781119004752" ] }, "num": null, "urls": [], "raw_text": "Torres-Moreno, J. M. (2014). Automatic text summarization. Hoboken, New Jersey: John Wiley & Sons. doi: 10.1002/9781119004752", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Extractive Speech Summarization Leveraging Convolutional Neural Network Techniques", "authors": [ { "first": "C.-I", "middle": [], "last": "Tsai", "suffix": "" }, { "first": "H.-T", "middle": [], "last": "Hung", "suffix": "" }, { "first": "K.-Y", "middle": [], "last": "Chen", "suffix": "" }, { "first": "B", "middle": [], "last": "Chen", "suffix": "" } ], "year": 2016, "venue": "Proceedings of IEEE SLT 2016", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.1109/SLT.2016.7846259" ] }, "num": null, "urls": [], "raw_text": "Tsai, C.-I., Hung, H.-T., Chen, K.-Y., & Chen, B. (2016). Extractive Speech Summarization Leveraging Convolutional Neural Network Techniques. In Proceedings of IEEE SLT 2016.
doi: 10.1109/SLT.2016.7846259", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Pointer Networks", "authors": [ { "first": "O", "middle": [], "last": "Vinyals", "suffix": "" }, { "first": "M", "middle": [], "last": "Fortunato", "suffix": "" }, { "first": "N", "middle": [], "last": "Jaitly", "suffix": "" } ], "year": 2015, "venue": "Proceedings of Advances in Neural Information Processing Systems", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vinyals, O., Fortunato, M., & Jaitly, N. (2015). Pointer Networks. In Proceedings of Advances in Neural Information Processing Systems 2015.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "MATBN: A Mandarin Chinese Broadcast News Corpus", "authors": [ { "first": "H.-M", "middle": [], "last": "Wang", "suffix": "" }, { "first": "B", "middle": [], "last": "Chen", "suffix": "" }, { "first": "J.-W", "middle": [], "last": "Kuo", "suffix": "" }, { "first": "S.-S", "middle": [], "last": "Cheng", "suffix": "" } ], "year": 2005, "venue": "International Journal of Computational Linguistics & Chinese Language Processing", "volume": "10", "issue": "2", "pages": "219--236", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wang, H.-M., Chen, B., Kuo, J.-W., & Cheng, S.-S. (2005). MATBN: A Mandarin Chinese Broadcast News Corpus. International Journal of Computational Linguistics & Chinese Language Processing, 10(2), 219-236.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Character-level convolutional networks for text classification", "authors": [ { "first": "X", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "J", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Y", "middle": [], "last": "Lecun", "suffix": "" } ], "year": 2015, "venue": "Proceedings of Advances in Neural Information Processing Systems", "volume": "", "issue": "", "pages": "649--657", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhang, X., Zhao, J., & LeCun, Y. (2015). Character-level convolutional networks for text classification. In Proceedings of Advances in Neural Information Processing Systems 2015, 649-657.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Selective Encoding for Abstractive Sentence Summarization", "authors": [ { "first": "Q", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "N", "middle": [], "last": "Yang", "suffix": "" }, { "first": "F", "middle": [], "last": "Wei", "suffix": "" }, { "first": "M", "middle": [], "last": "Zhou", "suffix": "" } ], "year": 2017, "venue": "Proc. of ACL 2017", "volume": "", "issue": "", "pages": "1095--1104", "other_ids": { "DOI": [ "10.18653/v1/P17-1101" ] }, "num": null, "urls": [], "raw_text": "Zhou, Q., Yang, N., Wei, F., & Zhou, M. (2017). Selective Encoding for Abstractive Sentence Summarization. In Proc. of ACL 2017, 1095-1104. doi: 10.18653/v1/P17-1101", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Neural machine translation by jointly learning to align and translate", "authors": [ { "first": "D", "middle": [], "last": "Bahdanau", "suffix": "" }, { "first": "K", "middle": [ "H" ], "last": "Cho", "suffix": "" }, { "first": "Y", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2015, "venue": "Proceedings of ICLR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bahdanau, D., Cho, K.H., & Bengio, Y. (2015).
Neural machine translation by jointly learning to align and translate. In Proceedings of ICLR 2015.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Information credibility on Twitter", "authors": [ { "first": "C", "middle": [], "last": "Castillo", "suffix": "" }, { "first": "M", "middle": [], "last": "Mendoza", "suffix": "" }, { "first": "B", "middle": [], "last": "Poblete", "suffix": "" } ], "year": 2011, "venue": "Proceedings of WWW 2011", "volume": "", "issue": "", "pages": "675--684", "other_ids": { "DOI": [ "10.1145/1963405.1963500" ] }, "num": null, "urls": [], "raw_text": "Castillo, C., Mendoza, M., & Poblete, B. (2011). Information credibility on Twitter. In Proceedings of WWW 2011, 675-684. doi: 10.1145/1963405.1963500", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Call attention to rumors: Deep attention based recurrent neural networks for early rumor detection", "authors": [ { "first": "T", "middle": [], "last": "Chen", "suffix": "" }, { "first": "X", "middle": [], "last": "Li", "suffix": "" }, { "first": "H", "middle": [], "last": "Yin", "suffix": "" }, { "first": "J", "middle": [], "last": "Zhang", "suffix": "" } ], "year": 2018, "venue": "Proceedings of Pacific-Asia Conference on Knowledge Discovery and Data Mining", "volume": "", "issue": "", "pages": "40--52", "other_ids": { "DOI": [ "10.1007/978-3-030-04503-6_4" ] }, "num": null, "urls": [], "raw_text": "Chen, T., Li, X., Yin, H., & Zhang, J. (2018). Call attention to rumors: Deep attention based recurrent neural networks for early rumor detection. In Proceedings of Pacific-Asia Conference on Knowledge Discovery and Data Mining 2018, 40-52. doi: 10.1007/978-3-030-04503-6_4", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation", "authors": [ { "first": "K", "middle": [], "last": "Cho", "suffix": "" }, { "first": "B", "middle": [], "last": "Merri\u00ebnboer", "suffix": "" }, { "first": "C", "middle": [], "last": "Van, Gulcehre", "suffix": "" }, { "first": "D", "middle": [], "last": "Bahdanau", "suffix": "" }, { "first": "F", "middle": [], "last": "Bougares", "suffix": "" }, { "first": "H", "middle": [], "last": "Schwenk", "suffix": "" }, { "first": "Y", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1724--1734", "other_ids": { "DOI": [ "10.3115/v1/D14-1179" ] }, "num": null, "urls": [], "raw_text": "Cho, K., Merri\u00ebnboer, B. van, Gulcehre, C., Bahdanau, D., Bougares, F., Schwenk, H., \u2026 Bengio, Y. (2014). Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), 1724-1734. doi: 10.3115/v1/D14-1179", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "Empirical evaluation of gated recurrent neural networks on sequence modeling", "authors": [ { "first": "J", "middle": [], "last": "Chung", "suffix": "" }, { "first": "C", "middle": [], "last": "Gulcehre", "suffix": "" }, { "first": "K", "middle": [], "last": "Cho", "suffix": "" }, { "first": "Y", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2014, "venue": "Proceedings of NIPS", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chung, J., Gulcehre, C., Cho, K., & Bengio, Y. (2014). 
Empirical evaluation of gated recurrent neural networks on sequence modeling. In Proceedings of NIPS 2014.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "Finding structure in time", "authors": [ { "first": "J", "middle": [ "L" ], "last": "Elman", "suffix": "" } ], "year": 1990, "venue": "Cognitive Science", "volume": "14", "issue": "2", "pages": "179--211", "other_ids": { "DOI": [ "10.1016/0364-0213(90)90002-E" ] }, "num": null, "urls": [], "raw_text": "Elman, J. L. (1990). Finding structure in time. Cognitive Science, 14(2), 179-211. doi: 10.1016/0364-0213(90)90002-E", "links": null }, "BIBREF42": { "ref_id": "b42", "title": "SENTIWORDNET: A Publicly Available Lexical Resource for Opinion Mining", "authors": [ { "first": "A", "middle": [], "last": "Esuli", "suffix": "" }, { "first": "F", "middle": [], "last": "Sebastiani", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the Fifth International Conference on Language Resources and Evaluation (LREC'06)", "volume": "", "issue": "", "pages": "417--422", "other_ids": {}, "num": null, "urls": [], "raw_text": "Esuli, A. & Sebastiani, F. (2006). SENTIWORDNET: A Publicly Available Lexical Resource for Opinion Mining. In Proceedings of the Fifth International Conference on Language Resources and Evaluation (LREC'06), 417-422.", "links": null }, "BIBREF43": { "ref_id": "b43", "title": "Convolutional sequence to sequence learning", "authors": [ { "first": "J", "middle": [], "last": "Gehring", "suffix": "" }, { "first": "M", "middle": [], "last": "Auli", "suffix": "" }, { "first": "D", "middle": [], "last": "Grangier", "suffix": "" }, { "first": "D", "middle": [], "last": "Yarats", "suffix": "" }, { "first": "Y", "middle": [ "N" ], "last": "Dauphin", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 34th International Conference on Machine Learning", "volume": "", "issue": "", "pages": "1243--1252", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gehring, J., Auli, M., Grangier, D., Yarats, D., & Dauphin, Y. N. (2017). Convolutional sequence to sequence learning. In Proceedings of the 34th International Conference on Machine Learning 2017, 1243-1252.", "links": null }, "BIBREF44": { "ref_id": "b44", "title": "Supervised sequence labelling with Recurrent Neural Networks", "authors": [ { "first": "A", "middle": [], "last": "Graves", "suffix": "" } ], "year": 2012, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Graves, A. (2012). Supervised sequence labelling with Recurrent Neural Networks (p. 26). Heidelberg, Germany: Springer.", "links": null }, "BIBREF45": { "ref_id": "b45", "title": "Evaluating event credibility on Twitter", "authors": [ { "first": "M", "middle": [], "last": "Gupta", "suffix": "" }, { "first": "P", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "J", "middle": [], "last": "Han", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the 12th SIAM International Conference on Data Mining, SDM 2012", "volume": "", "issue": "", "pages": "153--164", "other_ids": { "DOI": [ "10.1137/1.9781611972825.14" ] }, "num": null, "urls": [], "raw_text": "Gupta, M., Zhao, P., & Han, J. (2012). Evaluating event credibility on Twitter. In Proceedings of the 12th SIAM International Conference on Data Mining, SDM 2012, 153-164.
doi: 10.1137/1.9781611972825.14", "links": null }, "BIBREF46": { "ref_id": "b46", "title": "Multimodal fusion with recurrent neural networks for rumor detection on microblogs", "authors": [ { "first": "Z", "middle": [], "last": "Jin", "suffix": "" }, { "first": "J", "middle": [], "last": "Cao", "suffix": "" }, { "first": "H", "middle": [], "last": "Guo", "suffix": "" }, { "first": "Y", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "J", "middle": [], "last": "Luo", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 25th ACM international conference on Multimedia", "volume": "", "issue": "", "pages": "795--816", "other_ids": { "DOI": [ "10.1145/3123266.3123454" ] }, "num": null, "urls": [], "raw_text": "Jin, Z., Cao, J., Guo, H., Zhang, Y., & Luo, J. (2017). Multimodal fusion with recurrent neural networks for rumor detection on microblogs. In Proceedings of the 25th ACM international conference on Multimedia 2017, 795-816. doi: 10.1145/3123266.3123454", "links": null }, "BIBREF47": { "ref_id": "b47", "title": "Deep learning", "authors": [ { "first": "Y", "middle": [], "last": "Lecun", "suffix": "" }, { "first": "Y", "middle": [], "last": "Bengio", "suffix": "" }, { "first": "G", "middle": [], "last": "Hinton", "suffix": "" } ], "year": 2015, "venue": "Nature", "volume": "521", "issue": "", "pages": "436--444", "other_ids": { "DOI": [ "10.1038/nature14539" ] }, "num": null, "urls": [], "raw_text": "LeCun, Y., Bengio, Y., & Hinton, G. (2015). Deep learning. Nature, 521, 436-444. doi: 10.1038/nature14539", "links": null }, "BIBREF48": { "ref_id": "b48", "title": "Gradient-based learning applied to document recognition", "authors": [ { "first": "Y", "middle": [], "last": "Lecun", "suffix": "" }, { "first": "L", "middle": [], "last": "Bottou", "suffix": "" }, { "first": "Y", "middle": [], "last": "Bengio", "suffix": "" }, { "first": "P", "middle": [], "last": "Haffner", "suffix": "" } ], "year": 1998, "venue": "Proceedings of the IEEE", "volume": "86", "issue": "11", "pages": "2278--2324", "other_ids": { "DOI": [ "10.1109/5.726791" ] }, "num": null, "urls": [], "raw_text": "LeCun, Y., Bottou, L., Bengio, Y., & Haffner, P. (1998). Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11), 2278-2324. doi: 10.1109/5.726791", "links": null }, "BIBREF49": { "ref_id": "b49", "title": "Detecting rumors from microblogs with recurrent neural networks", "authors": [ { "first": "J", "middle": [], "last": "Ma", "suffix": "" }, { "first": "W", "middle": [], "last": "Gao", "suffix": "" }, { "first": "P", "middle": [], "last": "Mitra", "suffix": "" }, { "first": "S", "middle": [], "last": "Kwon", "suffix": "" }, { "first": "B", "middle": [ "J" ], "last": "Jansen", "suffix": "" }, { "first": "K.-F", "middle": [], "last": "Wong", "suffix": "" }, { "first": "M", "middle": [], "last": "Cha", "suffix": "" } ], "year": 2016, "venue": "Proceedings of IJCAI 2016", "volume": "", "issue": "", "pages": "3818--3824", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ma, J., Gao, W., Mitra, P., Kwon, S., Jansen, B. J., Wong, K.-F., \u2026 Cha, M. (2016). Detecting rumors from microblogs with recurrent neural networks. 
In Proceedings of IJCAI 2016, 3818-3824.", "links": null }, "BIBREF50": { "ref_id": "b50", "title": "Detect rumor and stance jointly by neural multi-task learning", "authors": [ { "first": "J", "middle": [], "last": "Ma", "suffix": "" }, { "first": "W", "middle": [], "last": "Gao", "suffix": "" }, { "first": "K.-F", "middle": [], "last": "Wong", "suffix": "" } ], "year": 2018, "venue": "Proceedings of The Web Conference", "volume": "", "issue": "", "pages": "585--593", "other_ids": { "DOI": [ "10.1145/3184558.3188729" ] }, "num": null, "urls": [], "raw_text": "Ma, J., Gao, W., & Wong, K.-F. (2018). Detect rumor and stance jointly by neural multi-task learning. In Proceedings of The Web Conference 2018, 585-593. doi: 10.1145/3184558.3188729", "links": null }, "BIBREF51": { "ref_id": "b51", "title": "Recurrent neural network based language model", "authors": [ { "first": "T", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "M", "middle": [], "last": "Karafi\u00e1t", "suffix": "" }, { "first": "L", "middle": [], "last": "Burget", "suffix": "" }, { "first": "J", "middle": [], "last": "\u010cernock\u00fd", "suffix": "" }, { "first": "S", "middle": [], "last": "Khudanpur", "suffix": "" } ], "year": 2010, "venue": "Proceedings of INTERSPEECH 2010", "volume": "", "issue": "", "pages": "1045--1048", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mikolov, T., Karafi\u00e1t, M., Burget, L., \u010cernock\u00fd, J., & Khudanpur, S. (2010). Recurrent neural network based language model. In Proceedings of INTERSPEECH 2010, 1045-1048.", "links": null }, "BIBREF52": { "ref_id": "b52", "title": "Recurrent models of visual attention", "authors": [ { "first": "V", "middle": [], "last": "Mnih", "suffix": "" }, { "first": "N", "middle": [], "last": "Heess", "suffix": "" }, { "first": "A", "middle": [], "last": "Graves", "suffix": "" }, { "first": "K", "middle": [], "last": "Kavukcuoglu", "suffix": "" } ], "year": 2014, "venue": "Proceedings of neural information processing systems", "volume": "", "issue": "", "pages": "2204--2212", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mnih, V., Heess, N., Graves, A. & Kavukcuoglu, K. (2014). Recurrent models of visual attention. In Proceedings of neural information processing systems 2014, 2204-2212.", "links": null }, "BIBREF53": { "ref_id": "b53", "title": "BLEU: a method for automatic evaluation of machine translation", "authors": [ { "first": "K", "middle": [], "last": "Papineni", "suffix": "" }, { "first": "S", "middle": [], "last": "Roukos", "suffix": "" }, { "first": "T", "middle": [], "last": "Ward", "suffix": "" }, { "first": "W.-J", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 40th annual meeting on association for computational linguistics 2002", "volume": "", "issue": "", "pages": "311--318", "other_ids": { "DOI": [ "10.3115/1073083.1073135" ] }, "num": null, "urls": [], "raw_text": "Papineni, K., Roukos, S., Ward, T., & Zhu, W.-J. (2002). BLEU: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting on association for computational linguistics 2002, 311-318. 
doi: 10.3115/1073083.1073135", "links": null }, "BIBREF54": { "ref_id": "b54", "title": "Bidirectional recurrent neural networks", "authors": [ { "first": "M", "middle": [], "last": "Schuster", "suffix": "" }, { "first": "K", "middle": [ "K" ], "last": "Paliwal", "suffix": "" } ], "year": 1997, "venue": "IEEE Transactions on Signal Processing", "volume": "45", "issue": "11", "pages": "2673--2681", "other_ids": { "DOI": [ "10.1109/78.650093" ] }, "num": null, "urls": [], "raw_text": "Schuster, M. & Paliwal, K. K. (1997). Bidirectional recurrent neural networks. IEEE Transactions on Signal Processing, 45(11), 2673-2681. doi: 10.1109/78.650093", "links": null }, "BIBREF55": { "ref_id": "b55", "title": "Very deep convolutional networks for large-scale image recognition", "authors": [ { "first": "K", "middle": [], "last": "Simonyan", "suffix": "" }, { "first": "A", "middle": [], "last": "Zisserman", "suffix": "" } ], "year": 2015, "venue": "Proceedings of ICLR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Simonyan, K. & Zisserman, A. (2015). Very deep convolutional networks for large-scale image recognition. In Proceedings of ICLR 2015.", "links": null }, "BIBREF56": { "ref_id": "b56", "title": "Sequence to sequence learning with neural networks", "authors": [ { "first": "I", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "O", "middle": [], "last": "Vinyals", "suffix": "" }, { "first": "Q", "middle": [ "V" ], "last": "Le", "suffix": "" } ], "year": 2014, "venue": "Proceedings of neural information processing systems", "volume": "", "issue": "", "pages": "3104--3112", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sutskever, I., Vinyals, O., & Le, Q. V. (2014) Sequence to sequence learning with neural networks. In Proceedings of neural information processing systems 2014, 3104-3112.", "links": null }, "BIBREF57": { "ref_id": "b57", "title": "Going deeper with convolutions", "authors": [ { "first": "C", "middle": [], "last": "Szegedy", "suffix": "" }, { "first": "W", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Y", "middle": [], "last": "Jia", "suffix": "" }, { "first": "P", "middle": [], "last": "Sermanet", "suffix": "" }, { "first": "S", "middle": [], "last": "Reed", "suffix": "" }, { "first": "D", "middle": [], "last": "Anguelov", "suffix": "" }, { "first": "A", "middle": [], "last": "Rabinovich", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the IEEE conference on computer vision and pattern recognition", "volume": "", "issue": "", "pages": "1--9", "other_ids": { "DOI": [ "10.1109/CVPR.2015.7298594" ] }, "num": null, "urls": [], "raw_text": "Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., \u2026 Rabinovich, A. (2015). Going deeper with convolutions. In Proceedings of the IEEE conference on computer vision and pattern recognition 2015, 1-9. 
doi: 10.1109/CVPR.2015.7298594", "links": null }, "BIBREF58": { "ref_id": "b58", "title": "Deep semantic role labeling with self-attention", "authors": [ { "first": "Z", "middle": [], "last": "Tan", "suffix": "" }, { "first": "M", "middle": [], "last": "Wang", "suffix": "" }, { "first": "J", "middle": [], "last": "Xie", "suffix": "" }, { "first": "Y", "middle": [], "last": "Chen", "suffix": "" }, { "first": "X", "middle": [], "last": "Shi", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "4929--4936", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tan, Z., Wang, M., Xie, J., Chen, Y., & Shi, X. (2018). Deep semantic role labeling with self-attention. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence 2018, 4929-4936.", "links": null }, "BIBREF59": { "ref_id": "b59", "title": "Attention is all you need", "authors": [ { "first": "A", "middle": [], "last": "Vaswani", "suffix": "" }, { "first": "N", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "N", "middle": [], "last": "Parmar", "suffix": "" }, { "first": "J", "middle": [], "last": "Uszkoreit", "suffix": "" }, { "first": "L", "middle": [], "last": "Jones", "suffix": "" }, { "first": "A", "middle": [ "N" ], "last": "Gomez", "suffix": "" }, { "first": "I", "middle": [], "last": "\u2026polosukhin", "suffix": "" } ], "year": 2017, "venue": "Proceedings of neural information processing systems 2017", "volume": "", "issue": "", "pages": "5998--6008", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., \u2026Polosukhin, I. (2017). Attention is all you need. In Proceedings of neural information processing systems 2017, 5998-6008.", "links": null }, "BIBREF60": { "ref_id": "b60", "title": "Show and tell: A neural image caption generator", "authors": [ { "first": "O", "middle": [], "last": "Vinyals", "suffix": "" }, { "first": "A", "middle": [], "last": "Toshev", "suffix": "" }, { "first": "S", "middle": [], "last": "Bengio", "suffix": "" }, { "first": "D", "middle": [], "last": "Erhan", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the IEEE conference on computer vision and pattern recognition", "volume": "", "issue": "", "pages": "3156--3164", "other_ids": { "DOI": [ "10.1109/CVPR.2015.7298935" ] }, "num": null, "urls": [], "raw_text": "Vinyals, O., Toshev, A., Bengio, S., & Erhan, D. (2015). Show and tell: A neural image caption generator. In Proceedings of the IEEE conference on computer vision and pattern recognition 2015, 3156-3164. doi: 10.1109/CVPR.2015.7298935", "links": null }, "BIBREF61": { "ref_id": "b61", "title": "Topic aware neural response generation", "authors": [ { "first": "C", "middle": [], "last": "Xing", "suffix": "" }, { "first": "W", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Y", "middle": [], "last": "Wu", "suffix": "" }, { "first": "J", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Y", "middle": [], "last": "Huang", "suffix": "" }, { "first": "M", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "", "middle": [], "last": "Ma", "suffix": "" } ], "year": 2017, "venue": "Proceedings of Thirty-First AAAI Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "3351--3357", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xing, C., Wu, W., Wu, Y., Liu, J., Huang, Y., Zhou, M., \u2026 Ma, W.-Y. (2017). 
Topic aware neural response generation. In Proceedings of Thirty-First AAAI Conference on Artificial Intelligence 2017, 3351-3357.", "links": null }, "BIBREF62": { "ref_id": "b62", "title": "Show, attend and tell: Neural image caption generation with visual attention", "authors": [ { "first": "K", "middle": [], "last": "Xu", "suffix": "" }, { "first": "J", "middle": [], "last": "Ba", "suffix": "" }, { "first": "R", "middle": [], "last": "Kiros", "suffix": "" }, { "first": "K", "middle": [], "last": "Cho", "suffix": "" }, { "first": "A", "middle": [], "last": "Courville", "suffix": "" }, { "first": "R", "middle": [], "last": "Salakhutdinov", "suffix": "" }, { "first": "Y", "middle": [], "last": "\u2026bengio", "suffix": "" } ], "year": 2015, "venue": "Proceedings of International conference on machine learning 2015", "volume": "", "issue": "", "pages": "2048--2057", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xu, K., Ba, J., Kiros, R., Cho, K., Courville, A., Salakhutdinov, R., \u2026Bengio, Y. (2015). Show, attend and tell: Neural image caption generation with visual attention. In Proceedings of International conference on machine learning 2015, 2048-2057.", "links": null }, "BIBREF63": { "ref_id": "b63", "title": "A convolutional approach for misinformation identification", "authors": [ { "first": "F", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Q", "middle": [], "last": "Liu", "suffix": "" }, { "first": "S", "middle": [], "last": "Wu", "suffix": "" }, { "first": "L", "middle": [], "last": "Wang", "suffix": "" }, { "first": "T", "middle": [], "last": "Tan", "suffix": "" } ], "year": 2017, "venue": "Proceedings of IJCAI 2017", "volume": "", "issue": "", "pages": "3901--3907", "other_ids": { "DOI": [ "10.24963/ijcai.2017/545" ] }, "num": null, "urls": [], "raw_text": "Yu, F., Liu, Q., Wu, S., Wang, L., & Tan, T. (2017). A convolutional approach for misinformation identification. In Proceedings of IJCAI 2017, 3901-3907. doi: 10.24963/ijcai.2017/545", "links": null }, "BIBREF64": { "ref_id": "b64", "title": "A Multi-task Learning Approach for Image Captioning", "authors": [ { "first": "W", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "B", "middle": [], "last": "Wang", "suffix": "" }, { "first": "J", "middle": [], "last": "Ye", "suffix": "" }, { "first": "M", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Z", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "R", "middle": [], "last": "Luo", "suffix": "" }, { "first": "Y", "middle": [], "last": "Qiao", "suffix": "" } ], "year": 2018, "venue": "Proceedings of IJCAI 2018", "volume": "", "issue": "", "pages": "1205--1211", "other_ids": { "DOI": [ "10.24963/ijcai.2018/168" ] }, "num": null, "urls": [], "raw_text": "Zhao, W., Wang, B., Ye, J., Yang, M., Zhao, Z., Luo, R., \u2026 Qiao, Y. (2018). A Multi-task Learning Approach for Image Captioning. In Proceedings of IJCAI 2018, 1205-1211.
doi: 10.24963/ijcai.2018/168", "links": null }, "BIBREF123": { "ref_id": "b123", "title": "Neural Machine Translation by Jointly Learning to Align and Translate", "authors": [ { "first": "D", "middle": [], "last": "Bahdanau", "suffix": "" }, { "first": "K", "middle": [], "last": "Cho", "suffix": "" }, { "first": "Y", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 3rd International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bahdanau, D., Cho, K., & Bengio, Y. (2015). Neural Machine Translation by Jointly Learning to Align and Translate. In Proceedings of the 3rd International Conference on Learning Representations (ICLR 2015).", "links": null }, "BIBREF124": { "ref_id": "b124", "title": "RelNet: End-to-End Modeling of Entities and Relations", "authors": [ { "first": "T", "middle": [], "last": "Bansal", "suffix": "" }, { "first": "A", "middle": [], "last": "Neelakantan", "suffix": "" }, { "first": "A", "middle": [], "last": "McCallum", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1706.07179" ] }, "num": null, "urls": [], "raw_text": "Bansal, T., Neelakantan, A., & McCallum, A. (2017). RelNet: End-to-End Modeling of Entities and Relations.
In arXiv preprint arXiv:1706.07179.", "links": null }, "BIBREF125": { "ref_id": "b125", "title": "Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation", "authors": [ { "first": "K", "middle": [], "last": "Cho", "suffix": "" }, { "first": "B", "middle": [], "last": "Van Merri\u00ebnboer", "suffix": "" }, { "first": "C", "middle": [], "last": "Gulcehre", "suffix": "" }, { "first": "D", "middle": [], "last": "Bahdanau", "suffix": "" }, { "first": "F", "middle": [], "last": "Bougares", "suffix": "" }, { "first": "H", "middle": [], "last": "Schwenk", "suffix": "" }, { "first": "Y", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1724--1734", "other_ids": { "DOI": [ "10.3115/v1/D14-1179" ] }, "num": null, "urls": [], "raw_text": "Cho, K., van Merri\u00ebnboer, B., Gulcehre, C., Bahdanau, D., Bougares, F., Schwenk, H., \u2026Bengio, Y. (2014). Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), 1724-1734. doi: 10.3115/v1/D14-1179", "links": null }, "BIBREF126": { "ref_id": "b126", "title": "Empirical Evaluation of Gated Recurrent Neural Networks on Sequence Modeling", "authors": [ { "first": "J", "middle": [], "last": "Chung", "suffix": "" }, { "first": "C", "middle": [], "last": "Gulcehre", "suffix": "" }, { "first": "K", "middle": [], "last": "Cho", "suffix": "" }, { "first": "Y", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1412.3555" ] }, "num": null, "urls": [], "raw_text": "Chung, J., Gulcehre, C., Cho, K., & Bengio, Y. (2014). Empirical Evaluation of Gated Recurrent Neural Networks on Sequence Modeling. In arXiv preprint arXiv:1412.3555.", "links": null }, "BIBREF127": { "ref_id": "b127", "title": "Attention-over-Attention Neural Networks for Reading Comprehension", "authors": [ { "first": "Y", "middle": [], "last": "Cui", "suffix": "" }, { "first": "Z", "middle": [], "last": "Chen", "suffix": "" }, { "first": "S", "middle": [], "last": "Wei", "suffix": "" }, { "first": "S", "middle": [], "last": "Wang", "suffix": "" }, { "first": "T", "middle": [], "last": "Liu", "suffix": "" }, { "first": "G", "middle": [], "last": "Hu", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cui, Y., Chen, Z., Wei, S., Wang, S., Liu, T., & Hu, G. (2017). Attention-over-Attention Neural Networks for Reading Comprehension. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL 2017).", "links": null }, "BIBREF129": { "ref_id": "b129", "title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", "authors": [ { "first": "J", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "M.-W", "middle": [], "last": "Chang", "suffix": "" }, { "first": "K", "middle": [], "last": "Lee", "suffix": "" }, { "first": "K", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1810.04805" ] }, "num": null, "urls": [], "raw_text": "Devlin, J., Chang, M.-W., Lee, K., & Toutanova, K. (2018).
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In arXiv preprint arXiv:1810.04805.", "links": null }, "BIBREF130": { "ref_id": "b130", "title": "Finding structure in time", "authors": [ { "first": "J", "middle": [ "L" ], "last": "Elman", "suffix": "" } ], "year": 1990, "venue": "Cognitive Science", "volume": "14", "issue": "2", "pages": "179--211", "other_ids": { "DOI": [ "10.1016/0364-0213(90)90002-E" ] }, "num": null, "urls": [], "raw_text": "Elman, J. L. (1990). Finding structure in time. Cognitive Science, 14(2), 179-211. doi: 10.1016/0364-0213(90)90002-E", "links": null }, "BIBREF131": { "ref_id": "b131", "title": "Tracking the World State with Recurrent Entity Networks", "authors": [ { "first": "M", "middle": [], "last": "Henaff", "suffix": "" }, { "first": "J", "middle": [], "last": "Weston", "suffix": "" }, { "first": "A", "middle": [], "last": "Szlam", "suffix": "" }, { "first": "A", "middle": [], "last": "Bordes", "suffix": "" }, { "first": "Y", "middle": [], "last": "LeCun", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 5th International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Henaff, M., Weston, J., Szlam, A., Bordes, A., & LeCun, Y. (2017). Tracking the World State with Recurrent Entity Networks. In Proceedings of the 5th International Conference on Learning Representations (ICLR 2017).", "links": null }, "BIBREF132": { "ref_id": "b132", "title": "Long Short-Term Memory", "authors": [ { "first": "S", "middle": [], "last": "Hochreiter", "suffix": "" }, { "first": "J", "middle": [], "last": "Schmidhuber", "suffix": "" } ], "year": 1997, "venue": "Neural Computation", "volume": "9", "issue": "8", "pages": "1735--1780", "other_ids": { "DOI": [ "10.1162/neco.1997.9.8.1735" ] }, "num": null, "urls": [], "raw_text": "Hochreiter, S., & Schmidhuber, J. (1997). Long Short-Term Memory. Neural Computation, 9(8), 1735-1780. doi: 10.1162/neco.1997.9.8.1735", "links": null }, "BIBREF133": { "ref_id": "b133", "title": "Ask Me Anything: Dynamic Memory Networks for Natural Language Processing", "authors": [ { "first": "A", "middle": [], "last": "Kumar", "suffix": "" }, { "first": "O", "middle": [], "last": "Irsoy", "suffix": "" }, { "first": "P", "middle": [], "last": "Ondruska", "suffix": "" }, { "first": "M", "middle": [], "last": "Iyyer", "suffix": "" }, { "first": "J", "middle": [], "last": "Bradbury", "suffix": "" }, { "first": "I", "middle": [], "last": "Gulrajani", "suffix": "" }, { "first": "R", "middle": [], "last": "Socher", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 33rd International Conference on Machine Learning", "volume": "48", "issue": "", "pages": "1378--1387", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kumar, A., Irsoy, O., Ondruska, P., Iyyer, M., Bradbury, J., Gulrajani, I., \u2026Socher, R. (2016). Ask Me Anything: Dynamic Memory Networks for Natural Language Processing.
In Proceedings of the 33rd International Conference on Machine Learning, 48, 1378-1387.", "links": null }, "BIBREF135": { "ref_id": "b135", "title": "ALBERT: A Lite BERT for Self-supervised Learning of Language Representations", "authors": [ { "first": "Z", "middle": [], "last": "Lan", "suffix": "" }, { "first": "M", "middle": [], "last": "Chen", "suffix": "" }, { "first": "S", "middle": [], "last": "Goodman", "suffix": "" }, { "first": "K", "middle": [], "last": "Gimpel", "suffix": "" }, { "first": "P", "middle": [], "last": "Sharma", "suffix": "" }, { "first": "R", "middle": [], "last": "Soricut", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1909.11942" ] }, "num": null, "urls": [], "raw_text": "Lan, Z., Chen, M., Goodman, S., Gimpel, K., Sharma, P., & Soricut, R. (2019). ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. In arXiv preprint arXiv:1909.11942.", "links": null }, "BIBREF136": { "ref_id": "b136", "title": "Key-Value Memory Networks for Directly Reading Documents", "authors": [ { "first": "A", "middle": [], "last": "Miller", "suffix": "" }, { "first": "A", "middle": [], "last": "Fisch", "suffix": "" }, { "first": "J", "middle": [], "last": "Dodge", "suffix": "" }, { "first": "A.-H", "middle": [], "last": "Karimi", "suffix": "" }, { "first": "A", "middle": [], "last": "Bordes", "suffix": "" }, { "first": "J", "middle": [], "last": "Weston", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1400--1409", "other_ids": { "DOI": [ "10.18653/v1/D16-1147" ] }, "num": null, "urls": [], "raw_text": "Miller, A., Fisch, A., Dodge, J., Karimi, A.-H., Bordes, A., & Weston, J. (2016). Key-Value Memory Networks for Directly Reading Documents. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, 1400-1409. doi: 10.18653/v1/D16-1147", "links": null }, "BIBREF138": { "ref_id": "b138", "title": "Reasoning with Memory Augmented Neural Networks for Language Comprehension", "authors": [ { "first": "T", "middle": [], "last": "Munkhdalai", "suffix": "" }, { "first": "H", "middle": [], "last": "Yu", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Munkhdalai, T., & Yu, H. (2016). Reasoning with Memory Augmented Neural Networks for Language Comprehension.
In arXiv preprint arXiv:1610.06454.", "links": null }, "BIBREF140": { "ref_id": "b140", "title": "A simple neural network module for relational reasoning", "authors": [ { "first": "A", "middle": [], "last": "Santoro", "suffix": "" }, { "first": "D", "middle": [], "last": "Raposo", "suffix": "" }, { "first": "D", "middle": [ "G T" ], "last": "Barrett", "suffix": "" }, { "first": "M", "middle": [], "last": "Malinowski", "suffix": "" }, { "first": "R", "middle": [], "last": "Pascanu", "suffix": "" }, { "first": "P", "middle": [], "last": "Battaglia", "suffix": "" }, { "first": "T", "middle": [], "last": "Lillicrap", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Santoro, A., Raposo, D., Barrett, D. G. T., Malinowski, M., Pascanu, R., Battaglia, P., \u2026 Lillicrap, T. (2017). A simple neural network module for relational reasoning. In arXiv preprint arXiv:1706.01427.", "links": null }, "BIBREF141": { "ref_id": "b141", "title": "Query-Reduction Networks for Question Answering", "authors": [ { "first": "M", "middle": [ "J" ], "last": "Seo", "suffix": "" }, { "first": "S", "middle": [], "last": "Min", "suffix": "" }, { "first": "A", "middle": [], "last": "Farhadi", "suffix": "" }, { "first": "H", "middle": [], "last": "Hajishirzi", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 5th International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Seo, M. J., Min, S., Farhadi, A., & Hajishirzi, H. (2017). Query-Reduction Networks for Question Answering. In Proceedings of the 5th International Conference on Learning Representations (ICLR 2017).", "links": null }, "BIBREF142": { "ref_id": "b142", "title": "End-to-End Memory Networks", "authors": [ { "first": "S", "middle": [], "last": "Sukhbaatar", "suffix": "" }, { "first": "A", "middle": [], "last": "Szlam", "suffix": "" }, { "first": "J", "middle": [], "last": "Weston", "suffix": "" }, { "first": "R", "middle": [], "last": "Fergus", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 28th International Conference on Neural Information Processing Systems", "volume": "2", "issue": "", "pages": "2440--2448", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sukhbaatar, S., Szlam, A., Weston, J., & Fergus, R. (2015). End-to-End Memory Networks. In Proceedings of the 28th International Conference on Neural Information Processing Systems, 2, 2440-2448.", "links": null }, "BIBREF143": { "ref_id": "b143", "title": "Natural Language Comprehension with the EpiReader", "authors": [ { "first": "A", "middle": [], "last": "Trischler", "suffix": "" }, { "first": "Z", "middle": [], "last": "Ye", "suffix": "" }, { "first": "X", "middle": [], "last": "Yuan", "suffix": "" }, { "first": "P", "middle": [], "last": "Bachman", "suffix": "" }, { "first": "A", "middle": [], "last": "Sordoni", "suffix": "" }, { "first": "K", "middle": [], "last": "Suleman", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "128--137", "other_ids": { "DOI": [ "10.18653/v1/D16-1013" ] }, "num": null, "urls": [], "raw_text": "Trischler, A., Ye, Z., Yuan, X., Bachman, P., Sordoni, A., & Suleman, K. (2016). Natural Language Comprehension with the EpiReader. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, 128-137.
doi: 10.18653/v1/D16-1013", "links": null }, "BIBREF144": { "ref_id": "b144", "title": "Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks", "authors": [ { "first": "J", "middle": [], "last": "Weston", "suffix": "" }, { "first": "A", "middle": [], "last": "Bordes", "suffix": "" }, { "first": "S", "middle": [], "last": "Chopra", "suffix": "" }, { "first": "T", "middle": [], "last": "Mikolov", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the ICLR2016", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Weston, J., Bordes, A., Chopra, S., & Mikolov, T. (2016). Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks. In Proceedings of the ICLR2016.", "links": null }, "BIBREF145": { "ref_id": "b145", "title": "Memory Networks. In arXiv preprint arXiv", "authors": [ { "first": "J", "middle": [], "last": "Weston", "suffix": "" }, { "first": "S", "middle": [], "last": "Chopra", "suffix": "" }, { "first": "A", "middle": [], "last": "Bordes", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Weston, J., Chopra, S., & Bordes, A. (2014). Memory Networks. In arXiv preprint arXiv:14103916.", "links": null }, "BIBREF146": { "ref_id": "b146", "title": "Activities\uff1a 1. Holding the Republic of China Computational Linguistics Conference (ROCLING) annually. 2. Facilitating and promoting academic research, seminars, training, discussions, comparative evaluations and other activities related to computational linguistics", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Activities\uff1a 1. Holding the Republic of China Computational Linguistics Conference (ROCLING) annually. 2. 
Facilitating and promoting academic research, seminars, training, discussions, comparative evaluations and other activities related to computational linguistics.", "links": null }, "BIBREF147": { "ref_id": "b147", "title": "Collecting information and materials on recent developments in the field of computational linguistics, domestically and internationally", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Collecting information and materials on recent developments in the field of computational linguistics, domestically and internationally.", "links": null }, "BIBREF148": { "ref_id": "b148", "title": "Publishing pertinent journals, proceedings and newsletters", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Publishing pertinent journals, proceedings and newsletters.", "links": null }, "BIBREF149": { "ref_id": "b149", "title": "Establishing the Chinese-language technical terminology and symbols related to computational linguistics", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Establishing the Chinese-language technical terminology and symbols related to computational linguistics.", "links": null }, "BIBREF150": { "ref_id": "b150", "title": "Maintaining contact with international computational linguistics academic organizations", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Maintaining contact with international computational linguistics academic organizations.", "links": null }, "BIBREF151": { "ref_id": "b151", "title": "Dealing with various other matters related to the development of computational linguistics", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dealing with various other matters related to the development of computational linguistics.", "links": null } }, "ref_entries": { "FIGREF1": { "text": "Generating artificial misspelled sentences", "type_str": "figure", "uris": null, "num": null }, "FIGREF3": { "text": "Liu, C.-L., Lai, M.-H., Tien, K.-W., Chuang, Y.-H., Wu, S.-H., & Lee, C.-Y. (2011). Visually and phonologically similar characters in incorrect Chinese words: Analyses, identification, and applications. ACM Transactions on Asian Language Information Processing (TALIP), 10(2), 10. doi: 10.1145/1967293.1967297 Luong, M.-T., Pham, H., & Manning, C. D. (2015). Effective approaches to attention-based neural machine translation. In arXiv preprint arXiv:1508.04025. Ma, W.-Y. & Chen, K.-J. (2003). Introduction to CKIP Chinese word segmentation system for the first international Chinese word segmentation bakeoff. In Proceedings of the 2nd SIGHAN on CLP, 168-171. doi: 10.3115/1119250.1119276 Rei, M., Felice, M., Yuan, Z., & Briscoe, T. (2017). Artificial error generation with machine translation and syntactic patterns. In arXiv preprint arXiv:1707.05236. Tseng, Y.-H., Lee, L.-H., Chang, L.-P., & Chen, H.-H. (2015). Introduction to SIGHAN 2015 bake-off for Chinese spelling check. In Proceedings of the Eighth SIGHAN Workshop on Chinese Language Processing, 32-37. doi: 10.18653/v1/W15-3106 Wu, S.-H., Chen, Y.-Z., Yang, P.-C., Ku, T., & Liu, C.-L. (2010). Reducing the false alarm rate of Chinese character error detection and correction.
In CIPS-SIGHAN Joint Conference on Chinese Language Processing. Wu, S.-H., Liu, C.-L., & Lee, L.-H. (2013). Chinese spelling check evaluation at SIGHAN Bake-off 2013. In Proceedings of the Seventh SIGHAN Workshop on Chinese Language Processing, 35-42. Xie, Z., Avati, A., Arivazhagan, N., Jurafsky, D., & Ng, A. Y. (2016). Neural language correction with character-based attention. In arXiv preprint arXiv:1603.09727. Yuan, Z. & Briscoe, T. (2016). Grammatical error correction using neural machine translation. In Proceedings of the 2016 Conference of the North American Chapter of the", "type_str": "figure", "uris": null, "num": null }, "FIGREF4": { "text": "Attention Layer: Attention mechanisms are often used with recurrent neural networks to strengthen the focus on the important features relevant to the input. In the various neural network architectures described above, we incorporate self-attention to compute the relations between the words in the text, so as to capture the important information in the textual features of each Tweet. Self-attention is a kind of attention mechanism; it differs from traditional attention in that it does not need to bring in external information to identify the more important messages, but can update its weights and parameters from its own information alone to find the more important information. Its core concept is the scaled dot-product attention architecture, a variant of dot-product attention, as shown in Figure 10. 圖 10. Scaled Dot-Product Attention 示意圖 (Vaswani et al., 2017) [Figure 10.
Scaled Dot-Product Attention (Vaswani et al., 2017)] Investigations and comparisons by Vaswani et al. (Vaswani et al., 2017) and Tan et al. (Tan, Wang, Xie, Chen & Shi, 2018) have confirmed that this (multiplicative) dot-product attention mechanism is more efficient than the standard attention mechanism that uses a single-layer neural network (Bahdanau et al., 2015). Attention(Q, K, V) = softmax(Q K^T / √d_k) V", "type_str": "figure", "uris": null, "num": null }, "FIGREF5": { "text": "", "type_str": "figure", "uris": null, "num": null }, "FIGREF6": { "text": "The LENA digital language processor (DLP) placed in the pocket of a vest; Data transfer from a DLP to the LENA Pro software; Reports from the LENA Pro software", "type_str": "figure", "uris": null, "num": null }, "FIGREF7": { "text": "", "type_str": "figure", "uris": null, "num": null }, "FIGREF8": { "text": "", "type_str": "figure", "uris": null, "num": null }, "FIGREF10": { "text": "", "type_str": "figure", "uris": null, "num": null }, "TABREF0": { "type_str": "table", "text": "", "html": null, "content": "", "num": null }, "TABREF1": { "type_str": "table", "text": "Artificial misspelled sentences for '也跟患者賠罪了十分鐘'", "html": null, "content": "
Artificial Misspelled SentenceReplaced WordWrong Word
\u4e5f\u8ddf\u60a3\u8005\u57f9\u7f6a\u4e86\u5341\u5206\u9418\u8ce0\u7f6a\u57f9\u7f6a
\u4e5f\u8ddf\u60a3\u8005\u966a\u7f6a\u4e86\u5341\u5206\u9418\u8ce0\u7f6a\u966a\u7f6a
\u4e5f\u8ddf\u60a3\u8005\u8ce0\u7f6a\u4e86\u5341\u5206\u937e\u5206\u9418\u5206\u937e
", "num": null }, "TABREF2": { "type_str": "table", "text": "", "html": null, "content": "
Right SentenceWrong Sentence
\u53ef\u898b\u9152\u7cbe\u6703\u8b93\u767d\u8001\u9f20\u4e0a\u766e\uff0c\u53ef\u898b\u9152\u7cbe\u6703\u8b93\u767d\u8001\u9f20\u4e0a\u5ed5\uff0c
\u5c0e\u81f4\u6c34\u5733\u6df7\u6fc1\u4e0d\u582a\uff0c\u5c0e\u81f4\u6c34\u5733\u6df7\u6fc1\u4e0d\u52d8\uff0c
\u5a92\u9ad4\u4f55\u5617\u6c92\u6709\u4e00\u9ede\u8cac\u4efb\uff1f\u5a92\u9ad4\u4f55\u8cde\u6c92\u6709\u4e00\u9ede\u8cac\u4efb\uff1f
\u5730\u8655\u504f\u50fb\u4e14\u5df7\u5f04\u72f9\u7a84\uff0c\u5730\u8655\u7de8\u50fb\u4e14\u5df7\u5f04\u72f9\u7a84\uff0c
\u5e0c\u671b\u4ed6\u7684\u89ba\u9192\u70ba\u6642\u4e0d\u665a\u3002\u5e0c\u671b\u4ed6\u7684\u89ba\u7701\u70ba\u6642\u4e0d\u665a\u3002
", "num": null }, "TABREF3": { "type_str": "table", "text": "", "html": null, "content": "
Uniform Words List of UDN
", "num": null }, "TABREF4": { "type_str": "table", "text": "", "html": null, "content": "
UDN Edit LogsSIGHAN-7
", "num": null }, "TABREF6": { "type_str": "table", "text": "", "html": null, "content": "
F1 = 2 × Precision × Recall / (Precision + Recall)    (16)
Table 9. Example sentences with system output (detected errors shown as "position, correction"; 0 = no error detected)
S1	希望藉此鼓勵自己和他人要積極樂觀實現夢想。	0
S2PM2.5 \u5c0d\u4eba\u9ad4\u5065\u5eb7\u70ba\u5bb3\u5927\uff0c11, \u5371
S3\u56e0\u70ba\u96e3\u4ee5\u9054\u5230\u9023\u6578\u9580\u6abb\uff0c8, \u7f72
S4\u4ed6\u4ecd\u8a18\u5f97\u81ea\u5df2\u7576\u5e74\u9084\u662f\u5b78\u6821\u68d2\u7403\u968a\u54e1\uff0c6, \u5df1
S5\u525b\u63a8\u52d5\u7684\u793e\u6703\u4f4f\u5bc6\u4e5f\u8981\u8a2d\u4e00\u5b9a\u6bd4\u4f8b\u7684\u5927\u967d\u5149\u96fb\u30028, \u5b85, 17, \u592a
S6\u7f8e\u9e97\u7684\u52c7\u58eb\u5c71\u982d\u5c07\u88ab\u638f\u7a7a\u4e86\u55ce\uff1f10, \u6dd8
S7\u672a\u4f86\u767c\u5c55\u9700\u8981\u65b0\u7684\u80fd\u529b\u3001\u65b0\u7684\u52d5\u80fd\uff0c0
S8\u5b78\u751f\u56e0\u5b97\u6559\u3001\u91cd\u65cf\u3001\u570b\u7c4d\u800c\u906d\u7f9e\u8fb1\u8005\u5927\u5e45\u589e\u52a0\u30027, \u7a2e
The following metrics are calculated using TP, FP, TN and FN:
False Positive Rate (FPR) = FP / (FP + TN)    (12)
Accuracy = (TP + TN) / (TP + FP + TN + FN)    (13)
Precision = TP / (TP + FP)    (14)
Recall = TP / (TP + FN)    (15)
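A minimal sketch of computing Eqs. (12)-(16) from the four confusion counts (illustrative code, not the authors' implementation; it assumes no denominator is zero):

```python
def spelling_check_metrics(tp, fp, tn, fn):
    """Compute the evaluation metrics in Eqs. (12)-(16)."""
    fpr = fp / (fp + tn)                                 # (12) false positive rate
    accuracy = (tp + tn) / (tp + fp + tn + fn)           # (13)
    precision = tp / (tp + fp)                           # (14)
    recall = tp / (tp + fn)                              # (15)
    f1 = 2 * precision * recall / (precision + recall)   # (16)
    return {"FPR": fpr, "Accuracy": accuracy,
            "Precision": precision, "Recall": recall, "F1": f1}
```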
", "num": null }, "TABREF7": { "type_str": "table", "text": "", "html": null, "content": "
ModelFPRAccuracyPrecisionRecallF1
UDN-only.066.64.80.64.71
UDN + Artificial (1:1).090.69.84.69.76
UDN + Artificial (1:2).063.71.86.72.78
UDN + Artificial (1:3).066.70.86.69.76
UDN + Artificial (1:4).059.71.87.71.78
Artificial-only.137.35.43.26.33
FEAT-Sound & Shape.098.72.88.72.79
FEAT-Context.059.71.87.70.78
", "num": null }, "TABREF8": { "type_str": "table", "text": "", "html": null, "content": "
Test SetModelFPRAccuracy Precision RecallF1
UDN Edit Logs	UDN + Artificial (1:3)	.066	.70	.86	.69	.76
	FEAT-Sound & Shape	.098	.72	.88	.72	.79
	FEAT-Context	.059	.71	.87	.70	.78
SIGHAN-7UDN + Artificial (1:3).078.85.56.62.58
FEAT-Sound & Shape.097.83.51.64.57
FEAT-Context.080.84.56.61.58
UDN Edit LogsSIGHAN-7
# of error characters9191,266
Similar Sound70%84%
Similar Shape36%40%
Similar Sound and Shape30%30%
", "num": null }, "TABREF9": { "type_str": "table", "text": "", "html": null, "content": "
Model (UDN Edit Logs test set)	# of errors not corrected	Similar Sound	Similar Shape	Similar Sound and Shape
UDN-only40452%7%27%
UDN+Artificial (1:3)34054%8%26%
Artificial-only73343%6%26%
FEAT-Sound&Shape29957%8%25%
Model (SIGHAN-7 test set)	# of errors not corrected	Similar Sound	Similar Shape	Similar Sound and Shape
UDN-only1,09257%9%27%
UDN+Artificial (1:3)59660%8%22%
Artificial-only64158%8%24%
FEAT-Sound&Shape59758%8%24%
", "num": null }, "TABREF10": { "type_str": "table", "text": "Chiu, H.-w., Wu, J.-c., & Chang, J. S. (2013). Chinese spelling checker based on statistical machine translation. In Proceedings of the Seventh SIGHAN Workshop on Chinese Language Processing, 49-53. Chollampatt, S. & Ng, H. T. (2018). A multilayer convolutional encoder-decoder neural network for grammatical error correction. In arXiv preprint arXiv:1801.08831. Felice, M. & Yuan, Z. (2014). Generating artificial errors for grammatical error correction. In Proceedings of the Student Research Workshop at the 14th Conference of the European Chapter of the Association for Computational Linguistics, 116-126. doi: 10.3115/v1/E14-3013", "html": null, "content": "", "num": null }, "TABREF11": { "type_str": "table", "text": "Chinese Spelling Check based on Neural Machine Translation 27Association for Computational Linguistics: Human Language Technologies, 380-386. doi: 10.18653/v1/N16-1042 Zhang, L.,Huang, C., Zhou, M., & Pan, H. (2000). Automatic detecting/correcting errors in chinese text by an approximate word-matching algorithm.In Proceedings of the 38th Annual Meeting on Association for Computational Linguistics, 248-254. doi: 10.3115/1075218.1075250 \u8521\u6709\u79e9 (2003)\u3002\u65b0\u7de8\u932f\u5225\u5b57\u9580\u8a3a\u3002 \u8a9e\u6587\u8a13\u7df4\u53e2\u66f8\uff0c\u87a2\u706b\u87f2\u3002[Tsai, Y.-J. (2003). New Common Typos Diagnosis, Fireflybooks.] \u8521\u69ae\u5733 (2012)\u3002\u5e38\u898b\u932f\u5225\u5b57\u8fa8\u6b63\u8fad\u5178\u3002\u4e2d\u6587\u53ef\u4ee5\u66f4\u597d\uff0c\u5546\u5468\u51fa\u7248\u3002 [Tsai, R.-J. (2012). Dictionary of Common Typos, Business Weekly.] Vol. 25, No. 1, June 2020, pp. 29-56 29 \uf0d3 The Association for Computational Linguistics and Chinese Language Processing\u57fa\u65bc\u7aef\u5c0d\u7aef\u6a21\u578b\u5316\u6280\u8853\u4e4b\u8a9e\u97f3\u6587\u4ef6\u6458\u8981 \uf02a\uf02b \u3001\u5289\u58eb\u5f18 \uf023 \u3001\u5f35\u570b\u97cb \uf02a \u3001\u9673\u67cf\u7433 \uf02b", "html": null, "content": "
\u5289\u6148\u6069 \u6458\u8981
\u672c\u8ad6\u6587\u4e3b\u8981\u63a2\u8a0e\u7aef\u5c0d\u7aef(End-to-End)\u7684\u7bc0\u9304\u5f0f\u6458\u8981\u65b9\u6cd5\u65bc\u8a9e\u97f3\u6587\u4ef6\u6458\u8981\u4efb\u52d9\u4e0a
\u7684\u61c9\u7528\uff0c\u4e26\u6df1\u5165\u7814\u7a76\u5982\u4f55\u6539\u5584\u8a9e\u97f3\u6587\u4ef6\u6458\u8981\u4e4b\u6210\u6548\u3002\u56e0\u6b64\uff0c\u6211\u5011\u63d0\u51fa\u4ee5\u985e\u795e\u7d93
\u7db2\u8def\u70ba\u57fa\u790e\u4e4b\u6458\u8981\u6458\u8981\u6a21\u578b\uff0c\u904b\u7528\u968e\u5c64\u5f0f\u7684\u67b6\u69cb\u53ca\u6ce8\u610f\u529b\u6a5f\u5236\u6df1\u5c64\u6b21\u5730\u7406\u89e3\u6587
\u4ef6\u860a\u542b\u7684\u4e3b\u65e8\uff0c\u4e26\u4ee5\u5f37\u5316\u5b78\u7fd2\u8f14\u52a9\u8a13\u7df4\u6a21\u578b\u6839\u64da\u6587\u4ef6\u4e3b\u65e8\u9078\u53d6\u4e26\u6392\u5e8f\u5177\u4ee3\u8868\u6027
\u7684\u8a9e\u53e5\u7d44\u6210\u6458\u8981\u3002\u540c\u6642\uff0c\u6211\u5011\u70ba\u4e86\u907f\u514d\u8a9e\u97f3\u8fa8\u8b58\u7684\u932f\u8aa4\u5f71\u97ff\u6458\u8981\u7d50\u679c\uff0c\u4e5f\u5c07\u8a9e
\u97f3\u6587\u4ef6\u4e2d\u76f8\u95dc\u7684\u8072\u5b78\u7279\u5fb5\u52a0\u5165\u6a21\u578b\u8a13\u7df4\u4ee5\u53ca\u4f7f\u7528\u6b21\u8a5e\u5411\u91cf\u4f5c\u70ba\u8f38\u5165\u3002\u6700\u5f8c\u6211\u5011
\u5728\u4e2d\u6587\u5ee3\u64ad\u65b0\u805e\u8a9e\u6599(MATBN)\u4e0a\u9032\u884c\u4e00\u7cfb\u5217\u7684\u5be6\u9a57\u8207\u5206\u6790\uff0c\u5f9e\u5be6\u9a57\u7d50\u679c\u4e2d\u53ef
\u9a57\u8b49\u672c\u8ad6\u6587\u63d0\u51fa\u4e4b\u5047\u8a2d\u4e14\u5728\u6458\u8981\u6210\u6548\u4e0a\u6709\u986f\u8457\u7684\u63d0\u5347\u3002
", "num": null }, "TABREF12": { "type_str": "table", "text": "Nallapati et al., 2016) \u5f9e(Rush et al., 2015) \u548c(Chopra et al., 2016) \u767c \u60f3\u51fa\u8a31\u591a\u67b6\u69cb\uff0c\u540c\u6642\u4e5f\u89e3\u6c7a\u8a31\u591a\u91cd\u5beb\u5f0f\u6458\u8981\u6f5b\u5728\u7684\u554f\u984c\u3002\u57fa\u672c\u7684\u67b6\u69cb\u662f\u8ddf(Bahdanau et al., 2014) \u63d0\u51fa\u7684\u5e8f\u5217\u5c0d\u5e8f\u5217\u6a21\u578b\u76f8\u4f3c\uff0c\u540c\u6642\u4e5f\u52a0\u5165\u6ce8\u610f\u529b\u6a5f\u5236\uff0c\u800c\u8207(Chopra et al., 2016) \u4e0d\u540c\u4e4b\u8655\u5247\u662f\u5728\u65bc\u5176\u7de8\u78bc\u5668\u8207\u89e3\u78bc\u5668\u7686\u4f7f\u7528\u905e\u8ff4\u5f0f\u985e\u795e\u7d93\u7db2\u8def\uff0c\u4e14\u4f7f\u7528 \u5289\u6148\u6069 \u7b49 \u63d0\u51fa\u7684 Gated Recurrent Unit (GRU) \u800c\u975e LSTM\uff0cGRU \u540c\u6a23\u5177\u6709\u9598\u9580\uff0c\u4f46\u662f\u50c5\u6709\u5169\u500b\uff0c \u4e14\u6c92\u6709\u984d\u5916\u7684\u8a18\u61b6\u55ae\u5143\uff0c\u4f46\u662f\u6574\u9ad4\u7684\u8a18\u61b6\u6548\u679c\u662f\u4e00\u6a23\u7684\uff0c\u8a13\u7df4\u53c3\u6578\u91cf\u6e1b\u5c11\u5f88\u591a\uff0c\u53ef\u4ee5\u6bd4 LSTM \u66f4\u5feb\u901f\u5730\u5efa\u69cb\u548c\u8a13\u7df4\u3002(Nallapati et al., 2016) \u4e2d\u63d0\u5230\u5728\u8a9e\u8a00\u751f\u6210\u6642\u6703\u9047\u5230\u672a\u77e5\u8a5e (Out-of-vocabulary, OOV) \u554f\u984c\uff0c\u70ba\u4e86\u89e3\u6c7a\u6b64\u554f\u984c\uff0c\u52a0\u5165 Large Vocabulary Trick (LVT)(Jean, Cho, Memisevic & Bengio, 2014)\uff0c\u6b64\u6280\u8853\u662f\u5c0d\u6bcf\u5c0f\u6279 (mini-batch) \u8a13\u7df4\u8cc7\u6599\u5efa\u7acb\u55ae", "html": null, "content": "
\u57fa\u65bc\u7aef\u5c0d\u7aef\u6a21\u578b\u5316\u6280\u8853\u4e4b\u8a9e\u97f3\u6587\u4ef6\u6458\u8981 \u57fa\u65bc\u7aef\u5c0d\u7aef\u6a21\u578b\u5316\u6280\u8853\u4e4b\u8a9e\u97f3\u6587\u4ef6\u6458\u898131 \u5289\u6148\u6069 \u7b49 \u5289\u6148\u6069 \u7b49 37
\u4e00\u8a9e\u97f3\u6587\u4ef6\uff0c\u81ea\u52d5\u8a9e\u97f3\u8fa8\u8b58\u7cfb\u7d71\u6703\u5148\u5c0d\u8a9e\u97f3\u8a0a\u865f\u9032\u884c\u7279\u5fb5\u62bd\u53d6\uff0c\u9032\u800c\u900f\u904e\u9810\u5148\u8a13\u7df4\u5b8c\u6210 \u5f80\u662f\u6311\u9078\u8f03\u7b26\u5408\u6458\u8981\u8a9e\u53e5\u7684\u7d50\u679c\uff0c\u56e0\u6b64\u5176\u901a\u5e38\u6c92\u6709\u6839\u64da\u8a9e\u610f\u9032\u884c\u6392\u5e8f\uff0c\u56e0\u6b64\u672c\u8ad6\u6587\u4ea6\u5617 \u2022 \u65b9\u6cd5\uff1a\u6b64\u5206\u985e\u65b9\u5f0f\u6700\u70ba\u5e38\u898b\uff0c\u53ef\u6982\u5206\u70ba\u4e09\u7a2e\uff1a \u4efb\u52d9\u9084\u662f\u6709\u76f8\u7576\u7684\u96e3\u5ea6\uff0c\u56e0\u70ba\u9664\u4e86\u7c21\u55ae\u7684\u5206\u985e\u5916\uff0c\u6211\u5011\u9084\u9700\u7406\u89e3\u4e26\u89e3\u6790\u51fa\u6587\u4ef6\u7684\u91cd\u8981\u8cc7 \u652f\u6301\u4e3b\u65e8\u7684\u76f8\u95dc\u8ad6\u8ff0\u3002\u5982\u4f55\u8b93\u6a21\u578b\u53ef\u4ee5\u6e96\u78ba\u5730\u7406\u89e3\u6587\u4ef6\u4e3b\u984c\u5462\uff1f(Ren et al., 2017)\u91dd\u5c0d\u6b64 \u6b64\u5916\uff0c\u70ba\u4e86\u907f\u514d\u6458\u8981\u7d50\u679c\u53d7\u5230\u904e\u591a\u8a9e\u97f3\u8fa8\u8b58\u932f\u8aa4\u7684\u5f71\u97ff\uff0c\u6211\u5011\u5617\u8a66\u52a0\u5165\u8072\u5b78\u7279\u5fb5\u548c
\u4e4b\u8072\u5b78\u6a21\u578b(Acoustic model)\u548c\u8a9e\u8a00\u6a21\u578b(Language model)\u9032\u884c\u8a9e\u97f3\u8fa8\u8b58\u5f97\u5230\u5176\u8f49\u5beb \u8a66\u5c07\u6458\u8981\u8a9e\u53e5\u7684\u6392\u5e8f\u53ca\u6458\u8981\u8a55\u4f30\u6307\u6a19\u61c9\u7528\u65bc\u5f37\u5316\u5b78\u7fd2(Reinforcement learning, RL)\u8f14\u52a9 \uf06e\u7bc0\u9304\u5f0f\u6458\u8981(Summarization by extraction) \u8a0a\uff0c\u624d\u80fd\u77e5\u9053\u54ea\u4e9b\u8a9e\u53e5\u6709\u6a5f\u6703\u6210\u70ba\u6458\u8981\u3002 \u8b70\u984c\u63d0\u51fa\u4e00\u500b\u6709\u6548\u7684\u65b9\u6cd5\uff0c\u5176\u5728\u7522\u751f\u8a9e\u53e5\u5411\u91cf\u8868\u793a\u6642\uff0c\u4ea6\u5c07\u524d\u9762\u7684\u8a9e\u53e5\u4ee5\u53ca\u5f8c\u9762\u7684\u8a9e\u53e5 \u6b21\u8a5e\u5411\u91cf\u8f14\u52a9\u8a13\u7df4\uff1b\u540c\u6642\u6211\u5011\u4ea6\u52a0\u5165\u6ce8\u610f\u529b\u6a5f\u5236\u548c\u5f37\u5316\u5b78\u7fd2\u6a5f\u5236\u65bc\u6a21\u578b\u8a13\u7df4\u4e2d\uff0c\u4ee5\u671f\u80fd
\u95dc\u9375\u8a5e\uff1a\u8a9e\u97f3\u6587\u4ef6\u3001\u7bc0\u9304\u5f0f\u6458\u8981\u3001\u985e\u795e\u7d93\u7db2\u8def\u3001\u968e\u5c64\u5f0f\u8a9e\u610f\u8868\u793a\u3001\u8072\u5b78\u7279\u5fb5 Keywords: Spoken Documents, Extractive Summarization, Deep Neural Networks, Hierarchical Semantic Representations, Acoustic Features 1. \u7dd2\u8ad6 (Introduction) \u96a8\u8457\u5927\u6578\u64da\u6642\u4ee3\u7684\u4f86\u81e8\uff0c\u5de8\u91cf\u4e14\u591a\u5143\u7684\u8cc7\u8a0a\u900f\u904e\u7db2\u969b\u7db2\u8def\u5feb\u901f\u5730\u5728\u5168\u7403\u5404\u5730\u50b3\u64ad\uff0c\u8cc7\u6599 \u5167\u5bb9\u7684\u5448\u73fe\u65b9\u5f0f\u5df2\u4e0d\u4fb7\u9650\u65bc\u50b3\u7d71\u7684\u7d19\u672c\u5f62\u5f0f\uff0c\u5305\u542b\u8a9e\u97f3\u53ca\u5f71\u50cf\u7684\u591a\u5a92\u9ad4\u8cc7\u8a0a\u9010\u6f38\u53d6\u4ee3\u975c \u614b\u7684\u6587\u5b57\u8cc7\u8a0a\uff0c\u5982\u4f55\u6709\u6548\u7387\u5730\u95b1\u8b80\u591a\u6a23\u5316\u5f62\u5f0f\u7684\u591a\u5a92\u9ad4\u8cc7\u8a0a\uff0c\u5df2\u6210\u70ba\u4e00\u500b\u523b\u4e0d\u5bb9\u7de9\u7684\u7814 \u7a76\u8ab2\u984c\u3002\u6b64\u5916\uff0c\u5728\u793e\u6703\u9010\u6b65\u884c\u52d5\u5316\u7684\u60c5\u6cc1\u4e0b\uff0c\u4eba\u624b\u4e00\u6a5f\u5df2\u662f\u5e38\u614b\uff0c\u4e14\u4f34\u96a8\u8457\u79d1\u6280\u4e0d\u65b7\u5730 \u5275\u65b0\uff0c\u884c\u52d5\u8a2d\u5099\u4e0d\u518d\u53ea\u80fd\u901a\u8a71\u548c\u50b3\u905e\u6587\u672c\u8a0a\u606f\uff0c\u591a\u5a92\u9ad4\u8a0a\u606f\u5982\u8a9e\u97f3\u53ca\u5f71\u50cf\u7b49\u4ea6\u80fd\u5b8c\u597d\u5730 \u50b3\u905e\uff0c\u66f4\u751a\u65bc\u6211\u5011\u80fd\u900f\u904e\u8072\u97f3\u53ca\u624b\u52e2\u7b49\u6307\u4ee4\u64cd\u4f5c\u8a2d\u5099\u3002 \u5728\u773e\u591a\u7684\u7814\u7a76\u65b9\u6cd5\u4e2d\uff0c\u81ea\u52d5\u6458\u8981 (Automatic Summarization) \u88ab\u8996\u70ba\u662f\u4e00\u9805\u95dc\u9375\u7684\u6280 \u8853\uff0c\u5176\u5728\u81ea\u7136\u8a9e\u8a00\u8655\u7406 (Natural Language Processing, NLP) \u9818\u57df\u4e2d\u4e00\u76f4\u90fd\u662f\u71b1\u9580\u7684\u7814\u7a76 \u8b70\u984c\uff0c\u56e0\u5176\u5177\u6709\u80fd\u64f7\u53d6\u6587\u4ef6\u91cd\u8981\u8cc7\u8a0a\u7684\u7279\u6027\uff0c\u5728\u8a31\u591a\u61c9\u7528\u4e0a\u66f4\u662f\u4e0d\u53ef\u6216\u7f3a\u7684\u4e00\u9805\u6280\u8853\uff0c \u5982\u554f\u7b54\u7cfb\u7d71 (Question Answering)\u3001\u8cc7\u8a0a\u6aa2\u7d22 (Information Retrieval) \u7b49\u3002\u53e6\u4e00\u65b9\u9762\uff0c\u8a9e \u97f3\u662f\u591a\u5a92\u9ad4\u6587\u4ef6\u4e2d\u6700\u5177\u8a9e\u610f\u7684\u4e3b\u8981\u6210\u4efd\u4e4b\u4e00\uff0c\u5982\u4f55\u900f\u904e\u8a9e\u97f3(\u6587\u4ef6)\u6458\u8981\u6280\u8853\u6709\u6548\u7387\u5730\u8655 \u7406\u6642\u5e8f\u8cc7\u6599\uff0c\u66f4\u662f\u986f\u5f97\u975e\u5e38\u91cd\u8981\u3002\u5176\u95dc\u9375\u5728\u65bc\u5f71\u97f3\u6587\u4ef6\u5f80\u5f80\u9577\u9054\u6578\u5206\u9418\u6216\u6578\u5c0f\u6642\uff0c\u4f7f\u7528 \u8005\u4e0d\u6613\u65bc\u700f\u89bd\u8207\u67e5\u8a62\uff0c\u800c\u5fc5\u9808\u8017\u8cbb\u8a31\u591a\u6642\u9593\u95b1\u8b80\u6216\u8046\u807d\u6574\u4efd\u6587\u4ef6\uff0c\u624d\u80fd\u7406\u89e3\u5176\u5167\u5bb9\uff0c\u4e0d \u7b26\u5408\u4eba\u5011\u60f3\u8981\u5feb\u901f\u5730\u7372\u53d6\u8cc7\u8a0a\u4e4b\u76ee\u7684\u3002 \u5c0d\u65bc\u542b\u6709\u8a9e\u97f3\u8a0a\u865f\u7684\u591a\u5a92\u9ad4\u8cc7\u8a0a\uff0c\u6211\u5011\u53ef\u5148\u7d93\u7531\u81ea\u52d5\u8a9e\u97f3\u8fa8\u8b58 (Automatic Speech Recognition, ASR) \u6280\u8853\u5c07\u6587\u4ef6\u8f49\u6210\u6613\u65bc\u700f\u89bd\u7684\u6587\u5b57\u5167\u5bb9\uff0c\u518d\u900f\u904e\u6587\u5b57\u6587\u4ef6\u6458\u8981\u7684\u6280\u8853\u4f5c 
\u8655\u7406\uff0c\u4ee5\u9054\u5230\u6458\u8981\u8a9e\u97f3\u6587\u4ef6\u4e4b\u76ee\u7684\u3002\u4f46\u56e0\u73fe\u968e\u6bb5\u7684\u8a9e\u97f3\u8fa8\u8b58\u6280\u8853\u4ecd\u5b58\u5728\u8fa8\u8b58\u932f\u8aa4\u7684\u554f\u984c\uff0c \u4e5f\u7f3a\u4e4f\u7ae0\u7bc0\u8207\u6a19\u9ede\u7b26\u865f\uff0c\u4f7f\u5f97\u8a9e\u53e5\u908a\u754c\u5b9a\u7fa9\u6a21\u7cca\u800c\u5931\u53bb\u6587\u4ef6\u7684\u7d50\u69cb\u8cc7\u8a0a\uff1b\u6b64\u5916\uff0c\u8a9e\u97f3\u6587 \u4ef6\u901a\u5e38\u542b\u6709\u4e00\u4e9b\u53e3\u8a9e\u52a9\u8a5e\u3001\u9072\u7591\u3001\u91cd\u8907\u7b49\u5167\u5bb9\uff0c\u9032\u800c\u4f7f\u5f97\u8a9e\u97f3\u6458\u8981\u6280\u8853\u7684\u767c\u5c55\u66f4\u70ba\u8271\u9245\u3002 \u672c\u8ad6\u6587\u4e3b\u8981\u63a2\u8a0e\u7aef\u5c0d\u7aef\u7684\u7bc0\u9304\u5f0f\u8a9e\u97f3\u6587\u4ef6\u6458\u8981\u4efb\u52d9\u5e38\u898b\u7684\u81ea\u52d5\u6458\u8981\u6280\u8853\u5927\u81f4\u4e0a\u53ef \u5206\u70ba\u5169\u7a2e\uff0c\u7bc0\u9304\u5f0f (Extractive) \u6458\u8981\u8207\u91cd\u5beb\u5f0f (Abstractive) \u6458\u8981\u3002\u7bc0\u9304\u5f0f\u6458\u8981\u65b9\u6cd5\u662f\u672c \u8ad6\u6587\u7684\u7814\u7a76\u91cd\u9ede\uff0c\u5176\u4e3b\u8981\u6703\u8fa8\u5225\u6587\u7ae0\u4e2d\u7684\u8a9e\u53e5\u662f\u5426\u5177\u4ee3\u8868\u6027\uff0c\u4e26\u4f9d\u7167\u7279\u5b9a\u7684\u6458\u8981\u6bd4\u4f8b\u5f9e \u5176\u4e2d\u9078\u53d6\u4f5c\u70ba\u6458\u8981\uff1b\u91cd\u5beb\u5f0f\u6458\u8981\u65b9\u6cd5\u5247\u9700\u7406\u89e3\u6587\u7ae0\u5f8c\uff0c\u4f9d\u6587\u7ae0\u7684\u4e3b\u65e8\u91cd\u65b0\u64b0\u5beb\u6458\u8981\uff0c\u5176 \u6240\u4f7f\u7528\u7684\u8a5e\u5f59\u8207\u6587\u6cd5\u4e0d\u5168\u7136\u5f9e\u539f\u6587\u4e2d\u8907\u88fd\uff0c\u8207\u4eba\u5011\u65e5\u5e38\u64b0\u5beb\u7684\u6458\u8981\u8f03\u70ba\u76f8\u4f3c\u3002 \u5e38\u898b\u7684\u8a9e\u97f3\u6587\u4ef6\u6458\u8981\u4efb\u52d9\u4e3b\u8981\u662f\u5206\u70ba\u5169\u968e\u6bb5\uff0c\u81ea\u52d5\u8a9e\u97f3\u8fa8\u8b58(Automatic speech recognition, ASR)\u548c\u81ea\u52d5\u6587\u4ef6\u6458\u8981(Automatic document summarization)\u3002\u7576\u6211\u5011\u5f97\u5230 \u6587\u4ef6 (Transcription)\u3002\u672c\u8ad6\u6587\u4e2d\u6240\u4f7f\u7528\u7684\u8a9e\u97f3\u8fa8\u8b58\u7cfb\u7d71\uff0c\u662f\u63a1\u7528\u570b\u7acb\u81fa\u7063\u5e2b\u7bc4\u5927\u5b78\u8cc7 \u8a0a\u5de5\u7a0b\u5b78\u7cfb\u7814\u7a76\u6240\u8a9e\u97f3\u66a8\u6a5f\u5668\u667a\u80fd\u5be6\u9a57\u5ba4\u6240\u767c\u5c55\u4e4b\u5927\u8a5e\u5f59\u8a9e\u97f3\u8fa8\u8b58\u5668(Large vocabulary continuous speech recognition system, LVCSR) (Chen, Kuo & Tsai, 2004; 2005) \u9032\u884c\u81ea\u52d5\u8a9e \u97f3\u8fa8\u8b58\u3002\u5e38\u898b\u7684\u7bc0\u9304\u5f0f\u6587\u4ef6\u6458\u8981\u65b9\u6cd5\u5927\u591a\u662f\u4ee5\u8cc7\u6599\u9a45\u52d5 (Data-driven) \u65b9\u6cd5\u70ba\u4e3b\u3002\u5176\u4e2d\uff0c \u53c8\u4ee5\u6df1\u5ea6\u5b78\u7fd2 (Deep Learning) \u65b9\u6cd5\u767c\u5c55\u51fa\u7684\u5e8f\u5217\u5c0d\u5e8f\u5217 (Sequence-to-Sequence) \u67b6\u69cb (Bahdanau, Cho & Bengio, 2015; Sutskever, Vinyals & Le, 2014)\u5728\u6458\u8981\u4efb\u52d9\u4e0a\u7372\u5f97\u8f03\u591a\u5b78 \u8005\u7684\u9752\u775e\u3002\u5c24\u5176\u91cd\u5beb\u5f0f\u6458\u8981\u88ab\u8a8d\u70ba\u662f\u4e00\u7a2e\u5e8f\u5217\u5c0d\u5e8f\u5217\u7684\u554f\u984c(Sutskever et al., 2014)\uff0c\u66f4\u4ee5 \u6b64\u767c\u5c55\u51fa\u8a31\u591a\u65b9\u6cd5(Chen, Zhu, Ling, Wei & Jiang, 2016; Chopra, Auli & Rush, 2016; Nallapati, Zhou, dos Santos, Gu\u0307l\u00e7ehre & Xiang, 2016; Paulus, Xiong & Socher, 2017; Rush, Chopra & Weston, 2015; See, Liu & Manning, 2017; Tan, Wan & Xiao, 
2017)\uff1b\u800c\u7bc0\u9304\u5f0f\u6458 \u8981\u4e00\u822c\u5247\u88ab\u8996\u70ba\u4e00\u7a2e\u5e8f\u5217\u6a19\u8a18 (Sequence Labeling) \u7684\u554f\u984c\uff0c\u5c0d\u6587\u7ae0\u4e2d\u6bcf\u500b\u8a9e\u53e5\u4f5c\u6a19\u8a18\uff0c \u6a19\u793a\u51fa\u5176\u662f\u5426\u70ba\u6458\u8981(Cheng & Lapata, 2016; Nallapati, Zhai & Zhou, 2017)\u3002 \u96d6\u7136\u8a9e\u97f3\u8fa8\u8b58\u7684\u932f\u8aa4\u5c0d\u65bc\u8a9e\u97f3\u6587\u4ef6\u6458\u8981\u4efb\u52d9\u4e0a\u6703\u6709\u4e00\u5b9a\u7684\u5f71\u97ff\uff0c\u5176\u4e3b\u8981\u7684\u5f71\u97ff\u5728\u65bc \u81ea\u52d5\u8f49\u5beb\u6587\u4ef6\u4e2d\u7684\u5167\u6587\u6703\u8207\u4eba\u5de5\u8f49\u5beb\u7d50\u679c\u6709\u5dee\u7570\uff0c\u9032\u800c\u5c0e\u81f4\u6587\u4ef6\u6458\u8981\u7cfb\u7d71\u7121\u6cd5\u5b8c\u5168\u6e96\u78ba \u5730\u7406\u89e3\u6587\u4ef6\u542b\u7fa9\uff0c\u56e0\u6b64\u4f7f\u5f97\u6458\u8981\u6210\u6548\u4e0d\u4f73\uff1b\u6b64\u5916\uff0c\u6458\u8981\u7684\u5448\u73fe\u4ea6\u662f\u4e00\u9805\u91cd\u8981\u7684\u8ab2\u984c\uff0c\u5982 \u4f55\u5448\u73fe\u51fa\u6613\u65bc\u95b1\u8b80\u7684\u6458\u8981\uff0c\u662f\u6587\u4ef6\u6458\u8981\u7cfb\u7d71\u4e2d\u5fc5\u9808\u5b78\u6703\u7684\u91cd\u9ede\u3002\u800c\u4e00\u500b\u826f\u597d\u7684\u6458\u8981\u8868\u9054 \u61c9\u8a72\u8457\u91cd\u65bc\u4ee5\u4e0b\u56db\u500b\u8981\u7d20\uff1a \u2022 \u8cc7\u8a0a\u6027(Informativity) \uff1a\u6458\u8981\u7d50\u679c\u6240\u5305\u542b\u539f\u6587\u4ef6\u4e2d\u7684\u8cc7\u8a0a\u7a0b\u5ea6\uff0c\u61c9\u76e1\u53ef\u80fd\u6db5\u84cb\u6240\u6709 \u91cd\u8981\u8cc7\u8a0a\u3002 \u2022 \u6587\u6cd5\u6027(Grammaticality) \uff1a\u6458\u8981\u4e2d\u7684\u8a9e\u53e5\u61c9\u7b26\u5408\u8a9e\u8a00\u7684\u6587\u6cd5\uff0c\u6240\u5f97\u4e4b\u6458\u8981\u624d\u6613\u65bc\u95b1 \u8b80\uff1b\u82e5\u4e0d\u7b26\u5408\u6587\u6cd5\uff0c\u5247\u6703\u8f03\u5e38\u88ab\u8996\u70ba\u95dc\u9375\u8a5e\u64f7\u53d6(Keyword Extraction) \u3002\u6b64\u8981\u7d20\u65bc\u91cd \u5beb\u5f0f\u6458\u8981\u4efb\u52d9\u4e0a\u8f03\u53d7\u95dc\u6ce8\u3002 \u2022 \u9023\u8cab\u6027(Coherency) \uff1a\u6b64\u8981\u7d20\u6240\u6307\u7684\u662f\u6458\u8981\u4e2d\u4e0a\u4e0b\u6587\u9593\u7684\u9023\u8cab\u7a0b\u5ea6\uff0c\u82e5\u524d\u5f8c\u53e5\u4e0d\u5b58 \u5728\u9023\u8cab\u6027\uff0c\u5247\u6703\u985e\u4f3c\u65bc\u756b\u91cd\u9ede\u7684\u65b9\u5f0f\u689d\u5217\u51fa\u91cd\u9ede\uff0c\u800c\u975e\u6839\u64da\u6587\u4ef6\u4e3b\u65e8\u6240\u751f\u6210\u4e4b\u6458\u8981\u3002 \u6b64\u8981\u7d20\u65bc\u7bc0\u9304\u5f0f\u6458\u8981\u4efb\u52d9\u4e0a\u5e38\u88ab\u63d0\u53ca\u3002 \u2022 \u975e\u91cd\u8907\u6027(Non-Redundancy) \uff1a\u70ba\u4e86\u80fd\u7c21\u5316\u63cf\u8ff0\uff0c\u61c9\u907f\u514d\u51fa\u73fe\u904e\u591a\u91cd\u8907\u7684\u8a5e\u53e5\u6216\u76f8 \u4f3c\u7684\u8cc7\u8a0a\uff0c\u82e5\u91cd\u8907\u7684\u8cc7\u8a0a\u592a\u591a\u6703\u5f71\u97ff\u4f7f\u7528\u8005\u95b1\u8b80\u3002 \u56e0\u6b64\u672c\u8ad6\u6587\u4e3b\u8981\u6703\u91dd\u5c0d\u4e0a\u8ff0\u4e4b\u8cc7\u8a0a\u6027\u53ca\u9023\u8cab\u6027\u5169\u9805\u8981\u7d20\u8a0e\u8ad6\uff0c\u4e26\u5617\u8a66\u4ee5\u4e0d\u540c\u65b9\u6cd5\u907f \u514d\u53d7\u5230\u8a9e\u97f3\u8fa8\u8b58\u932f\u8aa4\u7684\u5f71\u97ff\u3002\u9996\u5148\u65bc\u6458\u8981\u8cc7\u8a0a\u6027\u90e8\u5206\uff0c\u672c\u8ad6\u6587\u767c\u5c55\u4e26\u6539\u9032\u4e00\u500b\u7aef\u5c0d\u7aef\u7684 \u968e\u5c64\u5f0f\u985e\u795e\u7d93\u7db2\u8def\u67b6\u69cb\uff0c\u5176\u53d7\u76ca\u65bc\u647a\u7a4d\u5f0f\u985e\u795e\u7d93\u7db2\u8def(Convolutional neural networks, 
CNNs)\u4e4b\u8a9e\u8a00\u6a21\u578b\u61c9\u7528\u4ee5\u53ca\u905e\u8ff4\u5f0f\u985e\u795e\u7d93\u7db2\u8def(Recurrent neural networks, RNNs)\u65bc\u81ea \u7136\u8a9e\u8a00\u8655\u7406\u9818\u57df\u7684\u512a\u79c0\u8868\u73fe\uff0c\u4f7f\u5f97\u6211\u5011\u80fd\u5920\u968e\u6bb5\u5f0f(\u5148\u8a9e\u53e5\u5f8c\u5168\u6587)\u5730\u95b1\u8b80\u6587\u4ef6\u4e26\u5feb\u901f \u5730\u7406\u89e3\u8a9e\u610f\uff1b\u53e6\u5916\u6211\u5011\u4ea6\u5617\u8a66\u61c9\u7528\u6ce8\u610f\u529b\u6a5f\u5236(Attention mechanism)\u66f4\u9032\u4e00\u6b65\u63d0\u5347\u6a21 \u578b\u5c0d\u65bc\u6587\u7ae0\u7684\u7406\u89e3\u5ea6\uff0c\u9032\u800c\u63d0\u5347\u6458\u8981\u8cc7\u8a0a\u6027\u3002\u5176\u6b21\u5c0d\u65bc\u6458\u8981\u9023\u8cab\u6027\uff0c\u7531\u65bc\u7bc0\u9304\u5f0f\u6458\u8981\u5f80 \u6a21\u578b\u8a13\u7df4\u3002\u6700\u5f8c\u70ba\u4e86\u907f\u514d\u8a9e\u97f3\u8fa8\u8b58\u932f\u8aa4\uff0c\u6211\u5011\u5728\u6a21\u578b\u9810\u6e2c\u6458\u8981\u7684\u904e\u7a0b\u4e2d\u53c3\u8003\u8a9e\u53e5\u7684\u8072\u5b78 \u7279\u5fb5(Acoustic features)\u53ca\u6b21\u8a5e\u8cc7\u8a0a(Subword information)\uff0c\u5176\u4e2d\u524d\u8005\u5305\u542b\u539f\u8a9e\u97f3\u6587 \u4ef6\u4e2d\u7684\u8a9e\u97f3\u7279\u6027\uff0c\u53ef\u6539\u5584\u5169\u968e\u6bb5\u8a9e\u97f3\u6587\u4ef6\u6458\u8981\u7cfb\u7d71\u4e0a\uff0c\u9032\u884c\u6458\u8981\u6642\u7121\u6cd5\u53c3\u8003\u4e4b\u539f\u8a9e\u97f3\u7279 \u6027\uff1b\u800c\u5f8c\u8005\u5247\u662f\u70ba\u4e86\u6539\u5584\u524d\u8ff0\u4e4b\u8a5e\u5f59\u8fa8\u8b58\u932f\u8aa4\uff0c\u56e0\u8fa8\u8b58\u932f\u8aa4\u53ef\u80fd\u767c\u751f\u5728\u8a5e\u5f59\u4e2d\u7684\u90e8\u5206\u5340 \u584a\uff0c\u800c\u5c0e\u81f4\u65b7\u8a5e\u6642\u7121\u6cd5\u8fa8\u5225\u6b63\u78ba\u7684\u8a5e\u5f59\uff0c\u82e5\u4f7f\u7528\u6b21\u8a5e\u8cc7\u8a0a\u5247\u53ef\u4ee5\u4f7f\u7528\u5468\u908a\u8cc7\u8a0a\u63a8\u6e2c\u932f\u8aa4 \u7684\u90e8\u5206\u5176\u6b63\u78ba\u7684\u8a9e\u610f\u3002 2. 
\u6587\u737b\u56de\u9867 \u81ea\u52d5\u6587\u4ef6\u6458\u8981\u65b9\u6cd5\u4e3b\u8981\u53ef\u4f9d\u7167\u56db\u500b\u9762\u5411\u5206\u985e(\u5982\u5716 1)\uff0c\u53ef\u4f9d\u7167\u4f86\u6e90\u3001\u76ee\u7684\u3001\u529f\u80fd\u53ca\u65b9 \u6cd5\u7b49\u7d30\u5206\u70ba\u4e0d\u540c\u985e\u578b\uff1a \u2022 \u4f86\u6e90\uff1a\u4e3b\u8981\u5206\u70ba\u55ae\u6587\u4ef6\u8207\u591a\u6587\u4ef6\uff0c\u524d\u8005\u6307\u91dd\u5c0d\u55ae\u4e00\u6587\u4ef6\u64f7\u53d6\u6458\u8981\uff0c\u5f8c\u8005\u5247\u662f\u7d71\u6574\u6b78 \u7d0d\u591a\u7bc7\u4e3b\u984c\u76f8\u8fd1\u7684\u6587\u4ef6\u91cd\u9ede\u7522\u751f\u6458\u8981\u3002\u591a\u6587\u4ef6\u6458\u8981\u901a\u5e38\u6703\u8207\u67e5\u8a62\u5171\u540c\u9032\u884c\u70ba\u4ee5\u67e5\u8a62 \u70ba\u4e3b\u4e4b\u591a\u6587\u4ef6\u6458\u8981\uff0c\u540c\u6642\u9032\u884c\u6aa2\u7d22\u8207\u6458\u8981\u3002 \u2022 \u76ee\u7684\uff1a\u53ef\u5206\u70ba\u4e00\u822c\u6027\u548c\u67e5\u8a62\u5c0e\u5411\uff0c\u4e00\u822c\u6027\u7684\u6458\u8981\u4e3b\u8981\u5c08\u6ce8\u5728\u6587\u4ef6\u4e2d\u7684\u4e3b\u8981\u91cd\u9ede\uff1b\u800c \u67e5\u8a62\u5c0e\u5411\u5247\u6703\u6839\u64da\u67e5\u8a62\u5b57\u4e32\u6c7a\u5b9a\u5176\u6458\u8981\u5167\u5bb9\uff0c\u800c\u67e5\u8a62\u5c0e\u5411\u7684\u6458\u8981\u901a\u5e38\u6703\u8207\u591a\u6587\u4ef6\u6458 \u8981\u540c\u6642\u51fa\u73fe\u3002 \u2022 \u529f\u80fd\uff1a\u5927\u591a\u6578\u6458\u8981\u662f\u8cc7\u8a0a\u6027\u7684\uff0c\u4e3b\u8981\u5c08\u6ce8\u5728\u7522\u751f\u539f\u6587\u4ef6\u7684\u7c21\u77ed\u7248\u672c\uff0c\u80fd\u4fdd\u7559\u5176\u91cd\u8981 \u8cc7\u8a0a\uff1b\u800c\u8f03\u5c11\u6578\u70ba\u6307\u793a\u6027\u548c\u6279\u5224\u6027\uff0c\u6b64\u4e8c\u8005\u7d66\u4e88\u7684\u6458\u8981\u7686\u4e0d\u5305\u542b\u539f\u6587\u7684\u91cd\u8981\u5167\u5bb9\uff0c \u524d\u8005\u6703\u6307\u51fa\u6587\u4ef6\u7684\u984c\u76ee\u6216\u9818\u57df\u7b49\u8a6e\u91cb\u8cc7\u6599(Metadata) \uff1b\u800c\u5f8c\u8005\u5247\u662f\u6703\u5224\u65b7\u6574\u4efd\u6587\u4ef6 \u662f\u6b63\u9762\u7684\u9084\u662f\u8ca0\u9762\u7684\u3002 (Cheng & Lapata, 2016) \u5c07\u7bc0\u9304\u5f0f\u6458\u8981\u4efb\u52d9\u8996\u70ba\u4e00\u7a2e\u5e8f\u5217\u6a19\u8a18\u53ca\u6392\u5e8f\u554f\u984c\uff0c\u5176\u65b9\u6cd5\u4e3b \u8207\u8a72\u53e5\u7684\u76f8\u95dc\u6027\u4e32\u63a5\uff0c\u540c\u6642\u653e\u5165\u4e00\u4e9b\u8207\u8a72\u53e5\u76f8\u95dc\u7684\u4eba\u5de5\u7279\u5fb5(\u8a9e\u53e5\u9577\u5ea6\u3001\u4f4d\u7f6e\u7b49)\uff0c\u4f7f \u589e\u52a0\u6458\u8981\u7684\u8cc7\u8a0a\u6027\u3002 \uf06e\u91cd\u5beb\u5f0f\u6458\u8981(Summarization by abstraction) \u8981\u7684\u7279\u8272\u5728\u65bc\u4f7f\u7528\u4e00\u968e\u5c64\u5f0f\u7de8\u78bc\u5668\u548c\u542b\u6709\u6ce8\u610f\u529b\u6a5f\u5236(Attention Mechanism)\u7684\u89e3\u78bc\u5668\u3002\u968e \u5f97\u5206\u985e\u6642\u80fd\u4f7f\u7528\u66f4\u5177\u8a9e\u610f\u7684\u8a9e\u53e5\u5411\u91cf\u3002\u6b64\u65b9\u6cd5\u4e4b\u67b6\u69cb\u76f8\u7576\u5927\uff0c\u4f46\u5f97\u5230\u4e4b\u6458\u8981\u6548\u679c\u4e5f\u76f8\u7576 \uf06e\u8a9e\u53e5\u58d3\u7e2e\u5f0f\u6458\u8981(Summarization by sentence compression) \u5b9a\u4e4b\u6458\u8981\u6bd4\u4f8b(Summarization ratio)\uff0c\u5f9e\u539f\u6587\u4ef6\u4e2d\u9078\u51fa\u91cd\u8981\u6027\u9ad8\u7684\u8a9e\u53e5\u3001\u6bb5\u843d\u6216\u7ae0\u7bc0 \u7c21\u55ae\u7d44\u5408\u6210\u6458\u8981\u3002\u6458\u8981\u6bd4\u4f8b\u662f\u6307\u6458\u8981\u9577\u5ea6\u8207\u539f\u6587\u4ef6\u9577\u5ea6\u7684\u6bd4\u4f8b\uff0c\u4e00\u822c\u6211\u5011\u901a\u5e38\u9078\u7528 
10%\u7684\u6458\u8981\u6bd4\u4f8b\uff0c\u4e5f\u5c31\u662f\u6458\u8981\u9577\u5ea6\u70ba\u539f\u6587\u4ef6\u9577\u5ea6\u7684 10%\u3002\u800c\u91cd\u5beb\u5f0f\u6458\u8981\u4e3b\u8981\u6703\u4f9d\u539f \u6587\u4ef6\u4e2d\u7684\u5b8c\u6574\u6982\u5ff5\uff0c\u91cd\u65b0\u64b0\u5beb\u51fa\u6458\u8981\uff0c\u56e0\u6b64\u6458\u8981\u5167\u5bb9\u4e2d\u53ef\u80fd\u9084\u6709\u975e\u539f\u6587\u4ef6\u4e2d\u6240\u4f7f\u7528 \u4f46\u4e0d\u5f71\u97ff\u5176\u8a9e\u610f\u7684\u8a5e\u8a9e\u3002\u7d9c\u4e0a\u6240\u8ff0\uff0c\u6211\u5011\u53ef\u4ee5(Torres-Moreno, 2014)\u4e4b\u793a\u4f8b\u7c21\u55ae\u63cf\u8ff0 \u7bc0\u9304\u5f0f\u6458\u8981\u8207\u91cd\u5beb\u5f0f\u6458\u8981\u7684\u512a\u7f3a\uff0c\u4ee5\u5b78\u7fd2\u8005\u70ba\u4f8b\uff0c\u4e00\u500b\u597d\u7684\u5b78\u7fd2\u8005\u5728\u64b0\u5beb\u6458\u8981\u6642\u6703 \u5148\u95b1\u8b80\u904e\u6574\u7bc7\u6587\u7ae0\uff0c\u518d\u4ee5\u81ea\u5df1\u7684\u65b9\u5f0f\u64b0\u5beb\uff0c\u800c\u5f97\u4e4b\u6458\u8981\u5167\u5bb9\u80fd\u524d\u5f8c\u901a\u9806\u4e14\u7b26\u5408\u6587\u7ae0 \u65e8\u610f\uff1b\u800c\u4e0d\u597d\u7684\u5b78\u7fd2\u8005\u5728\u64b0\u5beb\u6458\u8981\u6642\uff0c\u53ea\u6703\u5927\u7565\u770b\u904e\u6587\u7ae0\uff0c\u4e26\u4e14\u6311\u9078\u51fa\u300c\u53ef\u80fd\u300d\u91cd \u8981\u7684\u8a9e\u53e5\uff0c\u7d44\u5408\u5728\u4e00\u8d77\u4f5c\u70ba\u6458\u8981\u3002\u4f46\u6b64\u65b9\u6cd5\u5f97\u5230\u4e4b\u6458\u8981\u53ef\u80fd\u5305\u542b\u67d0\u4e9b\u4e0d\u76f8\u95dc\u7684\u5167\u5bb9\uff0c \u4e14\u8a9e\u53e5\u9593\u7684\u929c\u63a5\u53ef\u80fd\u6703\u6709\u5167\u5bb9\u4e0d\u9023\u8cab\u6216\u4e0d\u901a\u9806\u7684\u60c5\u6cc1\u767c\u751f\u3002\u9664\u4e86\u8f03\u5e38\u898b\u7684\u7bc0\u9304\u5f0f\u6458 \u8981\u53ca\u91cd\u5beb\u5f0f\u6458\u8981\u5916\uff0c\u8a9e\u53e5\u58d3\u7e2e\u5f0f\u6458\u8981\u6bd4\u8f03\u7279\u5225\u4e00\u9ede\uff0c\u4e3b\u8981\u7528\u65bc\u5c07\u8a9e\u53e5\u9577\u5ea6\u7e2e\u6e1b\uff0c\u6b64 \u65b9\u6cd5\u53ef\u8207\u7bc0\u9304\u5f0f\u6458\u8981\u5171\u540c\u4f7f\u7528\uff0c\u800c\u76ee\u524d\u901a\u5e38\u6703\u5c07\u6b64\u65b9\u6cd5\u6b78\u985e\u70ba\u91cd\u5beb\u5f0f\u6458\u8981\u7684\u4e00\u90e8 \u5206\u3002 \u672c\u8ad6\u6587\u4e3b\u8981\u5c08\u6ce8\u65bc\u4e00\u822c\u6027\u55ae\u6587\u4ef6\u7bc0\u9304\u5f0f\u6458\u8981\u7684\u7814\u7a76\u3002\u6b64\u5916\u6458\u8981\u4ea6\u53ef\u91dd\u5c0d\u6587\u4ef6\u5f62\u5f0f\u5206 \u985e\uff0c\u5982\u5e38\u898b\u7684\u6587\u5b57\u6587\u4ef6 (Text documents) \u53ca\u5305\u542b\u8a9e\u97f3\u8cc7\u8a0a\u7684\u8a9e\u97f3\u6587\u4ef6 (Spoken documents) \uff0c \u91dd\u5c0d\u4e0d\u540c\u6587\u4ef6\u5f62\u5f0f\uff0c\u6240\u4f7f\u7528\u7684\u6458\u8981\u6a21\u578b\u7d30\u7bc0\u4e5f\u61c9\u6709\u6240\u8b8a\u5316\u3002\u6587\u5b57\u6587\u4ef6\u6458\u8981\u4fc2\u6307\u4e00\u822c\u4ee5\u6587 \u5b57\u5167\u5bb9\u70ba\u4e3b\u7684\u6587\u4ef6\u7522\u751f\u4e4b\u6458\u8981\uff0c\u5927\u90e8\u5206\u7684\u6458\u8981\u7814\u7a76\u90fd\u5c6c\u65bc\u6587\u5b57\u6587\u4ef6\u6458\u8981\uff1b\u800c\u8a9e\u97f3\u6587\u4ef6\u6458 \u8981\u5247\u662f\u4f7f\u7528\u542b\u6709\u8a9e\u97f3\u8cc7\u8a0a\u7684\u6587\u4ef6\uff0c\u901a\u5e38\u662f\u900f\u904e\u8a9e\u97f3\u8fa8\u8b58\u5f8c\u5f97\u5230\u7684\u8f49\u5beb\u6587\u4ef6\uff0c\u5176\u4e2d\u53ef\u80fd\u6703 \u542b\u6709\u4e00\u4e9b\u8a9e\u97f3\u8fa8\u8b58\u7522\u751f\u4e4b\u932f\u8aa4\uff0c\u4ee5\u53ca\u53e3\u8a9e\u4e0a\u7121\u610f\u7fa9\u7684\u8cc7\u8a0a\u3002\u56e0\u6b64\uff0c\u8a9e\u97f3\u6587\u4ef6\u6458\u8981\u6703\u6bd4\u6587 
\u5b57\u6587\u4ef6\u6458\u8981\u66f4\u70ba\u56f0\u96e3\uff0c\u53cd\u4e4b\uff0c\u8a9e\u97f3\u6587\u4ef6\u5305\u542b\u8a9e\u97f3\u8cc7\u8a0a\uff0c\u53ef\u4ee5\u63d0\u4f9b\u6458\u8981\u65b9\u6cd5\u66f4\u591a\u6709\u610f\u7fa9\u7684 \u8cc7\u8a0a\uff0c\u80fd\u6709\u6548\u5730\u62b5\u92b7\u5176\u8fa8\u8b58\u932f\u8aa4\u3002 \u6b64\u5916\uff0c\u6709\u9452\u65bc\u6df1\u5c64\u5b78\u7fd2\u7684\u84ec\u52c3\u767c\u5c55\uff0c\u73fe\u4eca\u7684\u6280\u8853\u5927\u591a\u662f\u4ee5\u7aef\u5c0d\u7aef\u7684\u6df1\u5c64\u985e\u795e\u7d93\u7db2\u8def \u67b6\u69cb\u70ba\u4e3b\u3002\u6df1\u5c64\u5b78\u7fd2\u4e3b\u8981\u662f\u6a21\u64ec\u4eba\u985e\u4e4b\u5b78\u7fd2\u6a21\u5f0f\uff0c\u5c07\u6df1\u5c64\u985e\u795e\u7d93\u7db2\u8def\u67b6\u69cb\u8996\u70ba\u4eba\u985e\u5927\u8166 \u795e\u7d93\u7cfb\u7d71\uff0c\u4e26\u8f14\u4ee5\u5927\u91cf\u8cc7\u6599\u9032\u884c\u8a13\u7df4\uff0c\u4f7f\u5176\u80fd\u5920\u5b78\u7fd2\u5982\u4f55\u89e3\u6c7a\u8a72\u7814\u7a76\u554f\u984c\u3002\u5176\u67b6\u69cb\u4e2d\u4e3b \u8981\u5b78\u7fd2\u7684\u662f\u8f38\u5165\u8207\u8f38\u51fa\u4e4b\u9593\u7684\u95dc\u4fc2\uff0c\u85c9\u7531\u5c07\u4e0d\u540c\u7684\u8f38\u5165\u6a23\u672c\u6295\u5f71\u81f3\u76f8\u540c\u7684\u7a7a\u9593\u4e2d\uff0c\u6211\u5011 \u5373\u53ef\u5728\u8a72\u7a7a\u9593\u4e2d\u5c07\u6bcf\u500b\u8f38\u5165\u6a23\u672c\u5c0d\u61c9\u81f3\u6b63\u78ba\u7684\u8f38\u51fa\uff0c\u9032\u800c\u5f97\u5230\u6b63\u78ba\u7684\u7d50\u679c\u3002\u56e0\u6b64\u5f8c\u7e8c\u4e4b \u6587\u737b\u63a2\u8a0e\u5c07\u4ee5\u7aef\u5c0d\u7aef\u4e4b\u6df1\u5c64\u5b78\u7fd2\u65b9\u6cd5\u70ba\u4e3b\u3002 2.1 \u7bc0\u9304\u5f0f\u6458\u8981 (Extractive Summarization) \u5728\u7bc0\u9304\u5f0f\u6587\u4ef6\u6458\u8981\u4efb\u52d9\u4e2d\uff0c\u6211\u5011\u901a\u5e38\u53ef\u4ee5\u5c07\u5176\u8996\u70ba\u5206\u985e\u554f\u984c\uff0c\u56e0\u70ba\u6211\u5011\u8981\u5224\u65b7\u6587\u4ef6\u4e2d\u7684 \u8a9e\u53e5\u300c\u662f\u5426\u300d\u70ba\u6458\u8981\u3002\u800c\u5206\u985e\u554f\u984c\u5728\u6df1\u5c64\u5b78\u7fd2\u6280\u8853\u4e2d\u662f\u6700\u57fa\u672c\u7684\u554f\u984c\uff0c\u4f46\u662f\u7bc0\u9304\u5f0f\u6458\u8981 \u662f\u53c3\u8003(Kim, 2014)\u7684\u65b9\u6cd5\uff0c\u4f7f\u7528 CNN \u8a08\u7b97\u8a9e\u53e5\u7684\u5411\u91cf\u8868\u793a\uff1b\u7b2c\u4e8c\u5c64\u70ba\u905e\u8ff4\u5f0f\u985e\u795e\u7d93\u7db2\u8def (Recurrent Neural Networks, RNNs)\uff0c\u5c07\u8a9e\u53e5\u5411\u91cf\u505a\u70ba\u6bcf\u500b\u6642\u9593\u9ede\u7684\u8f38\u5165\uff0c\u800c\u5c07\u6700\u5f8c\u4e00\u500b \u6642\u9593\u9ede\u7684\u8f38\u51fa\u8996\u70ba\u6587\u4ef6\u7684\u5411\u91cf\u8868\u793a\u3002\u6b64\u4f5c\u6cd5\u5c0d\u65bc\u8f03\u9577\u7684\u6587\u7ae0\u800c\u8a00\u662f\u76f8\u7576\u6709\u6548\u7684\uff0c\u56e0\u70ba\u6587 \u7ae0\u904e\u9577\u6642\uff0c\u82e5\u55ae\u4f7f\u7528\u4e00\u500b RNN\uff0c\u5247\u6709\u53ef\u80fd\u6703\u907a\u5931\u6389\u8a31\u591a\u91cd\u8981\u7684\u8cc7\u8a0a\u3002\u6700\u5f8c\u900f\u904e\u53e6\u4e00\u500b RNN \u5c0d\u6bcf\u500b\u8a9e\u53e5\u9032\u884c\u6a19\u8a18\uff0c\u4e26\u4f7f\u7528\u9810\u6e2c\u51fa\u7684\u5206\u6578\u9032\u884c\u6392\u5e8f\uff0c\u9032\u800c\u5f97\u5230\u6700\u5f8c\u7684\u6458\u8981\u6210\u679c\u3002 \u6b64\u5916\uff0c(Cheng & Lapata, 2016)\u9084\u5617\u8a66\u7528\u7bc0\u9304\u5f0f\u7684\u65b9\u6cd5\u6a21\u64ec\u51fa\u91cd\u5beb\u5f0f\u6458\u8981\uff0c\u8207\u524d\u8ff0\u6a19\u8a18\u8a9e \u53e5\u7684\u4e0d\u540c\uff0c\u4e3b\u8981\u662f\u5f9e\u539f\u6587\u4ef6\u4e2d\u6311\u9078\u55ae\u8a5e\u5f8c\u7d44\u5408\u6210\u6458\u8981\u53e5\uff0c\u800c\u751f\u6210\u4e4b\u6458\u8981\u76f8\u7576\u4e0d\u7b26\u5408\u6587\u6cd5 
\u6027\u4e5f\u4e0d\u901a\u9806\uff0c\u4e0d\u904e\u95dc\u9375\u8a5e\u5f59\u57fa\u672c\u4e0a\u90fd\u80fd\u6db5\u84cb\u3002\u4ee5\u6b64\u5f97\u77e5\uff0c(Cheng & Lapata)\u7684\u65b9\u6cd5\u5728\u8a9e\u8a00 \u7406\u89e3(Language Understanding)\u53ca\u8cc7\u8a0a\u64f7\u53d6(Information Extraction)\u6709\u4e0d\u932f\u7684\u6210\u6548\u3002 \u9664\u4e86(Cheng & Lapata, 2016)\u540c\u6642\u9032\u884c\u7bc0\u9304\u5f0f\u6458\u8981\u8207\u91cd\u5beb\u5f0f\u6458\u8981\u7684\u7814\u7a76\u5916\uff0c(Nallapati et al., 2017)\u63d0\u51fa\u7684 SummaRuNNer \u4ea6\u5617\u8a66\u751f\u6210\u91cd\u5beb\u5f0f\u6458\u8981\u3002\u8207(Cheng & Lapata, 2016)\u4e0d \u540c\u4e4b\u8655\u5728\u65bc SummaRuNNer \u5728\u7bc0\u9304\u5f0f\u6458\u8981\u4efb\u52d9\u4e0a\uff0c\u4e26\u975e\u4f7f\u7528\u7de8\u78bc-\u89e3\u78bc\u5668\u67b6\u69cb\uff0c\u50c5\u662f\u55ae\u7d14 \u5730\u5efa\u7acb\u5169\u5c64\u96d9\u5411 RNN \u5f8c\u4fbf\u5224\u65b7\u8a9e\u53e5\u6a19\u8a18\u70ba\u4f55\u3002\u76f8\u4f3c\u4e4b\u8655\u5728\u65bc\u5176 RNN \u4e5f\u662f\u968e\u5c64\u5f0f\u7684\u67b6\u69cb\uff0c \u7b2c\u4e00\u5c64\u8f38\u5165\u70ba\u8a5e\u5f59\u5411\u91cf\uff0c\u7b2c\u4e8c\u5c64\u5247\u662f\u7b2c\u4e00\u5c64\u8f38\u51fa\u6240\u5f97\u4e4b\u8a9e\u53e5\u5411\u91cf\u3002\u6b64\u7a2e\u4f5c\u6cd5\u4e2d\u4f7f\u7528\u7684\u53c3 \u6578\u91cf\u8f03\u5c11\uff0c\u56e0\u6b64\u6536\u6582\u901f\u5ea6\u4e5f\u8f03\u70ba\u5feb\u901f\u3002\u9664\u4e86\u7bc0\u9304\u5f0f\u6458\u8981\u4efb\u52d9\u5916\uff0c(Nallapati et al., 2017)\u4e5f \u5617\u8a66\u5c07\u6700\u5f8c\u4e00\u5c64\u9810\u6e2c\u6a19\u8a18\uff0c\u6539\u70ba\u4e00\u500b\u7c21\u6613\u89e3\u78bc\u5668\u7528\u65bc\u91cd\u5beb\u5f0f\u6458\u8981\u4efb\u52d9\u3002\u6b64\u5916\uff0c\u7531\u65bc\u6458\u8981 \u4efb\u52d9\u4f7f\u7528\u4e4b\u8cc7\u6599\u96c6\u4e00\u822c\u662f\u6c92\u6709\u6458\u8981\u6a19\u8a18\u7684\uff0c(Nallapati et al., 2017)\u63d0\u51fa\u4e00\u7a2e\u8caa\u5a6a\u6cd5\u5c0d\u6bcf\u500b \u8a9e\u53e5\u6a19\u8a18\u6458\u8981\uff0c\u9019\u500b\u65b9\u6cd5\u80fd\u5920\u627e\u5230\u8f03\u597d\u7684\u6458\u8981\u7d44\u5408\u800c\u975e\u53ea\u662f\u627e\u55ae\u7368\u6bd4\u5c0d\u6bcf\u53e5\u7684\u91cd\u8981\u6027\uff0c \u4ea6\u6709\u8a31\u591a\u5b78\u8005\u5617\u8a66\u5c07\u6b64\u65b9\u6cd5\u7528\u65bc\u81ea\u8eab\u7684\u4efb\u52d9\u4e0a\u3002 \u96a8\u8457\u8fd1\u5e7e\u5e74\u5f37\u5316\u5b78\u7fd2(Reinforcement Learning)\u7684\u71b1\u6f6e\uff0c\u4ea6\u6709\u5b78\u8005\u5c07\u5f37\u5316\u5b78\u7fd2\u61c9\u7528\u65bc \u7bc0\u9304\u5f0f\u6458\u8981\u4efb\u52d9\u4e0a\uff0c(Narayan, Cohen & Lapata, 2018a)\u70ba\u4e86\u89e3\u6c7a\u524d\u8ff0\u4e4b\u7bc0\u9304\u5f0f\u6458\u8981\u6c92\u6709\u6b63 \u78ba\u6458\u8981\u6a19\u8a18\u7684\u60c5\u6cc1\uff0c\u56e0\u6b64\u52a0\u5165\u5f37\u5316\u5b78\u7fd2\u3002\u5176\u4e3b\u8981\u67b6\u69cb\u662f\u6539\u826f\u81ea(Cheng & Lapata, 2016)\uff0c \u4e0d\u540c\u4e4b\u8655\u5728\u65bc\u5176\u5728\u7b2c\u4e8c\u5c64\u7de8\u78bc\u5668\u7684\u8a9e\u53e5\u8f38\u5165\u662f\u4ee5\u5012\u5e8f\u65b9\u5f0f\u8f38\u5165\uff0c\u56e0\u70ba\u5927\u591a\u6578\u6587\u4ef6\u901a\u5e38\u6703 \u5c07\u4e3b\u65e8\u7f6e\u65bc\u8f03\u524d\u9762\u7684\u6bb5\u843d\uff0c\u518d\u52a0\u4e0a RNN \u6bd4\u8f03\u5bb9\u6613\u8a18\u5f97\u5f8c\u9762\u6642\u9593\u9ede\u8cc7\u8a0a\u7684\u7279\u6027\uff0c\u6b64\u65b9\u5f0f \u80fd\u5920\u5c07\u91cd\u8981\u8cc7\u8a0a\u66f4\u6e05\u695a\u8a18\u5f97\u3002(Narayan et al., 2018a)\u6240\u4f7f\u7528\u7684\u5f37\u5316\u5b78\u7fd2\u65b9\u6cd5\uff0c\u662f\u6700\u57fa\u790e\u7684 \u7b56\u7565\u68af\u5ea6(Policy Gradient) 
\uff0c\u4e5f\u5c31\u662f\u900f\u904e\u8a08\u7b97\u5f97\u4e4b\u734e\u52f5(Reward)\u5206\u6578\u8207\u6a21\u578b\u8a13\u7df4\u68af\u5ea6\u52a0\u6210\uff0c \u4f7f\u5176\u80fd\u5920\u5f80\u6211\u5011\u671f\u5f85\u7684\u65b9\u5411\u9032\u884c\u8a13\u7df4\u3002(Narayan et al., 2018a)\u6240\u4f7f\u7528\u7684\u734e\u52f5\u5206\u6578\u662f\u4f7f\u7528\u9810 \u6e2c\u6458\u8981\u8207\u6a19\u6e96\u6458\u8981\u7684\u8a55\u4f30\u5206\u6578\uff0c\u800c\u6b64\u65b9\u6cd5\u8b93\u6a21\u578b\u6536\u6582\u901f\u5ea6\u589e\u52a0\uff0c\u540c\u6642\u4e5f\u63d0\u5347\u6e96\u78ba\u5ea6\uff0c\u662f \u4e00\u9805\u8df3\u8e8d\u6027\u5730\u6210\u9577\u3002 \u5f0f\uff0c\u6587\u4ef6\u4e3b\u65e8\u53ef\u80fd\u5206\u6563\u65bc\u6587\u4ef6\u7684\u4e0d\u540c\u90e8\u5206\uff0c\u9664\u53bb\u6587\u4ef6\u4e3b\u65e8\u7684\u6bb5\u843d\uff0c\u6587\u4ef6\u7684\u5176\u4ed6\u90e8\u5206\u61c9\u70ba \u8fa8\u5225\u53ca\u6392\u5e8f\u6458\u8981\u53e5\u3002 \u8981\u8a9e\u53e5\u7684\u7a0b\u5ea6\uff0c\u610f\u5373\u6a21\u578b\u6240\u5f97\u4e4b\u6587\u4ef6\u5411\u91cf\u8868\u793a\u61c9\u5b8c\u6574\u6db5\u84cb\u6587\u4ef6\u4e3b\u65e8\u3002\u6839\u64da\u4e0d\u540c\u7684\u64b0\u5beb\u65b9 \u6703\u5c07\u8a9e\u53e5\u8868\u793a\u53ca\u6587\u4ef6\u8868\u793a\u7686\u653e\u7f6e\u65bc\u8a9e\u53e5\u9078\u53d6\u5668\u4e2d\uff0c\u4f7f\u5176\u80fd\u5920\u6839\u64da\u6587\u4ef6\u8868\u793a\u53ca\u8a9e\u53e5\u8868\u793a\uff0c \u7136\u800c\uff0c\u5c0d\u65bc\u7bc0\u9304\u5f0f\u6458\u8981\u4efb\u52d9\u4f86\u8aaa\uff0c\u6a21\u578b\u5c0d\u6587\u4ef6\u7684\u7406\u89e3\u61c9\u8a72\u8981\u80fd\u9054\u5230\u652f\u6490\u5f8c\u7e8c\u5206\u985e\u6458 \u4ee5\u6b64\u6211\u5011\u53ef\u4ee5\u63a8\u8ad6\uff0c\u985e\u795e\u7d93\u7db2\u8def\u7684\u5b78\u7fd2\u4ecd\u9700\u4eba\u5de5\u7279\u5fb5\u8f14\u52a9\u65b9\u53ef\u66f4\u52a0\u63d0\u5347\u6210\u6548\u3002 \u55ae\u55ae\u53ea\u8b93\u985e\u795e\u7d93\u7db2\u8def\u67b6\u69cb\u81ea\u52d5\u5b78\u7fd2\u8a9e\u53e5\u6216\u6587\u4ef6\u5411\u91cf\u8868\u793a\u7684\u6548\u679c\u4ecd\u6709\u9650\uff0c\u82e5\u80fd\u52a0\u5165\u4e00 \u4e9b\u76f8\u95dc\u7684\u984d\u5916\u8cc7\u8a0a\u8f14\u52a9\u8a13\u7df4\uff0c\u53ef\u4ee5\u8b93\u6211\u5011\u7684\u65b9\u6cd5\u66f4\u6df1\u5165\u5730\u5b78\u7fd2\u5230\u6587\u4ef6\u91cd\u8981\u8cc7\u8a0a\u3002(Narayan et al., 2018b)\u63d0\u51fa\u5728\u6458\u8981\u65b9\u6cd5\u4e2d\u53c3\u8003\u6587\u4ef6\u7684\u6a19\u984c\u8cc7\u8a0a\uff0c\u53ef\u4ee5\u8b93\u6211\u5011\u7684\u65b9\u6cd5\u66f4\u5feb\u901f\u5730\u627e\u5230\u6587 \u4ef6\u7684\u4e3b\u65e8\uff0c\u800c\u4ee5\u6b64\u5f97\u5230\u7684\u6587\u4ef6\u5411\u91cf\u8868\u793a\u4e5f\u8f03\u80fd\u6db5\u84cb\u6587\u4ef6\u4e3b\u65e8\uff0c\u56e0\u800c\u80fd\u63d0\u5347\u6458\u8981\u7684\u6210\u6548\u3002 \u800c(Narayan et al., 2018b)\u4e3b\u8981\u7528\u7684\u57fa\u672c\u67b6\u69cb\u662f\u7531(Narayan et al., 2018a)\u8b8a\u5316\u800c\u6210\uff0c\u5dee\u7570\u5728 \u65bc\u5176\u5c07\u984d\u5916\u8cc7\u8a0a\u5411\u91cf\u8207\u8a9e\u53e5\u5411\u91cf\u5171\u540c\u7528\u65bc\u5224\u65b7\u662f\u5426\u70ba\u6458\u8981\u3002\u6b64\u65b9\u6cd5\u66f4\u662f\u9a57\u8b49\u985e\u795e\u7d93\u7db2\u8def \u67b6\u69cb\u6709\u984d\u5916\u8cc7\u8a0a\u8f14\u52a9\u80fd\u5b78\u7fd2\u66f4\u597d\u3002 2.2 \u91cd\u5beb\u5f0f\u6458\u8981 (Abstractive Summarization) (Rush et al., 2015) \u662f\u6700\u65e9\u5c07\u985e\u795e\u7d93\u7db2\u8def\u67b6\u69cb\u61c9\u7528\u65bc\u91cd\u5beb\u5f0f\u6458\u8981\u7684\u7814\u7a76\uff0c\u5176\u4e3b\u8981\u7684\u67b6\u69cb\u662f \u6539\u826f\u81f3 (Bahdanau et al., 2014) \u63d0\u51fa\u7684\u7de8\u78bc\u89e3\u78bc\u5668 (Encoder-Decoder) 
\u8207\u6ce8\u610f\u529b\u6a5f\u5236\uff0c\u4ea6 \u7a31\u4e4b\u70ba\u5e8f\u5217\u5c0d\u5e8f\u5217\u6a21\u578b\uff0c\u4e26\u61c9\u7528\u65bc\u91cd\u5beb\u5f0f\u6458\u8981\u4efb\u52d9\u3002\u6ce8\u610f\u529b\u6a5f\u5236\u80fd\u8b93\u8f38\u5165\u6587\u4ef6\u5167\u5bb9\u8207\u8f38 \u51fa\u6458\u8981\u4e2d\u7684\u6587\u5b57\u4f5c\u4e00\u500b\u5c0d\u61c9\uff0c\u80fd\u627e\u5230\u6587\u4ef6\u8207\u6458\u8981\u4e2d\u8a5e\u5f59\u9593\u7684\u95dc\u4fc2\u3002(Rush et al., 2015) \u7684 \u67b6\u69cb\u8207 (Bahdanau et al., 2014) \u4e0d\u540c\u4e4b\u8655\u5728\u65bc\u5176\u4e26\u975e\u4f7f\u7528\u905e\u8ff4\u5f0f\u985e\u795e\u7d93\u7db2\u8def\u4f5c\u70ba\u7de8\u78bc\u5668 \u8207\u89e3\u78bc\u5668\uff0c\u800c\u662f\u4f7f\u7528\u6700\u57fa\u672c\u7684\u524d\u5411\u5f0f\u985e\u795e\u7d93\u7db2\u8def (Feed-forward Neural Networks) \u7d50\u5408\u6ce8 \u610f\u529b\u6a5f\u5236\u4f5c\u70ba\u5176\u7de8\u78bc\u5668\uff0c\u800c\u89e3\u78bc\u5668\u5247\u662f\u57fa\u65bc(Bengio, Ducharme, Vincent & Jauvin, 2003) \u63d0\u51fa\u7684 NNLM \u8b8a\u5316\u3002\u6b64\u65b9\u6cd5\u5728\u8a9e\u53e5\u6458\u8981 (Sentence Summarization) \u4efb\u52d9\u4e0a\u5f97\u5230\u76f8\u7576\u512a\u7570 \u7684\u6210\u6548\uff0c\u56e0\u6b64\u4e5f\u8b49\u5be6\u985e\u795e\u7d93\u7db2\u8def\u80fd\u5920\u9069\u7528\u65bc\u91cd\u5beb\u5f0f\u6458\u8981\u4efb\u52d9\u4e0a\u3002 \u96a8\u8457\u6df1\u5c64\u5b78\u7fd2\u7684\u5feb\u901f\u767c\u5c55\uff0c\u905e\u8ff4\u5f0f\u985e\u795e\u7d93\u7db2\u8def\u5728\u5e8f\u5217\u76f8\u95dc\u4efb\u52d9\u4e0a\u7684\u6210\u529f\u4ea6\u6f38\u6f38\u5ee3\u70ba \u4eba\u77e5\uff0c\u56e0\u6b64(Chopra et al., 2016) \u5247\u63d0\u51fa\u4e00\u500b\u905e\u8ff4\u5f0f\u985e\u795e\u7d93\u7db2\u8def\u7684\u7de8\u78bc\u89e3\u78bc\u5668\u67b6\u69cb\uff0c\u61c9\u7528 \u65bc\u8a9e\u53e5\u6458\u8981\u4efb\u52d9\u4e0a\u3002\u6b64\u65b9\u6cd5\u4e3b\u8981\u662f (Rush et al., 2015) \u7684\u5ef6\u4f38\uff0c\u5176\u7de8\u78bc\u5668\u4f7f\u7528\u647a\u7a4d\u5f0f\u985e\u795e \u7d93\u7db2\u8def\uff0c\u800c\u89e3\u78bc\u5668\u5247\u4f7f\u7528\u9577\u77ed\u671f\u8a18\u61b6 (Long Short-Term Memory, LSTM) (Hochreiter & Schmidhuber, 1997) \u55ae\u5143\u4f5c\u70ba\u905e\u8ff4\u5f0f\u985e\u795e\u7d93\u7db2\u8def\u7684\u57fa\u672c\u55ae\u5143\u3002LSTM \u662f \u905e\u8ff4\u5f0f\u985e\u795e\u7d93\u7db2 \u8def\u6f14\u8b8a\u7684\u67b6\u69cb\uff0c\u56e0\u5176\u5177\u6709\u4e09\u500b\u9598\u9580: \u8f38\u5165\u9598 (input gate)\u3001\u907a\u5fd8\u9598 (forget gate) \u53ca\u8f38\u51fa\u9598 (output gate)\uff0c\u4ee5\u53ca\u4e00\u500b\u8a18\u61b6\u55ae\u5143 (memory cell)\uff0c\u6240\u4ee5\u53ef\u4ee5\u6539\u5584\u6d88\u5931\u7684\u68af\u5ea6(Vanishing Gradient)\u554f\u984c\uff0c\u540c\u6642\u900f\u904e\u4e0d\u65b7\u66f4\u65b0\u8a18\u61b6\u55ae\u5143\uff0c\u80fd\u4fdd\u7559\u66f4\u591a\u91cd\u8981\u8cc7\u8a0a\uff0c\u4e0d\u6703\u96a8\u8457\u6642\u9593\u592a\u9577 \u800c\u907a\u5fd8\u4ee5\u524d\u7684\u8cc7\u8a0a\u3002 \u984c\u3002\u9664\u4e86\u57fa\u672c\u67b6\u69cb\u5916\uff0c\u9084\u63d0\u51fa\u4e09\u7a2e\u6539\u826f\u7684\u7248\u672c\uff0c\u7b2c\u4e00\u7a2e\u662f\u5728\u8f38\u5165\u6642\u52a0\u5165\u4e00\u4e9b\u984d\u5916\u7684\u7279\u5fb5\uff0c \u5982\uff1a\u8a5e\u6027\u3001\u8a5e\u983b\u7b49\uff1b\u7b2c\u4e8c\u7a2e\u5247\u662f\u5728\u89e3\u78bc\u5668\u751f\u6210\u8a5e\u5f59\u4e4b\u524d\uff0c\u52a0\u5165\u4e00\u500b\u63a7\u5236\u5668\uff0c\u63a7\u5236\u89e3\u78bc\u5668 \u662f\u5426\u8981\u751f\u6210\u65b0\u8a5e\u6216\u5f9e\u8f38\u5165\u6587\u4ef6\u8907\u88fd\uff0c\u6b64\u4e00\u6a5f\u5236\u662f\u53c3\u8003(Vinyals, Fortunato & Jaitly, 2015)\u63d0 
\u51fa\u7684 Pointer Network \u67b6\u69cb\uff0c\u7576\u6587\u4ef6\u4e2d\u6709\u5c08\u6709\u540d\u8a5e\u51fa\u73fe\u6642\uff0c\u4f46\u89e3\u78bc\u5668\u7684\u8a5e\u5178\u4e2d\u53ef\u80fd\u6c92\u6709\u8a72 \u8a5e\u5f59\uff0c\u5c31\u9700\u8981\u5f9e\u8f38\u5165\u8cc7\u6599\u4e2d\u8907\u88fd\u4f7f\u7528\uff1b\u6700\u5f8c\u5247\u662f\u5c07\u7de8\u78bc\u5668\u6539\u6210\u968e\u5c64\u5f0f\u7684\u7de8\u78bc\u5668\uff0c\u4e00\u822c\u7684 \u7de8\u78bc\u5668\u8f38\u5165\u90fd\u662f\u6574\u7bc7\u6587\u7ae0\u7684\u6bcf\u500b\u8a5e\u5f59\uff0c\u4e0d\u8003\u616e\u8a9e\u53e5\u7684\u5206\u754c\uff0c\u800c\u968e\u5c64\u5f0f\u7de8\u78bc\u5668\u7b2c\u4e00\u5c64\u7684\u8f38 \u5165\u4e00\u6a23\u662f\u6574\u7bc7\u6587\u7ae0\u7684\u6bcf\u500b\u8a5e\u5f59\uff0c\u7576\u9047\u5230\u6bcf\u500b\u8a9e\u53e5\u7684\u7d50\u5c3e\u8a5e\u6642\uff0c\u5c31\u6703\u5c07\u6b64\u6642\u7684\u8f38\u51fa\u5411\u91cf\u8996 \u70ba\u8a9e\u53e5\u7684\u5411\u91cf\u8868\u793a\uff0c\u4e26\u4f5c\u70ba\u7b2c\u4e8c\u5c64\u7684\u8f38\u5165\uff0c\u4e5f\u5c31\u662f\u8aaa\uff0c\u7b2c\u4e8c\u5c64\u7684\u8f38\u5165\u662f\u6587\u7ae0\u4e2d\u7684\u8a9e\u53e5\uff0c \u9019\u7a2e\u65b9\u6cd5\u80fd\u5920\u5f97\u5230\u66f4\u7d30\u90e8\u7684\u6587\u4ef6\u8cc7\u8a0a\uff0c\u4e5f\u4f7f\u5f97\u7522\u751f\u4e4b\u6458\u8981\u5167\u5bb9\u8f03\u7b26\u5408\u6587\u7ae0\u4e3b\u65e8\u3002\u96d6\u7136\u5728 (Nallapati et al., 2016) \u5df2\u7d93\u6709\u5617\u8a66\u5c07 Pointer Network \u7684\u60f3\u6cd5\u7d50\u5408\u9032\u6a21\u578b\u4e2d\uff0c\u4f46\u662f\u6b64\u7a2e\u65b9 \u6cd5\u904e\u65bc\u5f37\u786c\uff0c\u56e0\u70ba\u6b64\u63a7\u5236\u5668\u5f97\u5230\u7684\u7d50\u679c\u50c5\u80fd\u4e8c\u9078\u4e00\u3002 \u56e0\u6b64 (See et al., 2017) \u63d0\u51fa\u7684\u67b6\u69cb\u80fd\u6709\u6548\u7684\u89e3\u6c7a\u6b64\u72c0\u6cc1\uff0c\u6b64\u7bc7\u7814\u7a76\u63d0\u51fa\u7684\u65b9\u6cd5\u662f\u4ee5 \u540c\u6642\u9032\u884c\u7522\u751f\u65b0\u8a5e\u8207\u9078\u53d6\u539f\u6709\u8a5e\u5f59\u7684\u52d5\u4f5c\uff0c\u6700\u5f8c\u5229\u7528\u4e00\u6a5f\u7387\u503c\u7c21\u55ae\u7dda\u6027\u7d50\u5408\u5169\u8005\u6240\u5f97\u5230 \u7684\u6a5f\u7387\u5206\u4f48\uff0c\u4ee5\u6b64\u5f97\u5230\u6700\u7d42\u7684\u8a5e\u5178\u6a5f\u7387\u5206\u4f48\uff0c\u8a5e\u5178\u4e2d\u5305\u542b\u89e3\u78bc\u8a5e\u5178\u8207\u8f38\u5165\u6587\u4ef6\u7684\u8a5e\u5f59\u3002 \u6b64\u5916\uff0c(See et al., 2017)\u4ea6\u63d0\u51fa\u4e00\u7a2e Coverage \u6a5f\u5236\uff0c\u6b64\u6a5f\u5236\u4e3b\u8981\u662f\u70ba\u4e86\u89e3\u6c7a\u5728\u8a9e\u8a00\u751f\u6210\u4efb \u52d9\u4e0a\u5bb9\u6613\u51fa\u73fe OOV \u548c\u91cd\u8907\u8a5e\u7684\u554f\u984c\uff0c\u5176\u5728\u6bcf\u500b\u6642\u9593\u9ede\u6703\u5c07\u4ee5\u524d\u6642\u9593\u9ede\u5f97\u5230\u7684\u6ce8\u610f\u529b\u5206 \u4f48\u52a0\u7e3d\u5f8c\u4f5c\u70ba\u4e00 coverage \u5411\u91cf\uff0c\u7dad\u5ea6\u5927\u5c0f\u70ba\u7de8\u78bc\u5668\u7684\u6642\u9593\u9ede\u6578\u91cf\uff0c\u800c\u5f8c\u5728\u7576\u524d\u6642\u9593\u9ede\u6703 \u53c3\u8003\u6b64\u5411\u91cf\u8a08\u7b97\u6ce8\u610f\u529b\u5206\u4f48\uff0c\u540c\u6642\u4e5f\u6703\u5c07\u6b64\u5411\u91cf\u548c\u6ce8\u610f\u529b\u5206\u4f48\u9032\u884c\u6bd4\u8f03\uff0c\u627e\u51fa\u6bcf\u500b\u7dad\u5ea6 \u6700\u5c0f\u503c\u5f8c\u52a0\u7e3d\u4fbf\u5f97\u5230\u4e00 coverage \u640d\u5931\uff0c\u4e4b\u5f8c\u6703\u505a\u70ba\u8a13\u7df4\u6642\u4f7f\u7528\u7684\u61f2\u7f70\u503c\uff0c\u8b93\u6a21\u578b\u53ef\u4ee5\u5c07 
\u91cd\u8907\u8a5e\u7684\u6a5f\u7387\u964d\u4f4e\u3002\u6b64\u7814\u7a76\u6240\u5f97\u5230\u7684\u6458\u8981\u6548\u679c\u6bd4\u4ee5\u5f80\u7684\u91cd\u5beb\u5f0f\u6458\u8981\u512a\u7570\u8a31\u591a\uff0c\u800c\u5be6\u9a57\u7d50 \u679c\u4ea6\u986f\u793a\u6458\u8981\u6210\u679c\u6bd4\u8f03\u504f\u5411\u65bc\u7bc0\u9304\u5f0f\u6458\u8981\uff0c\u56e0\u70ba\u8907\u88fd\u7684\u6bd4\u4f8b\u6bd4\u751f\u6210\u7684\u6bd4\u4f8b\u9ad8\u51fa\u8a31\u591a\uff0c\u8207 \u6b64\u540c\u6642\u6211\u5011\u4e5f\u767c\u73fe\u7bc0\u9304\u5f0f\u6458\u8981\u7684\u6210\u6548\u4ecd\u6bd4\u91cd\u5beb\u5f0f\u6458\u8981\u66f4\u70ba\u986f\u8457\u3002 3. \u968e\u5c64\u5f0f\u985e\u795e\u7d93\u6458\u8981\u6a21\u578b \u6211\u5011\u5c07\u8a9e\u97f3\u6587\u4ef6\u6458\u8981\u554f\u984c\u8996\u70ba\u4e00\u8a9e\u53e5\u5206\u985e\u66a8\u6392\u5e8f\u554f\u984c\uff0c\u4ee5\u671f\u80fd\u4f9d\u6587\u4ef6\u4e3b\u65e8\u9078\u51fa\u53ef\u80fd\u70ba\u6458 \u8981\u7684\u8a9e\u53e5\uff0c\u4e14\u540c\u6642\u80fd\u5b78\u7fd2\u5230\u6458\u8981\u8a9e\u53e5\u9593\u6709\u610f\u7fa9\u7684\u6392\u5e8f\uff0c\u4f7f\u5f97\u6458\u8981\u5167\u5bb9\u80fd\u66f4\u6d41\u66a2\u5730\u8868\u9054\u6587 \u4ef6\u4e3b\u984c\u53ca\u6982\u5ff5\u3002\u56e0\u6b64\uff0c\u6211\u5011\u63d0\u51fa\u4e00\u57fa\u672c\u67b6\u69cb\uff0c\u5176\u4e2d\u5305\u542b\u4e00\u968e\u5c64\u5f0f\u7de8\u78bc\u5668\u53ca\u4e00\u89e3\u78bc\u5668\uff0c\u4ea6 \u7a31\u4e4b\u70ba\u8a9e\u53e5\u9078\u53d6\u5668\u3002\u968e\u5c64\u5f0f\u7de8\u78bc\u5668\u4e2d\u4e3b\u8981\u6709\u5169\u500b\u968e\u5c64\uff0c\u6211\u5011\u6703\u5148\u91dd\u5c0d\u6587\u4ef6\u4e2d\u7684\u8a9e\u53e5\u627e\u5230 \u5c0d\u61c9\u7684\u8a9e\u53e5\u8868\u793a\uff0c\u518d\u5f9e\u8a9e\u53e5\u8868\u793a\u4e2d\u5b78\u7fd2\u5230\u6587\u4ef6\u4e2d\u7684\u91cd\u8981\u6982\u5ff5\uff0c\u4ea6\u53ef\u7a31\u70ba\u6587\u4ef6\u8868\u793a\uff1b\u6700\u5f8c \u8207\u6b64\u540c\u6642\uff0c(\u7368\u7684\u89e3\u78bc\u7528\u8a5e\u5178\uff0c\u56e0\u6b64\u80fd\u5920\u8b93\u8a5e\u5178\u4e0d\u6703\u592a\u5927\uff0c\u540c\u6642\u53c8\u80fd\u5728\u8a13\u7df4\u7684\u6642\u5019\u6e1b\u5c11\u767c\u751f\u672a\u77e5\u8a5e\u554f \u7bc0\u9304\u5f0f\u6458\u8981\u8207\u91cd\u5beb\u5f0f\u6458\u8981\u4e4b\u5dee\u7570\u5728\u65bc\u5176\u7522\u751f\u6458\u8981\u7684\u539f\u7406\u4e0d\u540c\u3002\u7bc0\u9304\u5f0f\u6458\u8981\u662f\u4f9d\u64da\u56fa \u5c64\u5f0f\u7684\u7de8\u78bc\u5668\u6709\u5169\u5c64\uff0c\u7b2c\u4e00\u5c64\u70ba\u647a\u7a4d\u5f0f\u985e\u795e\u7d93\u7db2\u8def(Convolutional Neural Networks, CNNs)\uff0c \u4e0d\u932f\u3002\u4e0d\u904e\u5f9e\u5be6\u9a57\u5206\u6790\u53ef\u4ee5\u767c\u73fe\u5c0d\u65bc\u6458\u8981\u7d50\u679c\u6709\u8f03\u591a\u8ca2\u737b\u7684\u90e8\u5206\u5927\u591a\u5728\u65bc\u4eba\u5de5\u7279\u5fb5\u4e0a\uff0c 3.1 \u554f\u984c\u5b9a\u7fa9\u53ca\u5047\u8a2d (Problem Formulation)
", "num": null }, "TABREF13": { "type_str": "table", "text": "\u4f5c\u70ba RNN \u7684\u57fa\u672c\u55ae\u5143\u3002\u6b64\u5916\uff0c\u6211\u5011\u53c3\u8003\u76f8\u95dc\u5be6\u4f5c\uff0c\u5c07 \u6587\u4ef6\u4ee5\u5012\u5e8f\u7684\u65b9\u5f0f\u4f5c\u70ba\u8f38\u5165", "html": null, "content": "
3.2 Basic Architecture

The basic architecture contains a hierarchical encoder and a decoder, also called the sentence selector. The hierarchical encoder has two levels: we first find a corresponding sentence representation for each sentence in the document, and then learn the document's important concepts from the sentence representations; the result can be called the document representation. Finally, the sentence representations and the document representation are both fed into the sentence selector, which uses them to identify and rank the summary sentences.

[Figure 2. Basic architecture]
3.2.1 Sentence Encoder
We use convolutional neural networks (CNNs) to project sentences of varying lengths into a vector space, obtaining fixed-length vector representations. Past research has shown that CNNs perform quite well on NLP tasks (Cheng & Lapata, 2016; Collobert et al., 2011; Kalchbrenner, Grefenstette & Blunsom, 2014; Kim, Jernite, Sontag & Rush, 2016; Lei, Barzilay & Jaakkola, 2015; Zhang, Zhao & LeCun, 2015). We apply 1-D convolutions with kernels of a given width, each kernel looking at a window of several words at a time, similar to the N-gram concept, and obtain feature maps. Max pooling over time then takes the maximum value of each feature map as a sentence feature. To find better features, we use kernels of several different widths, with multiple kernels per width, and finally concatenate all the resulting features to form the sentence's vector representation, as in the sketch below.
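A minimal sketch of such an encoder, assuming PyTorch; the kernel widths (2/3/4), kernel count, and embedding size are illustrative hyperparameters, not the paper's:

```python
import torch
import torch.nn as nn

class CNNSentenceEncoder(nn.Module):
    """1-D convolutions of several widths (an N-gram-like window),
    max-pooling over time, and concatenation of the pooled features."""

    def __init__(self, vocab_size, emb_dim=128, widths=(2, 3, 4), n_kernels=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.convs = nn.ModuleList(
            nn.Conv1d(emb_dim, n_kernels, kernel_size=w) for w in widths
        )

    def forward(self, token_ids):                    # (batch, sent_len)
        x = self.embed(token_ids).transpose(1, 2)    # (batch, emb_dim, sent_len)
        feats = []
        for conv in self.convs:
            fmap = torch.relu(conv(x))               # feature maps
            feats.append(fmap.max(dim=2).values)     # max-pool over time
        return torch.cat(feats, dim=1)               # fixed-length sentence vector

enc = CNNSentenceEncoder(vocab_size=5000)
print(enc(torch.randint(0, 5000, (2, 20))).shape)    # torch.Size([2, 192])
```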
3.2.2 Document Encoder

In the document encoder, we use recurrent neural networks (RNNs) to convert each document's sequence of sentence vectors into a fixed-length vector representation that can capture the document's important information. To avoid the vanishing gradient problem, we choose the GRU (Gated Recurrent Unit) as the basic RNN cell. In addition, following related implementations, we feed the document's sentences in reverse order, as in the sketch below.
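A minimal sketch of the document encoder under these assumptions; dimensions and the single-layer setup are illustrative:

```python
import torch
import torch.nn as nn

class DocumentEncoder(nn.Module):
    """A GRU that consumes the sequence of sentence vectors (fed in reverse
    order, as described above) and whose final hidden state serves as the
    document representation d."""

    def __init__(self, sent_dim=192, doc_dim=256):
        super().__init__()
        self.gru = nn.GRU(sent_dim, doc_dim, batch_first=True)

    def forward(self, sent_vecs):                        # (batch, n_sents, sent_dim)
        reversed_vecs = torch.flip(sent_vecs, dims=[1])  # reverse sentence order
        _, h_n = self.gru(reversed_vecs)
        return h_n.squeeze(0)                            # (batch, doc_dim)

doc_enc = DocumentEncoder()
print(doc_enc(torch.randn(2, 15, 192)).shape)            # torch.Size([2, 256])
```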
", "num": null }, "TABREF15": { "type_str": "table", "text": "", "html": null, "content": "
\u5289\u6148\u6069 \u7b49
Each sentence also has corresponding acoustic features; let the acoustic feature vector of sentence $i$ be $a_i$. Given the sentence representations $s_i$ and the document representation $d$ from the hierarchical encoder, the sentence selector estimates for every sentence the probability that it belongs to the summary,

$$p_i = P(y_i = 1 \mid s_i, d, a_i),$$

ranks the sentences by this probability, and selects the top-ranked sentences according to a fixed summarization ratio as the complete summary.

To keep the summary from being affected by recognition errors, we argue that acoustic features preserve the speech information of each document and are themselves unaffected by recognition errors. We therefore propose three ways of combining the acoustic features with the architecture above, as a global vector, as local vectors, or both together (see Figure 3), so that the selector can consult them when judging summary sentences and produce better summaries.
4. Experiments

The data are divided into two types: TD denotes manually transcribed documents, while SD denotes documents produced by automatic speech recognition, so SD contains some recognition errors. Table 1 gives basic statistics of the training and test sets.

Table 1. The statistics of MATBN (Tsai et al., 2016)
                                 Training set   Test set
Documents                        185            20
Avg. sentences per document      20             23.3
Avg. words per sentence          17.5           16.9
Avg. words per document          326.0          290.3
Avg. word error rate             38.0%          39.4%
Avg. character error rate        28.8%          29.8%

The acoustic features we use are listed in Table 2; they were extracted with the Praat tool, 36 features in total, and can be introduced briefly as four types (a small sketch of computing the per-sentence statistics follows the table):

- Pitch: when we speak, the pitch rises at points of emphasis to attract attention, and otherwise stays relatively low.
- Energy: energy generally refers to the speaker's volume and is usually regarded as important information; when we stress something, the volume naturally increases along with the pitch, helping the model distinguish important information.
- Duration: duration is somewhat like the number of words in a sentence; a long stretch without interruption suggests the sentence carries relatively more information.
- Peak and formant: formants are peaks in the spectrum that describe resonance inside the human vocal tract. If the voice is low, the formants are more pronounced and the content sounds clearer; conversely, if the voice is too high-pitched, the formants become blurred and the content is harder to make out.

Table 2. Acoustic features
1. Pitch (min, max, diff, avg)
2. Peak normalized cross-correlation of pitch (min, max, diff, avg)
3. Energy value (min, max, diff, avg)
4. Duration value (min, max, diff, avg)
5. 1st formant value (min, max, diff, avg)
6. 2nd formant value (min, max, diff, avg)
7. 3rd formant value (min, max, diff, avg)
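A minimal sketch of the per-sentence statistics (min, max, diff, avg) applied to each acoustic track; obtaining the frame-level tracks themselves (e.g. via Praat) is assumed to happen upstream, and the toy values are not from the corpus:

```python
import numpy as np

def acoustic_stats(track):
    """Statistics used for each acoustic track (pitch, energy, duration,
    formants) of one sentence: min, max, diff (max - min), and average.
    `track` is a 1-D array of frame-level values for that sentence."""
    track = np.asarray(track, dtype=float)
    return {
        "min": track.min(),
        "max": track.max(),
        "diff": track.max() - track.min(),
        "avg": track.mean(),
    }

pitch_track = [110.0, 152.3, 171.8, 143.9, 120.5]   # toy pitch values in Hz
print(acoustic_stats(pitch_track))
```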
4.2 Results

4.2.1 Baselines

We first compare unsupervised baselines, the vector space model (VSM), latent semantic analysis (LSA), and the word-embedding methods SG and CBOW, with the supervised neural baselines DNN, CNN, and Refresh. The word vectors trained by CBOW and SG actually differ little, so the two differ little in overall summarization performance, but CBOW is the better of the two, and both clearly outperform the traditional vector models.

Table 3. Baseline results
                                   Text documents (TD)          Spoken documents (SD)
                                   ROUGE-1  ROUGE-2  ROUGE-L    ROUGE-1  ROUGE-2  ROUGE-L
VSM                                0.347    0.228    0.290      0.342    0.189    0.287
LSA                                0.362    0.233    0.316      0.345    0.201    0.301
SG                                 0.410    0.300    0.364      0.378    0.239    0.333
CBOW                               0.415    0.308    0.366      0.393    0.250    0.349
DNN                                0.488    0.382    0.444      0.371    0.233    0.332
CNN                                0.501    0.407    0.460      0.370    0.208    0.312
Refresh (Narayan et al., 2018a)    0.453    0.372    0.446      0.329    0.197    0.319

Among the supervised architectures, DNN is the most basic multi-layer neural network, CNN uses a convolutional architecture, and Refresh (Narayan et al., 2018a) is a hierarchical architecture similar to this paper's. On text documents all three clearly surpass the unsupervised methods, CNN performing best, possibly because CNN captures important information better than DNN while having fewer parameters than Refresh and thus being easier to train. On spoken documents, however, all three perform worse than the unsupervised methods, possibly because they rely too heavily on the lexical information in the document and are therefore more severely affected by recognition errors. In the following sections we compare our architectures against Refresh.

4.2.2 Our models

In the analysis below, the sub-architectures introduced when presenting the model are evaluated separately; we list the results of the different experimental settings together with discussion and analysis.

I. Subword vectors

We first compare word vectors and character vectors in our model, as shown in Table 4. Using word vectors alone is actually better than using character vectors alone on spoken documents, while the opposite holds on text documents. This deviates somewhat from our assumption, possibly because the erroneous characters in the training documents are relatively concentrated, so the correct lexical information cannot be learned from the surrounding context. Furthermore, using fused vectors in our model brings a clear improvement on spoken documents, but on text documents only ROUGE-2 improves; we therefore believe character and word vectors may still be complementary.
Table 4. Hierarchical neural summarization model: subword vectors
                                   Text documents (TD)          Spoken documents (SD)
                                   ROUGE-1  ROUGE-2  ROUGE-L    ROUGE-1  ROUGE-2  ROUGE-L
Refresh (Narayan et al., 2018a)    0.453    0.372    0.446      0.329    0.197    0.319
Word vectors                       0.526    0.473    0.520      0.380    0.262    0.370
Character vectors                  0.544    0.473    0.535      0.363    0.242    0.351
Fused vectors (word + char)        0.543    0.481    0.533      0.392    0.266    0.380

II. Reinforcement learning

Building on the above, we believe fused vectors hold considerable promise for speech summarization, so we try using fused vectors and reinforcement learning together in the model (Table 5; a minimal sketch of the reward-weighted update follows the table). From Table 5 it is very clear that reinforcement learning is effective in our method, though the improvement is larger on text-document summarization. The main reason is probably that the reference summaries contain no speech recognition errors, so reinforcement learning cannot fully resolve the influence of recognition errors; adding the acoustic features to the reinforcement learning reward function might improve this situation.

Table 5. Hierarchical neural summarization model: reinforcement learning
                                   Text documents (TD)          Spoken documents (SD)
                                   ROUGE-1  ROUGE-2  ROUGE-L    ROUGE-1  ROUGE-2  ROUGE-L
Refresh (Narayan et al., 2018a)    0.453    0.372    0.446      0.329    0.197    0.319
Fused vectors + RL                 0.555    0.479    0.543      0.395    0.269    0.379
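A minimal REINFORCE-style sketch of training a selector with a ROUGE reward, assuming the ROUGE score of a sampled summary is computed by an external scorer; the function name, baseline handling, and toy numbers are illustrative:

```python
import torch

def reinforce_loss(probs, sampled, reward, baseline=0.0):
    """REINFORCE loss for summary selection: `probs` are the selector's
    P(y_i = 1) for one document, `sampled` is a 0/1 selection drawn from
    them, and `reward` is the ROUGE score of the sampled summary against
    the reference. (reward - baseline) scales the sample's log-likelihood."""
    log_p = torch.where(sampled.bool(), probs, 1 - probs).clamp_min(1e-8).log()
    return -(reward - baseline) * log_p.sum()

probs = torch.tensor([0.9, 0.2, 0.7], requires_grad=True)
sampled = torch.bernoulli(probs.detach())       # sample a candidate summary
rouge_reward = 0.42                             # e.g. ROUGE-1 of the sample
loss = reinforce_loss(probs, sampled, rouge_reward)
loss.backward()
print(loss.item(), probs.grad)
```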
III. Acoustic features + reinforcement learning

From the two preceding experiments we can see that fused vectors resolve part of the influence of speech recognition errors, while reinforcement learning concentrates more on summary informativeness. We therefore try combining acoustic features and reinforcement learning in the model. From Table 6, on spoken-document summarization the clearly more effective option is combining the acoustic features through local vectors, whereas on text-document summarization the better results come from the global vector. We can thus infer that acoustic features contribute little for human-transcribed text documents but work quite well for automatically recognized spoken documents, although they may need to participate directly in the sentence-selection stage to improve performance effectively. Overall, however, these numbers are considerably worse than the earlier experiments; the model may still need finer tuning, or combination with other mechanisms.

Table 6. Hierarchical neural summarization model: acoustic features + reinforcement learning
                                   Text documents (TD)          Spoken documents (SD)
                                   ROUGE-1  ROUGE-2  ROUGE-L    ROUGE-1  ROUGE-2  ROUGE-L
Refresh (Narayan et al., 2018a)    0.453    0.372    0.446      0.329    0.197    0.319
No acoustic features               0.479    0.400    0.469      0.352    0.226    0.342
Global vector                      0.486    0.400    0.473      0.350    0.222    0.336
Local vectors                      0.478    0.399    0.469      0.384    0.264    0.370
Global + local vectors             0.464    0.373    0.453      0.350    0.224    0.336
IV. Subword vectors + attention

Since the previous experiment found that acoustic features and reinforcement learning trained together perform relatively poorly, we next compare the combination of subword vectors with the attention mechanism. From Table 7 we find that using fused vectors and attention together outperforms all previous settings: attention works better on text documents, while on spoken documents word vectors still give the better results. Although the overall numbers all exceed the earlier results, attention mainly models the relations between the document's sentences, and for spoken documents, when there are too many recognition errors it is harder to find the semantic relations between sentences.

Table 7. Hierarchical neural summarization model: subword vectors + attention
                                   Text documents (TD)          Spoken documents (SD)
                                   ROUGE-1  ROUGE-2  ROUGE-L    ROUGE-1  ROUGE-2  ROUGE-L
Refresh (Narayan et al., 2018a)    0.453    0.372    0.446      0.329    0.197    0.319
Word vectors + attention           0.523    0.472    0.519      0.401    0.290    0.392
Character vectors + attention      0.535    0.477    0.529      0.368    0.245    0.356
Fused vectors + attention          0.567    0.496    0.557      0.402    0.278    0.389

V. Subword vectors + attention + reinforcement learning

Continuing the previous experiment, we add the reinforcement learning mechanism during training; the results are shown in Table 8. For both text and spoken documents, once reinforcement learning is added, the best results are obtained with word-vector input. This is probably because our reinforcement learning reward function uses the ROUGE score, and ROUGE is computed mainly with words as the basic unit, so the other settings fare relatively worse. We also find that when reinforcement learning and attention are used at the same time, the results degrade on both text and spoken documents. This may be because our attention mechanism mainly targets summary informativeness, and reinforcement learning, by using ROUGE as its reward, also measures informativeness; training both at once may over-emphasize it and backfire.

Table 8. Hierarchical neural summarization model: subword vectors + attention + reinforcement learning
                                       Text documents (TD)          Spoken documents (SD)
                                       ROUGE-1  ROUGE-2  ROUGE-L    ROUGE-1  ROUGE-2  ROUGE-L
Refresh (Narayan et al., 2018a)        0.453    0.372    0.446      0.329    0.197    0.319
Word vectors + attention + RL          0.543    0.491    0.539      0.350    0.226    0.337
Character vectors + attention + RL     0.525    0.451    0.515      0.342    0.221    0.329
Fused vectors + attention + RL         0.518    0.448    0.502      0.347    0.209    0.337

VI. Comprehensive comparison

Finally, we make a comprehensive comparison of the architectures described above; the results are shown in Table 9. We also try combining the attention mechanism with acoustic features, as in the last two rows of Table 9; since the earlier discussion found that combining acoustic features through local vectors works better on spoken documents, this experiment also adopts local vectors. The results show that adding acoustic features gives a slight improvement on text-document summarization and has no great effect on spoken-document summarization; moreover, compared with the runs trained without acoustic features, the differences are in fact small. This is probably because these experiments are dominated by the attention mechanism, so the acoustic features are not the focus of training and bring no marked improvement.

Table 9. Comprehensive comparison of our models
                                             Text documents (TD)          Spoken documents (SD)
                                             ROUGE-1  ROUGE-2  ROUGE-L    ROUGE-1  ROUGE-2  ROUGE-L
Refresh (Narayan et al., 2018a)              0.453    0.372    0.446      0.329    0.197    0.319
Selection vectors                            0.448    0.371    0.439      0.350    0.213    0.334
Fused vectors + attention                    0.567    0.496    0.557      0.402    0.278    0.389
Fused vectors + attention + RL               0.518    0.448    0.502      0.347    0.209    0.337
Fused vectors + attention + acoustic         0.569    0.507    0.561      0.401    0.288    0.394
Fused vectors + attention + acoustic + RL    0.532    0.455    0.521      0.336    0.220    0.326

VII. Attention visualization

We also analyze the weights of the attention mechanism (Figure 7; a toy rendering sketch follows the figure placeholder). Each row and column of the matrix represents a sentence of the document, and the value in parentheses beside each row's sentence label is $P(y_i = 1 \mid s_i, d)$, the probability that the sentence is identified as a summary sentence.

[Figure 7. Visualization of attention weight]
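As a rough illustration of how such a heat map can be produced from learned weights, here is a minimal sketch with random stand-in data; none of the values are from the paper:

```python
import numpy as np
import matplotlib.pyplot as plt

# Rows and columns are the document's sentences; darker cells indicate a
# stronger association between the two sentences.
n_sents = 8
attn = np.random.dirichlet(np.ones(n_sents), size=n_sents)  # each row sums to 1

fig, ax = plt.subplots()
im = ax.imshow(attn, cmap="Greys")
ax.set_xlabel("sentence index")
ax.set_ylabel("sentence index")
fig.colorbar(im, ax=ax, label="attention weight")
plt.savefig("attention_heatmap.png")
```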
The darker the cells of a row, the more strongly that sentence is associated with the other sentences, and the more likely it is to be regarded as a summary sentence; the rows circled in red are the reference summary. From the red boxes it is clear that the summary selected by our system largely coincides with the reference summary, which verifies that our attention mechanism is genuinely effective for the summarization task.

To briefly summarize the overall experimental results: the proposed model architecture does effectively improve spoken-document summarization, but the gains are mainly reflected in the textual content. For counteracting speech recognition errors, the effects of subword vectors and acoustic features still need strengthening, while methods such as attention and reinforcement learning remain clearly more effective on text documents. Therefore, to improve spoken-document summarization substantially, we believe one still has to start from the speech recognition side: extracting the summary directly, without transcription, might suit spoken documents better and yield superior results.

5. Conclusion & Future Work

Past research on automatic document summarization focused mainly on text documents; in recent years, the vigorous development of big data and machine learning has made research on multimedia documents much easier, so studies on multimedia-document summarization have gradually appeared. Although multimedia technology advances quickly, most spoken-document summarization methods are still extensions of text-document summarization methods. Following the wave of deep learning, this paper proposes a hierarchical neural network architecture for spoken-document summarization that is also applicable to ordinary text-document summarization. Document summarization can be broadly divided into extractive and abstractive summarization, and this paper focuses on extractive spoken-document summarization. To improve summary informativeness and coherence, we add an attention mechanism and reinforcement learning; we also try using acoustic features and subword vectors in model training, to avoid excessive influence from speech recognition errors when computing the summary. Through a series of experimental analyses and discussions, we find, first, that attention and reinforcement learning can each improve summary informativeness, but using them together performs relatively worse; second, for counteracting speech recognition errors, both subword vectors and acoustic features work reasonably well, the effect of subword vectors being the more notable; and finally, regarding summary coherence, although our method does learn a ranking, the reference summaries in the dataset contain no ordering information, so the coherence between sentences cannot be learned completely. These preliminary experimental results suffice to show that the proposed architecture works well for spoken-document summarization, though deeper investigation is still needed to remedy its remaining shortcomings substantially.
Building on the above, future research can go deeper in several directions. The first is applying pre-trained language models to summarization to improve the semantic representation of sentences and articles: many recent pre-trained language models have been trained on very large amounts of data with high-performance hardware and have proven strikingly effective on many tasks, requiring only task-specific fine-tuning, so they are worth investigating in depth. The second is re-organizing the dataset: since coherence is an important indicator of summary quality, if the cost permits, experts could be hired to re-annotate the data, marking not only the summary sentences but also their order, which would benefit later research on summary ordering. Third, extractive summaries can also contain semantically redundant sentences, yet few researchers have studied redundancy in extractive summarization; to reduce it, the redundancy-reduction mechanisms common in abstractive summarization research could be adapted and applied to our method, which should yield more meaningful summaries. Last and most important is avoiding the influence of speech recognition errors on spoken-document summarization. Our experiments show that current methods are still limited; to improve accuracy effectively, we might try using speech features such as Fbank and MFCC as the input of the summarization system, which should give a more primitive form of the speech content and reduce exposure to recognition errors. Since extractive summarization performs sentence selection, no further transcription would be needed, so the summary itself would remain in speech form. What this idea must weigh carefully is that the correctness of the result is hard to evaluate, and the approach is harder to realize than two-stage methods, so few researchers have invested in it; if our vision can be realized, it should raise spoken-document summarization technology to a new level and benefit later researchers.
", "num": null }, "TABREF20": { "type_str": "table", "text": "Castillo \u7b49\u4eba (Castillo, Mendoza & Poblete, 2011) \u6839\u64da tweets \u7684\u6587\u5b57\u5167 \u5bb9\uff0c\u518d\u52a0\u4e0a\u4f7f\u7528\u8005\u7684\u767c\u6587\u8207 retweet \u884c\u70ba\uff0c\u4ee5\u53ca\u5f15\u7528\u5916\u90e8\u4f86\u6e90\u7b49\u7279\u5fb5\uff0c\u4ee5 decision tree \u4f86 \u5224\u65b7 Twitter \u7684\u8cc7\u8a0a\u53ef\u4fe1\u5ea6 (information credibility)\u3002Gupta \u7b49\u4eba (Gupta, Zhao & Han, 2012) \u4ee5\u985e\u4f3c PageRank \u7684\u65b9\u5f0f\u9032\u884c authority propagation\uff0c\u4e26\u4e14\u4f9d\u64da\u76f8\u4f3c\u4e8b\u4ef6\u61c9\u8a72\u6709\u76f8 Ma \u7b49\u4eba (Ma et al., 2016) \u5229\u7528 RNN \u4f86\u6aa2\u6e2c Weibo \u8207 Twitter \u63a8\u6587\u662f\u5426\u70ba\u8b20\u8a00\uff1b Yu \u7b49", "html": null, "content": "
60\u61c9\u7528\u591a\u6a21\u5f0f\u7279\u5fb5\u878d\u5408\u7684\u6df1\u5ea6\u6ce8\u610f\u529b\u7db2\u8def\u9032\u884c\u8b20\u8a00\u6aa2\u6e2c \u61c9\u7528\u591a\u6a21\u5f0f\u7279\u5fb5\u878d\u5408\u7684\u6df1\u5ea6\u6ce8\u610f\u529b\u7db2\u8def\u9032\u884c\u8b20\u8a00\u6aa2\u6e2c59 \u738b\u6b63\u8c6a\u8207\u9ec3\u9756\u5e43 61 \u738b\u6b63\u8c6a\u8207\u9ec3\u9756\u5e43
1. Introduction

With the rapid development of social networks, people can instantly obtain the latest information from the major social platforms; however, rumors and fake messages abound among them, and how to tell true from false and keep people from being misled is a major problem that the platforms now face. With the rapid appearance of large amounts of user generated content on social platforms, rumor detection has become an issue that cannot be ignored. Both Facebook and Twitter provide mechanisms for detecting erroneous information to verify the authenticity of users' posts. Facebook has fake messages labeled with the help of users and third-party checking organizations; labeled messages are verified by third-party fact-checking organizations such as FactCheck.org and Snopes.com, and if a message is confirmed to be fake, it is made public. Twitter relies on user labeling together with an automatic assessment system that assigns each tweet a credibility level; if the level is too low, or the message is labeled as fake by a sufficient proportion of users, the message is judged to be a rumor. However, in this era of rapid information diffusion, neither third-party verification nor manual labeling can identify fake messages in real time and stop them from spreading further. How to detect rumors quickly and accurately is therefore the main issue explored in this paper.

The feature sources for rumor detection fall mainly into two classes: post content, and propagation paths. Post content, including text and images, can be classified with deep learning methods, such as recurrent neural networks (RNNs) or convolutional neural networks (CNNs), to identify fake messages; but because posts are short, learning is limited if there are not enough documents. Besides, users of a social network may stand in different relations to one another, such as friendship or following; social-network-analysis methods focus only on mining the relational structure and easily overlook the information expressed by the content itself, leading to a poor rumor recognition rate. In view of this, this paper proposes a multimodal feature-fusion method that incorporates an image captioning module and uses a deep neural network architecture with an attention mechanism to improve the accuracy of rumor detection. First, we propose using an image captioning model: a CNN extracts image features, and through the Sequence-to-Sequence (Seq2Seq) concept (Sutskever, Vinyals & Le, 2014) the image is converted into text that can express its content; compared with representing the image directly by a CNN feature vector, a short phrase better conveys the image's semantics. Second, for the text features, we use bi-directional recurrent neural networks (BRNNs) combined with self-attention to capture the relations between words in the post and try to find the key words, and we design multi-layer and multi-cell BRNNs, replacing the LSTM with the Gated Recurrent Unit (GRU), so that subsequent rumor classification improves. The main contributions of this paper are:

(1) We are the first multimodal-fusion rumor detection method to incorporate an image captioning model, making the fusion of image content meaningful; compared with existing practice, it effectively improves accuracy.
(2) We propose a novel multi-cell bi-directional recurrent neural network (Multi-cell BRNN): in both the forward and backward RNN, multiple memory cells simultaneously memorize and learn the sequence data, further improving performance.

Experimental results show that a GRU-based multi-cell BRNN with the attention mechanism brings the classification F-measure to 0.816; after further fusing the social features with Early Fusion, it reaches the best rumor detection rate, with an F-measure of 0.89, verifying the effectiveness of the proposed method. In the remainder of this paper, Chapter 2 introduces related work, Chapter 3 details the method, Chapter 4 describes the experimental results and analysis, and Chapter 5 concludes.
2. Related Work

Features extracted from post content, together with behaviors such as a post being shared and re-shared, can serve as rumor-detection features for training machine learning models. For example, Castillo et al. (Castillo, Mendoza & Poblete, 2011) judged the information credibility of Twitter with a decision tree, based on the text of tweets plus features such as users' posting and retweet behavior and citations of external sources. Gupta et al. (Gupta, Zhao & Han, 2012) performed authority propagation in a PageRank-like manner and computed credibility values based on the idea that similar events should have similar credibility. In recent years, artificial intelligence has drawn renewed attention, and most papers on rumor detection use deep learning methods. For example, Ma et al. (Ma et al., 2016) used RNNs to detect whether Weibo and Twitter posts are rumors; Yu et al. (Yu, Liu, Wu, Wang & Tan, 2017) and Chen et al. (Chen, Li, Yin & Zhang, 2018) proposed, respectively, the CNN-based misinformation identification method CAMI and a deep attention mechanism, attempting to judge at an early stage whether a tweet is fake; Ma et al. (Ma, Gao & Wong, 2018) integrated the stance detection task with the rumor detection task, trying to aid the judgment of fake messages by discriminating the stance of a message. Jin et al. (Jin, Cao, Guo, Zhang & Luo, 2017) combined multimedia information on social networks, such as text, images, and social features: the text content passes through a Long Short-Term Memory (LSTM) network with an attention mechanism to extract features and compute attention weights, while the image content has its features taken directly from a CNN, and the attention weights are multiplied element-wise with the image features. Such a multiplication, however, has no concretely interpretable meaning, because the feature vector extracted by the CNN and the post-LSTM attention feature vector differ in dimensionality, and the dimensions of the two vectors bear no relation to each other. Moreover, for the text features, the existing deep neural architectures use only LSTM and attention for feature extraction; as ever deeper neural network models continue to advance, there is still room for improvement. This paper therefore addresses these two problems: first, for the image features, we use a short phrase to express the content of the image, which conveys the image's semantics better than a raw CNN feature vector; second, for the text features, we design multi-layer and multi-cell BRNNs combined with self-attention.

With the increase in computing power and the development of graphics processing units (GPUs), deep learning methods, especially various neural network architectures, have become popular research tools. The CNN was first proposed by Yann LeCun et al. (LeCun, Bottou, Bengio & Haffner, 1998). Its idea is to let convolutional and pooling layers preserve more of the input's features, unlike a basic neural network, which can only capture features along one dimension of the input. CNNs are usually used for image-related tasks, and many architectural variants have already been applied in various domains; famous examples such as VGG Net (Simonyan & Zisserman, 2015) and GoogLeNet (Szegedy et al., 2015) address the problem that insufficiently salient features are dropped as features propagate through traditional convolutional layers.

The RNN was first proposed by Elman (Elman, 1990) and later applied to natural language processing by Mikolov et al. (Mikolov, Karafiát, Burget, Černocký & Khudanpur, 2010). Its main architecture, shown in Figure 1, is formed by recurrently unrolling a neural network with a single hidden layer.

[Figure 1. RNN architecture]

As the figure shows, if the input is a sequence, the data are fed into the hidden layer one step at a time in temporal order, and the hidden layer's output at one time step serves as the hidden layer's input at the next. In this way, the output at every time step can depend on the previous time step's input, letting the network memorize and learn over the order of the whole sequence. However, because RNNs are trained with Back-Propagation Through Time (BPTT), the magnitude of the feature weights easily distorts the information output by the next hidden layer as features are passed along, so the network may fail to learn information over long time spans; these problems are called exploding and vanishing gradients. Many approaches now address them, the most common being the long short-term memory network (LSTM), which controls the flow of information through three gates, the input gate, forget gate, and output gate, ensuring that features are not ignored by the network merely because their weights are small. Cho et al. (Cho et al., 2014) proposed a brand-new architecture called the Gated Recurrent Unit (GRU), which simplifies the processing unit further. The experiments and analysis of Chung et al. (Chung, Gulcehre, Cho & Bengio, 2014) found that the GRU not only solves the RNN's gradient explosion and vanishing problems just as the LSTM does, but is also more time-efficient than the LSTM. The LSTM and GRU architectures are shown in Figures 2(a) and 2(b), respectively.

[Figure 2. (a) LSTM and (b) GRU architectures]
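For reference, one common way to write the GRU's gating (following Cho et al., 2014) is the math block below; $\sigma$ is the logistic sigmoid and $\odot$ denotes element-wise multiplication:

```latex
\begin{aligned}
z_t &= \sigma(W_z x_t + U_z h_{t-1})                        && \text{(update gate)} \\
r_t &= \sigma(W_r x_t + U_r h_{t-1})                        && \text{(reset gate)} \\
\tilde{h}_t &= \tanh\!\left(W_h x_t + U_h (r_t \odot h_{t-1})\right) && \text{(candidate state)} \\
h_t &= z_t \odot h_{t-1} + (1 - z_t) \odot \tilde{h}_t      && \text{(new hidden state)}
\end{aligned}
```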
The Seq2Seq concept (Sutskever, Vinyals & Le, 2014) has been applied to the task of machine translation: an input sentence (sequence) is learned and another sentence (sequence) is produced. The Seq2Seq architecture consists mainly of two recurrent neural networks, called the Encoder and the Decoder, as shown in Figure 3.

[Figure 3. Seq2Seq Encoder-Decoder architecture]
", "num": null }, "TABREF22": { "type_str": "table", "text": "hidden state \u70ba S t \uff0c\u6700\u5f8c\u8f38\u51fa\u70ba y t \uff0cS t \u7531\u524d\u4e00\u500b hidden state S t-1 \uff0c\u524d\u4e00\u500b \u8f38\u51fa y t-1 \uff0c\u4ee5\u53ca \u7d93\u904e\u51fd\u6578 f \u8a08\u7b97\u800c\u5f97\u3002\u672c\u8ad6\u6587\u5c07 GRU \u53ca LSTM \u7b49 RNN \u67b6\u69cb\u7d50\u5408\u6ce8\u610f\u529b \u6a5f\u5236\uff0c\u4ee5\u63d0\u5347\u795e\u7d93\u7db2\u8def\u5c0d\u91cd\u8981\u7279\u5fb5\u7684\u95dc\u6ce8\u7a0b\u5ea6\uff0c\u4f7f\u8b20\u8a00\u5075\u6e2c\u7684\u6e96\u78ba\u5ea6\u5f97\u4ee5\u63d0\u5347\u3002 \u6211\u5011\u53c3\u8003 Vinyals \u7b49\u4eba\u6240\u63d0\u51fa\u7684\u67b6\u69cb (Vinyals et al., 2015)\uff0c\u4f7f\u7528\u7d50\u5408 CNN \u8207 LSTM \u7d44\u6210 \u7684 Seq2Seq \u7db2\u8def\u67b6\u69cb\uff0c\u7522\u751f\u51fa\u80fd\u63cf\u8ff0\u8a72\u5716\u50cf\u7684\u6587\u5b57\u6558\u8ff0\u3002\u6a21\u578b\u67b6\u69cb\u5982\u5716 6 \u6240\u793a\uff1a \u5716 \u5982\u5716 6 \u6240\u793a\uff0c\u5716\u50cf\u7684\u90e8\u5206\u63a1\u7528 Google Inception Net V3 \u7684 CNN \u67b6\u69cb\u3002\u6b64\u67b6\u69cb\u5171\u6709 42 \u5c64\uff0c\u5171\u4f7f\u7528\u4e86 4 \u7a2e\u4e0d\u540c\u7dad\u5ea6\u5927\u5c0f\u7684\u5377\u7a4d\u6838\uff0c\u53ef\u4ee5\u53d6\u5f97\u5716\u50cf\u5728\u4e0d\u540c\u5c3a\u5ea6\u4e0b\u7684\u7279\u5fb5\uff0c\u907f\u514d", "html": null, "content": "
The method proposed in this paper consists of five main steps: Feature Extraction, Image Captioning, Feature Fusion, a Recurrent Neural Network, and an Attention Layer, as shown in Figure 5. A tweet posted on Twitter first goes through Feature Extraction, which obtains its textual content, image, and social features. The image features are fed into the image-captioning module and, after computation by a Convolutional Neural Network (CNN) and a Sequence-to-Sequence (Seq2Seq) architecture, a sentence describing the image is produced. Next, this sentence is concatenated with the textual content, encoded by word embedding, and fused with the social features through Feature Fusion. The fused feature vectors are then passed into a Bi-directional Recurrent Neural Network (BRNN) layer to find the relations among the words of the textual content, the Attention Layer strengthens the weights of the important words in the tweet, and a Fully Connected Layer performs the final classification of fake messages.

3.1 Feature Extraction
A tweet usually contains a textual description, image information, and social information. First, we extract the text of the tweet and, through various recurrent networks, try to find the contextual relations within the textual content. Second, a tweet often carries an image related to its text, so we use the image-captioning module to extract image features and generate a short sentence describing the image information, in order to uncover the semantics implicit in the image. In addition, we consider various social features, including the sentiment polarity of the tweet, the hashtags in the tweet, and the user features of its author.
Users often express personal opinions or emotions in their tweets. We therefore use a sentiment-analysis module that adopts SentiWordNet (Esuli & Sebastiani, 2006) to extract the sentiment polarity of words: the sentiment score of every word in the tweet is computed, and the summed scores yield the opinion tendency expressed by the tweet, classified into three categories: positive, neutral, and negative. If the total sentiment score of the tweet is greater than 1, the tweet is regarded as positive; if the total is less than 0, it is regarded as negative; and if the total lies between 0 and 1, it is regarded as neutral. Users also often mark the main topic of a post with a hashtag, which may help classification, so we extract the hashtags from the textual information of the tweet. Finally, we consider the interactions between users, such as following, posting, and replying, as user features. For a fair comparison with related work, the user features adopted here are the same as those of Jin et al. (Jin et al., 2017): the user's number of friends on Twitter, the number of followers, the proportion of followers who are friends, the total number of posts, and whether the account is verified by Twitter. Combining the user features, sentiment features, and hashtag features constitutes the social features.
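As a toy illustration of the polarity rule above, the following Python sketch sums per-word sentiment scores and applies the same thresholds; the lexicon dictionary is a made-up stand-in for SentiWordNet scores.

```python
# A toy sketch of the polarity rule described above; the `lexicon` dictionary
# stands in for SentiWordNet-style per-word sentiment scores and is made up.
lexicon = {"great": 0.8, "happy": 0.6, "terrible": -0.9, "sad": -0.5}

def tweet_polarity(tokens):
    total = sum(lexicon.get(t.lower(), 0.0) for t in tokens)
    if total > 1:
        return "positive"
    if total < 0:
        return "negative"
    return "neutral"          # total in [0, 1]

print(tweet_polarity("What a great and happy day".split()))  # positive (0.8 + 0.6 = 1.4)
```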
3.2 Image Captioning
We follow the architecture proposed by Vinyals et al. (Vinyals et al., 2015) and use a Seq2Seq network composed of a CNN and an LSTM to generate a textual description of the image, as shown in Figure 6. For the image part, the architecture adopts the CNN of Google Inception Net V3. This architecture has 42 layers and uses convolution kernels of four different sizes, which obtains image features at different scales and prevents subtle features from being ignored. The image feature vector extracted by the Inception modules is fed into the LSTM together with the one-hot encoded textual description. During this computation the image feature vector is input only once; at every subsequent time step the words of the textual description (S_i) are input in order, and the k candidate words with the highest relevance scores p_i are output. Each of these candidates is fed into the next time step, combined with the candidate words output there, and the words with the highest relevance scores are selected and passed on to the following time step. After this iteration, the last time step outputs a complete textual description that fuses the image information.
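The decoding loop just described behaves like a small beam search over candidate words. Below is a rough Python sketch under that reading; the step function, which stands for one LSTM time step returning log-probabilities over the vocabulary, is a hypothetical placeholder, not the paper's code.

```python
# A rough Python sketch of the top-k candidate decoding described above,
# read as a small beam search; `step` is a hypothetical placeholder.
import numpy as np

def beam_decode(step, start_id, end_id, k=3, max_len=20):
    beams = [([start_id], 0.0)]                    # (token sequence, log-probability)
    for _ in range(max_len):
        candidates = []
        for seq, score in beams:
            if seq[-1] == end_id:                  # a finished caption stays as-is
                candidates.append((seq, score))
                continue
            log_p = step(seq)                      # (vocab,) log-probabilities
            for tok in np.argsort(log_p)[-k:]:     # keep the k best next words
                candidates.append((seq + [int(tok)], score + float(log_p[tok])))
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:k]
    return beams[0][0]                             # best-scoring caption

vocab = 12
dummy_step = lambda seq: np.log(np.random.default_rng(len(seq)).dirichlet(np.ones(vocab)))
print(beam_decode(dummy_step, start_id=0, end_id=1, k=2, max_len=5))
```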
3.3 Feature Fusion
After extracting the three kinds of multimodal features, namely the textual, image, and social features, we propose feature-fusion methods to integrate the different features. The textual features of a tweet use one-hot encoding, with a 300-dimensional vector representing each word. After the image has been converted into a descriptive sentence by the image-captioning model, that sentence is likewise converted into 300-dimensional vectors, the same as the textual features of the tweet. The sentiment and hashtag features are also one-hot encoded, and the textual features, image features, and these two encoded vectors are concatenated to obtain the vector of all features.

Because the social features differ greatly from the other features, we consider two different fusion strategies: early fusion and late fusion. In the early-fusion strategy, we likewise use one-hot encoding to convert the social features into vectors; to give the social features a weight comparable to the text-image features, we use an autoencoder to compress the social features to 300 dimensions and concatenate them after the text-image features to train the classifier. In the late-fusion strategy, we first feed the text-image features into the RNN and the attention mechanism to obtain a series of outputs, then convert the social features into a vector by one-hot encoding and feed it, together with the text-image outputs, into the Fully Connected Layer for classification.
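To make the two strategies concrete, here is a minimal PyTorch sketch; the dimensionalities, module names, and two-class output are assumptions for demonstration, with the BRNN and attention stack elided behind placeholder classifier layers.

```python
# A minimal PyTorch sketch of the two fusion strategies, with made-up
# dimensions; the downstream BRNN + attention stack is elided.
import torch
import torch.nn as nn

D_TI, D_SOC, D_EMB = 600, 40, 300

# Early fusion: compress the social vector with the encoder half of an
# autoencoder so it carries weight comparable to the text/image features,
# then concatenate before the (elided) sequence model.
social_encoder = nn.Linear(D_SOC, D_EMB)
early_head = nn.Linear(D_TI + D_EMB, 2)

def early_fusion(text_img, social):
    return early_head(torch.cat([text_img, social_encoder(social)], dim=-1))

# Late fusion: the text/image features go through the RNN and attention
# first (their output summarized here as `rnn_out`); only then is the
# one-hot social vector concatenated before the fully connected layer.
late_head = nn.Linear(D_TI + D_SOC, 2)

def late_fusion(rnn_out, social):
    return late_head(torch.cat([rnn_out, social], dim=-1))

x_ti, x_soc = torch.randn(8, D_TI), torch.randn(8, D_SOC)
print(early_fusion(x_ti, x_soc).shape, late_fusion(x_ti, x_soc).shape)
```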
The Bi-directional Recurrent Neural Network (BRNN) was first proposed by Schuster et al. (Schuster & Paliwal, 1997). It splits each training sequence of a recurrent network into a forward pass and a backward pass; the two are independent unidirectional RNNs, and both networks are connected to the same output layer, as shown in Figure 7. For the result of the bidirectional network, we combine the forward and backward outputs by concatenation to represent each word. In the RNN module of this paper we use GRU cells in place of the traditional LSTM and design stacked BRNN architectures to investigate their effect on rumor detection.

Building on the single-layer BRNN, this paper designs the following two different ways of stacking deeper bidirectional networks. First, the Multi-layer BRNN passes the textual information through multiple rounds of bidirectional recurrent computation, strengthening the mutual relations among the words of a document; its architecture is shown in Figure 8. Its input and output are the same as those of a single-layer BRNN: the words of the document are fed into the input layer in order, and a numeric vector representing each word is produced at the output layer. When each word has passed the first BRNN layer, its output already contains the information propagated forward and backward through the document, so when the output of the first layer is passed into the second, not only are the relations among the words of the original document preserved, but each word vector also memorizes the features computed in the first layer. When the next layer computes, the relations among the words have already been found in the previous layer and need not be recomputed, which lets the network converge faster and improves the computational efficiency of the whole model.

Second, we design another way of stacking bidirectional networks: the Multi-cell BRNN, which performs deeper computation by increasing the number of cells in each direction of the BRNN. The output of the current cell serves as the input of the next cell, and the multiple cells within the same neuron memorize and learn the sequence data simultaneously; the architecture is shown in Figure 9. Unlike the Multi-layer BRNN, the input at each time step considers only a single direction: for the forward and backward passes, the input of each direction at the same time step is computed by one cell and then passed into the next cell for further computation, and the forward and backward passes remain two independent network structures relative to the overall architecture, integrated only at the final output. This method can compute the relations between each word and the feature sequences (forward and backward) more deeply than the Multi-layer BRNN, but it correspondingly requires more resources and time for the network to converge.
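The difference between the two stackings can be sketched as follows in PyTorch; the layer counts, cell counts, and sizes are illustrative assumptions rather than the paper's settings.

```python
# A rough PyTorch sketch contrasting the two stacking styles described above.
import torch
import torch.nn as nn

D_IN, D_H = 300, 128

# Multi-layer BRNN: whole bidirectional layers stacked on one another, so
# layer 2 consumes the concatenated forward/backward outputs of layer 1.
multi_layer = nn.GRU(D_IN, D_H, num_layers=2, bidirectional=True, batch_first=True)

class MultiCellDirection(nn.Module):
    """One direction of a Multi-cell BRNN: several GRU cells chained within
    the direction at every time step; the two directions stay independent
    and are only concatenated at the output."""
    def __init__(self, d_in, d_h, n_cells=2):
        super().__init__()
        self.d_h = d_h
        self.cells = nn.ModuleList(
            [nn.GRUCell(d_in if i == 0 else d_h, d_h) for i in range(n_cells)])

    def forward(self, seq):                          # seq: (batch, T, d_in)
        hs = [seq.new_zeros(seq.size(0), self.d_h) for _ in self.cells]
        outs = []
        for t in range(seq.size(1)):
            x = seq[:, t]
            for i, cell in enumerate(self.cells):    # cell i feeds cell i + 1
                hs[i] = cell(x, hs[i])
                x = hs[i]
            outs.append(x)
        return torch.stack(outs, dim=1)              # (batch, T, d_h)

fwd, bwd = MultiCellDirection(D_IN, D_H), MultiCellDirection(D_IN, D_H)
x = torch.randn(4, 10, D_IN)
bwd_out = torch.flip(bwd(torch.flip(x, dims=[1])), dims=[1])   # re-align time steps
out = torch.cat([fwd(x), bwd_out], dim=-1)                     # (4, 10, 256)
```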
The Attention Layer then strengthens the weights of the important words in the tweet, and its output is fed into a Fully Connected Layer to carry out the classification of fake messages. Let score(·,·) be a function that computes relevance, for example an inner product or a weighted inner product. The relevance scores are then normalized with a Softmax function, yielding the attention weights α_tj of Equation (2):

α_tj = exp(score(s_t, h_j)) / Σ_k exp(score(s_t, h_k))
Figure 8. Architecture of the Multi-layer bidirectional recurrent neural network
", "num": null }, "TABREF23": { "type_str": "table", "text": "\u5982\u5716 12 \u6240\u793a\uff0c\u4ee5 MSCOCO 2014 \u8cc7\u6599\u96c6\u8a13\u7df4\u7684\u7ffb\u8b6f\u6a21\u578b\u5176 BLEU-1, BLEU-2 \u8a55\u4f30 \u5206\u6578\u5206\u5225\u70ba 0.695 \u548c 0.51\uff0c\u660e\u986f\u512a\u65bc MediaEval 2015, 2016 \u6240\u8a13\u7df4\u7684\u6a21\u578b\uff0c\u4e14\u76f8\u7576\u63a5\u8fd1 Xu \u7b49\u4eba\u7684\u7d50\u679c(Xu et al., 2015)\u3002\u7531\u65bc Mediaeval 2015, 2016 \u8cc7\u6599\u96c6\u5c6c\u65bc Tweets\uff0c\u7919\u65bc \u7531\u5716 14 \u6240\u793a\uff0c\u7576\u53ea\u8003\u616e\u6587\u5b57\u7279\u5fb5\u6642\uff0c\u591a\u5c64 BRNN \u8207\u591a\u55ae\u5143 BRNN \u67b6\u69cb\u7684 F-measure \u90fd\u9054\u5230 0.816\uff0c\u6548\u679c\u7686\u512a\u65bc Jin \u7b49\u4eba\u6240\u63d0\u51fa\u7684\u65b9\u6cd5 (Jin et al., 2017)\u3002\u800c\u7531\u5716 14 \u4e5f\u53ef\u4ee5\u5f97 \u77e5\uff0c\u5728\u9019\u4e09\u7a2e\u4e0d\u540c\u7684 BRNN \u67b6\u69cb\u4e2d\uff0c\u82e5\u53ea\u770b\u6587\u5b57\u7279\u5fb5\uff0c\u4e09\u8005\u4e26\u6c92\u6709\u660e\u986f\u7684\u512a\u52a3\uff0cF-measure \u90fd\u63a5\u8fd1 0.8\u3002 \u5982\u5716 17 \u6240\u793a\uff0c\u5728\u7d50\u5408\u9664\u4e86\u4f7f\u7528\u8005\u4e4b\u5916\u6240\u6709\u7279\u5fb5\u5f8c\uff0c\u591a\u55ae\u5143 BRNN \u5728\u6240\u6709\u7684\u8a55\u4f30\u6a19 \u6e96\u4e0b\u6700\u70ba\u7a81\u51fa\uff0cF-measure \u9054\u5230 0.882\uff0c\u5176\u6b21\u70ba\u55ae\u5c64\u8207\u591a\u5c64 BRNN\u3002\u7d93\u904e\u53cd\u8986\u5be6\u9a57\u8207\u63a2\u8a0e\uff0c \u6211\u5011\u767c\u73fe\u4f7f\u7528\u591a\u5c64 BRNN \u6642\uff0c\u7d93\u904e\u7b2c\u4e00\u5c64 BRNN \u5f8c\uff0c\u7279\u5fb5\u5411\u91cf\u5df2\u7d93\u904e\u5167\u90e8\u795e\u7d93\u5143\u904b\u7b97\uff0c \u627e\u51fa\u6240\u6709\u5b57\u8a5e\u7684\u95dc\u806f\uff0c\u4e26\u9032\u884c\u4e86\u8abf\u6574\uff0c\u6545\u5176\u8f38\u51fa\u7684\u7279\u5fb5\u5411\u91cf\u5df2\u7d93\u8207\u539f\u5148\u7684\u8f38\u5165\u4e0d\u540c\u3002\u4e14 \u7531\u65bc\u8a72\u5411\u91cf\u8207\u6240\u6709\u7279\u5fb5\u7684\u95dc\u806f\u6027\u5df2\u7d93\u88ab\u78ba\u5b9a\uff0c\u5f8c\u7e8c\u518d\u9032\u5165\u4e0b\u4e00\u5c64 BRNN \u6642\uff0c\u8a72\u7279\u5fb5\u5411\u91cf \u4e26\u4e0d\u6703\u518d\u6709\u592a\u5927\u7684\u8b8a\u52d5\uff0c\u6240\u4ee5\u4f7f\u7528\u55ae\u5c64 BRNN \u624d\u6703\u8207\u4f7f\u7528\u591a\u5c64 BRNN \u7684\u7d50\u679c\u76f8\u8fd1\u3002\u70ba\u4e86 \u9a57\u8b49\u63a8\u8ad6\u7684\u6b63\u78ba\u6027\uff0c\u672c\u5be6\u9a57\u9032\u4e00\u6b65\u63a1\u7528 T \u6aa2\u5b9a\uff0c\u5206\u5225\u91dd\u5c0d F-measure \u53ca accuracy\uff0c\u9032\u884c \u5169\u7a2e\u67b6\u69cb\u7684\u8a55\u6bd4\uff0c\u5224\u65b7\u5176\u5dee\u7570\u662f\u5426\u70ba\u5e38\u614b\u3002\u7d93\u904e\u8a08\u7b97\uff0c\u5169\u7a2e\u67b6\u69cb\u91dd\u5c0d F-measure \u53ca accuracy \u7684 p-value \u5206\u6578\u7686\u70ba 0.006\uff0c\u7686\u5c0f\u65bc 0.01\uff0c\u8b49\u5be6\u5176\u5be6\u9a57\u7d50\u679c\u4e26\u975e\u5076\u7136\uff0c\u5177\u6709\u7d71\u8a08\u610f\u7fa9\u3002 \u61c9\u7528\u591a\u6a21\u5f0f\u7279\u5fb5\u878d\u5408\u7684\u6df1\u5ea6\u6ce8\u610f\u529b\u7db2\u8def\u9032\u884c\u8b20\u8a00\u6aa2\u6e2c 77 \u55ae\u5143 BRNN \u6bd4\u5176\u4ed6\u65b9\u6cd5\u6548\u679c\u597d\uff0cF-measure \u53ef\u4ee5\u9054\u5230\u6700\u9ad8\u7684 0.882\u3002\u6211\u5011\u4e5f\u63a1\u7528\u591a\u55ae\u5143 BRNN \u8207\u591a\u5c64 BRNN \u9032\u884c T \u6aa2\u5b9a\uff0c\u91dd\u5c0d F-measure \u53ca accuracy\uff0c\u5169\u8005\u7684 p-value \u6578\u503c\u70ba 0.04 \u53ca 0.014\uff0c\u7686\u5c0f\u65bc 
4. Experiments and Discussions
The datasets adopted in this paper fall into two parts: an image-captioning dataset and a rumor-detection dataset. For image captioning we use the Microsoft COCO 2014 dataset (Tan et al., 2018), which is widely used in image-related tasks including image recognition and image keypoint detection, e.g., by Vinyals et al. (Vinyals et al., 2015) and Xu et al. (Xu et al., 2015). Every image in this dataset is described by 5 short sentences, and every descriptive sentence is unique in the dataset.

For rumor detection, data are hard to obtain: every message must be publicly verified as true or false by a third-party institution before it can be established as a rumor or a fact. Our experiments adopt the Twitter rumor-detection datasets provided in the MediaEval 2015 and 2016 tasks, which have been verified officially by Twitter and also include the multimedia information of every tweet and the related features of its poster. The distributions of the two datasets are shown in Table 1 and Table 2:

Table 1. Data distribution in the image-captioning dataset
Dataset          Number of images / descriptive sentences
Training Data    82,783 / 413,915
Test Data        36,454 / 182,270

Table 2. Data distribution in the rumor dataset
Dataset          Number of tweets (by event)
Training Data    Real: 189 / Fake: 157
Test Data        Real: 21 / Fake: 24
In the experiments related to image captioning, we adopt the Bilingual Evaluation Understudy (BLEU) method to evaluate the image-captioning model. BLEU was first proposed by Papineni et al. of IBM (Papineni, Roukos, Ward & Zhu, 2002) and is mainly used to evaluate whether a model's generated result is similar to the reference documents. BLEU is defined as the geometric mean of the modified n-gram precisions p_n, multiplied by a brevity penalty:

BLEU = BP · exp( Σ_{n=1}^{N} w_n log p_n ),  BP = min(1, exp(1 − r/c))    (6)

where c denotes the length of the generated text and r the length of the reference documents. The modified n-gram precision is the number of clipped n-grams divided by the total number of n-grams:

p_n = Σ_{C ∈ Candidates} Σ_{n-gram ∈ C} Count_clip(n-gram) / Σ_{C ∈ Candidates} Σ_{n-gram ∈ C} Count(n-gram)    (7)

where the clipped n-gram count is computed as:

Count_clip = min(Count, Max_Ref_Count)    (8)

The experiments on rumor detection focus mainly on a binary classification task, so we evaluate with Accuracy, Precision, Recall, and F-Measure, and use a T-test (Equation (9)) to compare the degree of difference between models. In the rumor-detection experiments below, the main baseline we compare against is the method proposed by Jin et al. (Jin et al., 2017).
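To illustrate the metric, the following simplified Python sketch mirrors Equations (6) to (8) for a single reference sentence, without smoothing; it is a toy illustration, not the official BLEU implementation.

```python
# A simplified sketch of modified n-gram precision with clipping, following
# Equations (6)-(8); single reference, no smoothing, for illustration only.
from collections import Counter
from math import exp, log

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def modified_precision(cand, ref, n):
    c, r = ngrams(cand, n), ngrams(ref, n)
    clipped = sum(min(cnt, r[g]) for g, cnt in c.items())   # min(Count, Max_Ref_Count)
    return clipped / max(sum(c.values()), 1)

def bleu(cand, ref, max_n=4):
    bp = min(1.0, exp(1 - len(ref) / len(cand)))            # brevity penalty
    ps = [modified_precision(cand, ref, n) for n in range(1, max_n + 1)]
    if min(ps) == 0:
        return 0.0
    return bp * exp(sum(log(p) for p in ps) / max_n)        # geometric mean

cand = "a cat sits on the mat".split()
ref = "a cat is sitting on the mat".split()
print(round(bleu(cand, ref, max_n=2), 3))
```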
4.1 The Effects of Image Captioning
First, to investigate the effect of the image-captioning model, for every image record in MSCOCO 2014 we use 3 of its short captions to train the model and the other 2 for validation. As an experimental control, we also train on the image information in MediaEval 2015 and 2016. The BLEU evaluation results are shown in Figure 12: the captioning model trained on the MSCOCO 2014 dataset reaches BLEU-1 and BLEU-2 scores of 0.695 and 0.51 respectively, clearly better than the models trained on MediaEval 2015 and 2016, and quite close to the results of Xu et al. (Xu et al., 2015). Because the MediaEval 2015 and 2016 datasets consist of tweets, given the characteristics of Twitter data the textual information in a tweet does not necessarily describe the image it carries, and neither the poster nor the repliers necessarily describe the image information objectively. This makes the reference documents incomplete, so a good generation model cannot be trained and its BLEU scores come out low; subsequent experiments therefore adopt the model trained on MSCOCO 2014 as the image-captioning module for rumor detection.

Figure 12. Evaluation results of the image-captioning experiments

4.2 The Effects of the Embedding Layer
To investigate how the vector-encoding method for textual information affects rumor detection, we compare two common methods: random initialization, which randomly draws the numeric vector representing each word from between -1 and 1 and then adjusts it through network training, and a pretrained Word2Vec dictionary, which trains on and updates the corresponding word vectors of the GoogleNews pretrained Word2Vec dictionary. The results are shown in Figure 13: when training on the rumor-detection dataset, the proposed method achieves better results with randomly initialized word vectors, with an F-measure as high as 0.822. Through repeated experiments and analysis, we find two main factors behind this result. First, in the Word2Vec dictionary pretrained on Google News, every word vector is produced by training on many news articles, and the vectors are all related to one another in the whole vector space; if the dictionary's word vectors are used in training and continually updated in the RNN, the meaning of a word in the vector space is changed, its relations to other words are lost, and the final accuracy of the model drops. Second, some words in the rumor-detection dataset never appear in the Google News Word2Vec dictionary, so their word vectors are zero and those words are ignored by the neural network, which lowers the model's accuracy. From Figure 13 we also find that using the Word2Vec dictionary without updating the word vectors works better, which likewise verifies that if the vectors of a pretrained dictionary are trained and updated with different data, they lose their original meaning.

Figure 13. Experimental results for the embedding layer

4.3 Comparison of Recurrent Network Architectures
In this experiment we first compare the rumor-detection performance of different RNN architectures using only the textual features, as shown in Figure 14. When only textual features are considered, the Multi-layer BRNN and the Multi-cell BRNN architectures both reach an F-measure of 0.816, better than the method proposed by Jin et al. (Jin et al., 2017). Figure 14 also shows that among the three BRNN architectures, none is clearly superior when only text is used; all F-measures are close to 0.8.

Figure 14. Comparison of recurrent network architectures (textual features only)

Next we investigate how combining the textual, image, and social features affects the classification results. Since the social features comprise the hashtag, sentiment, and user features, we first compare different combinations of selected features, as shown in Figures 15 and 16. The user features actually lower the classification performance: without the user features, the early-fusion strategy reaches F-measures of up to 0.882 and 0.856, improvements of 5.5% and 10% respectively over including all features. Through observation, we find that the user features have little relation to whether a tweet is a rumor, because the number of friends a user has or the total number of posts does not affect whether the tweets the user issues are rumors or facts.

Figure 15. Feature-selection comparison under early fusion
Figure 16. Feature-selection comparison under late fusion

As shown in Figure 17, after combining all features except the user features, the Multi-cell BRNN is the most outstanding under all evaluation criteria, with an F-measure of 0.882, followed by the single-layer and the Multi-layer BRNN. Through repeated experiments and analysis, we find that with a Multi-layer BRNN, after the first BRNN layer the feature vectors have already been computed by the internal neurons, all the word associations have been found, and adjustments have been made, so the output feature vectors already differ from the original input; and since the associations of those vectors with all the features have already been determined, the feature vectors no longer change much when they subsequently enter the next BRNN layer, which is why a single-layer BRNN yields results close to a Multi-layer BRNN. To verify the correctness of this inference, we further adopt a T-test comparing the two architectures on F-measure and accuracy to judge whether their difference is systematic; the computed p-values for both F-measure and accuracy are 0.006, below 0.01, confirming that the experimental result is not accidental and is statistically significant. We also adopt a T-test between the Multi-cell BRNN and the Multi-layer BRNN on F-measure and accuracy; the two p-values are 0.04 and 0.014, both below 0.05, confirming that this result is statistically significant as well.

Figure 17. Comparison of architectures when combining all features except the user features

We then compare how the two feature-fusion strategies affect the final classification results, as shown in Figure 18. With the Late Fusion strategy, the Multi-layer and the Multi-cell BRNN perform very similarly, whereas the Early Fusion strategy substantially improves the single-layer and the Multi-cell BRNN. Through several experiments and analysis, we find the reason: through the BRNN and the attention mechanism, the forward and the backward passes each undergo multiple hidden-layer computations, and deepening the number of internal cells strengthens the relations between each feature vector and the feature sequences (forward and backward).

Figure 18. Comparison of the feature-fusion strategies

Finally, we investigate how adopting LSTM or GRU processing units in the RNN architecture affects the classification results. Because the rumor-detection dataset is small, we adopt 5-fold cross-validation to reduce the influence of the data distribution. The original MediaEval dataset is organized by events, with the posts and reply tweets related to each event labeled real or fake and already divided into training and test data; when partitioning the data for 5-fold cross-validation, we therefore randomly split the training and the test data into 5 parts each, take 4 parts for training and one part for testing, and average the results over 5 runs. The experimental results are shown in Figure 19: the GRU-based BRNN architectures all perform better than the LSTM-based BRNN architecture (BiLSTM), and the Multi-cell BRNN also performs better than the Multi-layer BRNN, the best result being the Multi-cell BiGRU with an F-Measure of 0.89. This also verifies that neither LSTM nor GRU is absolutely superior; different tasks require different architectural designs to obtain better results.

Figure 19. Comparison of LSTM and GRU processing units
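As a sketch of the evaluation protocol in this section (5-fold cross-validation followed by a T-test between two architectures), the following Python fragment shows the fold splitting and the significance test; the training step and the per-fold scores are placeholders, not the paper's numbers.

```python
# A sketch of the protocol above: 5-fold cross-validation over shuffled
# pools, then a T-test between two architectures' per-fold F-measures.
import numpy as np
from scipy import stats

def five_fold_indices(n, seed=0):
    idx = np.random.default_rng(seed).permutation(n)
    return np.array_split(idx, 5)                 # 5 folds: 4 train, 1 test per run

folds = five_fold_indices(346)                    # e.g. 189 real + 157 fake tweets
for i, test_fold in enumerate(folds):
    train_idx = np.concatenate([f for j, f in enumerate(folds) if j != i])
    # ... train on train_idx, evaluate on test_fold, collect the F-measure ...

f_multicell = [0.88, 0.89, 0.90, 0.88, 0.90]      # illustrative per-fold scores
f_multilayer = [0.84, 0.86, 0.85, 0.83, 0.86]
t, p = stats.ttest_ind(f_multicell, f_multilayer)
print(f"p-value = {p:.3f}")                       # p < 0.05 -> statistically significant
```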
5. Conclusions
This paper proposes a deep neural-network architecture based on multimodal feature fusion for rumor detection. Through the image-captioning model, an image is converted into descriptive text, effectively uncovering the semantics in the image, and is concatenated with the textual content for word embedding; for the social features, we adopt the two different fusion strategies of Early and Late Fusion; and we design the two different bidirectional recurrent network (BRNN) architectures, Multi-layer and Multi-cell, combined with the attention mechanism to improve classification. The experimental results show that using the GRU-based Multi-cell BRNN architecture (Multi-cell BiGRU), fusing the social features by Early Fusion, and combining the textual features with the image-captioning module effectively improves rumor detection, with a best F-measure of 0.89.

The method proposed in this paper still has limitations. First, it mainly targets rumor detection on tweets of the Twitter social platform; since the length of a tweet is not very long, the method is usually not directly applicable to rumor detection on long documents. Second, this paper extracts image features by first passing the image through the image-captioning model to convert it into meaningful textual information; although evaluation shows the model has a certain precision and the results, upon comparison, largely match the meaning expressed by the images, some results still fall short, so how to improve the image-captioning model awaits further study. Finally, not all of the social features help improve classification, and the sentiment classification only adopts dictionary matching; in the future we will adopt different sentiment-analysis methods to further improve rumor detection.
", "num": null }, "TABREF24": { "type_str": "table", "text": "Longitudinal Study with LENA Automatic Analysis values measured at 5, 10,14, 21, and 30 months are listed inTable 1.", "html": null, "content": "
Linguistic Input and Child Vocalization of 7 Children85
from 5 to 30 Months: A Table 1.
", "num": null }, "TABREF26": { "type_str": "table", "text": "", "html": null, "content": "
shows demographic information of
", "num": null }, "TABREF28": { "type_str": "table", "text": "", "html": null, "content": "
A Study of Applying Multi-hop Attention Memory Association to Memory Networks (詹京翰 et al.)
This paper modifies and refines the memory-network model by applying multi-hop attention memory association, and verifies it on the 20 tasks of (Bordes, Chopra & Mikolov, 2016), where it raises accuracy by up to about 9.2%. The association-extraction part also serves to reduce the weights: compared with preserving all association computations, it removes on average about 30,000 weights per task, an overall reduction of 26.8% in the weight computation.

The remainder of this paper is organized as follows. The next section reviews and organizes the research literature related to memory networks; Section 3, Research Method and Design, organizes and explains the approach of this paper; Section 4, Experimental Results and Analysis, compares the performance of the models before and after improvement to verify the feasibility and value of the adopted methods; and the final section, Conclusions and Suggestions, summarizes the strengths and weaknesses of the adopted methods and the directions worth trying in the future.
Keywords: Memory Networks, Multi-hop Networks, Relation Networks, Attention Mechanism
1. Introduction
Research on deep learning has grown substantially in recent years, and among its models the memory network has received considerable attention for text-related natural-language tasks. In the natural-language domain, chatbots, question-answering tasks, and the like all have the character of sequential data; that is, the words of a sentence stand in temporal order, so the computation must supply the model with word-order information or feed the input in order. A memory model stores text or prior knowledge in external memory and, at reasoning time, retrieves the memory content most relevant to the question, which avoids losing important information during computation; examples include storing the prior knowledge of a question-answering task or the context of a conversation. Combined with the attention mechanism, the output module can focus on the important memory content according to the current question and reason out the correct answer.

This study combines commonly used natural-language datasets with deep-learning tools, attempting to combine different theories to strengthen language understanding and reasoning ability, and analyzes the performance in the experimental results. The experiments take a smaller amount of data as the verification target, aiming to improve learning in that setting.

2. Literature Review
Memory networks (Memory Networks) are mainly applied to question answering, sentiment analysis, and similar applications. They store prior knowledge in external memory, find the memory content relevant to the question through an attention mechanism, and then use a reasoning module to derive the final answer from the question and the relevant memories. A memory network is assembled from many modules, and each part can be realized by the designer in different ways. This section reviews research related to memory networks as well as the relevant theory of language models.

2.1 Attention Mechanism
The attention mechanism was first applied in the image domain; Bahdanau et al. (Bahdanau, Cho & Bengio, 2015) combined it with neural network models for the machine-translation task, the first application of the attention mechanism to natural language processing. When handling sequential data, an RNN (Elman, 1990) can handle short-range temporal relations effectively, with every time step consulting the output of the previous time step, but it stores important information only through its hidden units, updated as the time steps advance; during training over long sequences, gradient-vanishing and gradient-exploding problems may occur. The proposal of the Encoder-Decoder architecture (Cho et al., 2014) improved on the insufficient long-term memory of a single RNN and raised performance on various tasks in natural language processing. However, because the encoding vector fed into the decoding process is the same at every time step, much information is easily lost in the conversion, and enlarging the vector would increase the computational load; the proposal of the attention mechanism effectively improved the efficiency of the model's encoding.

The question reduction network (Question Reduction Networks, QRN) (Seo, Min, Farhadi & Hajishirzi, 2017) is a variant of the RNN that can handle both short- and long-range sequential relations effectively. Through a multi-round reading mechanism it gradually simplifies the question, achieving deeper semantic understanding, finally reasons out the final answer, and converts it into natural-language output. In addition, the formulas proposed in the QRN model allow parallelization along the time axis of the recurrent network, improving the efficiency of both training and inference.

Adding knowledge bases (Knowledge Bases, KBs) to a question-answering system can effectively increase the model's store of knowledge, but KBs are incomplete and cannot support different types of answers; owing to the sparsity of the data, it is difficult to build a KB covering all domains, which does not favor extension to different domains. The key-value memory network model (Key-Value Memory Networks) (Miller et al., 2016) stores the encodings of documents in key-value form. Its architecture is based on the end-to-end memory network, and the largest difference lies in how the memories are stored: the end-to-end memory network encodes text through different embedding matrices, while the key-value memory network represents it in key-value form, stored as key memory and value memory. Its advantage is that the prior knowledge can be suitably encoded before the network is trained; even for knowledge from different domains, the user can choose the encoding instead of relying solely on the training of the word-embedding matrices, giving more flexibility in use.
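A minimal NumPy sketch of the key-value read just described follows; the random key and value encodings and their dimensions are assumptions for illustration.

```python
# A minimal NumPy sketch of a key-value memory read: keys are matched against
# the question to get attention, and values are averaged under that attention.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def key_value_read(question, keys, values):
    """question: (d,); keys, values: (N, d) -> (d,) memory summary."""
    attn = softmax(keys @ question)     # relevance of each key to the question
    return attn @ values                # weighted sum over the value memories

N, d = 10, 32
rng = np.random.default_rng(1)
keys, values = rng.normal(size=(N, d)), rng.normal(size=(N, d))
q = rng.normal(size=d)
o = key_value_read(q, keys, values)     # fed to the answer module downstream
```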
The Memory Network (MemNN) (Weston, Chopra & Bordes, 2014), proposed by Facebook's artificial intelligence laboratory, aims to raise the long-term memory of neural models on sequential data, for instance by preserving the prior knowledge of a question-answering task or the context of a conversation. When handling sequences, an RNN deals well with short-range temporal order, consulting the previous time step's output at each step, but it stores important information only in its recurrent cell, whose content is overwritten as the steps advance; training on long sequences can suffer gradient vanishing and gradient exploding, so the RNN's long-term memory is poor. Even the Long Short-Term Memory model (LSTM) (Hochreiter & Schmidhuber, 1997), which raised long-term memory capacity considerably, remains limited on still longer sequences. Following the setup of Sukhbaatar et al. (2015) and Henaff et al. (2017) with one thousand training examples, the original memory model effectively improves long-term dependencies, passing 16 of the 20 bAbI tasks. Its limitation is that training requires strong supervision: the training data must supply the annotated sentences relevant to each query, which not all datasets provide and which hinders wider application; under weak supervision it passes only 2 of the 20 tasks, with sharply higher error. The End-to-End Memory Network, trained with weak supervision, raises the proportion of tasks passed and greatly lowers the mean error relative to the weakly supervised MemNN; it proposes different encodings for storing prior knowledge and realizes a multi-hop mechanism for natural-language question answering, strengthening the model's reasoning ability.

The Key-Value Memory Network is similar to the End-to-End Memory Network; the largest difference lies in how memories are stored. The End-to-End Memory Network encodes text through separate embedding matrices, whereas the Key-Value Memory Network stores information as a key memory and a value memory. Its advantage is that prior knowledge can be given a suitable encoding before the network is trained: even for knowledge from different domains, the user may choose the encoding rather than depend solely on training a word-embedding matrix, which makes the approach far more flexible.

The Query-Reduction Network (QRN) (Seo, Min, Farhadi & Hajishirzi, 2017) is a variant of the RNN that handles both short- and long-range sequential dependencies. Through multiple rounds of reading it gradually reduces the query, reaching a deeper semantic understanding, and finally reasons out the answer, converted into natural-language output. Moreover, the formulation proposed in QRN allows parallelization along the time axis of the recurrent network, improving the efficiency of training and inference.
The Recurrent Entity Network (EntNet) (Henaff, Weston, Szlam, Bordes & LeCun, 2017) records the world's entities and their states in memory; when new information arrives, the corresponding memory cell is updated according to the input. It applies to reading comprehension and question answering, and under single-hop training it outperformed earlier methods on the bAbI-10k and Children's Book Test (CBT) datasets.

The AoA Reader (Attention-over-Attention) model (Cui et al., 2017) targets cloze-style questions. Its largest difference from earlier models is that it predicts the final result from a combination of attention mechanisms rather than a single attention computation. The model encodes the prior knowledge and the question with bidirectional gated recurrent networks, takes the dot product of the encoded vectors, and applies a softmax to obtain word probabilities. This attention computation is common to many models; the paper's innovation is to compute not only document-to-query attention values but also query-to-document attention weights, multiplying the two matrices to obtain the final attention values, which the subsequent model uses for reasoning.

Trischler et al. (2016) proposed the EpiReader neural network model for cloze problems in natural language. EpiReader has two parts: the Extractor compares text and question at a shallow level and extracts several candidate answers to the question; the Reasoner compares the candidates with the question at a deeper semantic level. The Extractor filters a small set of candidates out of a large space of possibilities, and the Reasoner handles the more precise reasoning and matching.

Memory network models applied to language adopt either single-hop attention or multi-hop attention styles of reasoning. Because the memories are mutually independent, such models already learn well when data are plentiful, but with little data they struggle to learn the information in the dataset, so raising the efficiency of memory storage and the model's reasoning method becomes essential. This study combines commonly used natural-language datasets with deep learning tools, joining different theories to strengthen language understanding and reasoning, and analyzes the experimental results. The experiments take a small data volume as the validation target, aiming at a substantial improvement even when the dataset is insufficient. The goals reduce to two points:

(1) To study how a multi-hop attention mechanism affects a memory network's predictions.
(2) To study how extracting the relations between memories in a memory network raises its reasoning ability.

Under this small-dataset premise, the paper examines how different mechanisms affect a question-answering model: we combine the concept of the relation network, in the form of relational memory, with a memory model, and validate it on the bAbI dataset (Weston et al.).
", "num": null }, "TABREF29": { "type_str": "table", "text": "Cui et al., 2017) (Trischler et al., 2016)\u5169\u7bc7\u8ad6\u6587\u5be6\u9a57\u6240\u63d0\u4f9b\u7684\u8cc7\u6599\u70ba\u57fa\u790e\uff0c\u4f7f\u7528 bAbI \u6578\u64da\u96c6\u4e2d 10k \u6578\u64da\u91cf\u8a13\u7df4\uff0c\u4e26\u5e73\u5747 20 \u9805\u4efb\u52d9\u7684\u932f\u8aa4\u7387\u6bd4\u8f03\u7d50\u679c\u986f\u793a\u65bc\u8868 4\u3002\u7531\u5be6\u9a57 \u53ef\u4ee5\u4e86\u89e3\u5230\uff0c\u52a0\u5165\u95dc\u4fc2\u8a08\u7b97\u80fd\u6709\u6548\u7684\u63d0\u5347\u6a21\u578b\u7684\u8a13\u7df4\u7d50\u679c\u8207\u8a08\u7b97\u3002 \u8868 \u6578\u64da\u7de8\u78bc\u5b8c\u5f8c\u4ee5\u5411\u91cf\u5f62\u5f0f\u8868\u793a\u6bcf\u500b\u53e5\u5b50 \u3002t \u70ba\u4e0d\u540c\u6642\u9593\u6b65\u7684\u53e5\u5b50\uff0c\u4f9d\u7167\u9806\u5e8f\u8f38\u5165\u81f3 \u6a21\u578b\u5167\u66f4\u65b0\u8a18\u61b6\u69fd\u8207\u95dc\u4fc2\u69fd\u3002\u6bcf\u500b\u8a18\u61b6\u69fd\u7531 key \u548c value \u7d44\u6210,\u5206\u5225\u70ba wi \u548c hi \uff0c\u4ee5 key-value \u7684\u5f62\u5f0f\u4fdd\u5b58\u8cc7\u8a0a\u3002key \u8ca0\u8cac\u4fdd\u5b58\u5be6\u9ad4\u3001value \u8ca0\u8cac\u4fdd\u5b58\u72c0\u614b\uff0c\u7bc4\u4f8b\u5982\u4e0b\u65b9\u6240\u793a\u3002\u7bc4\u4f8b\u4e2d key \u4fdd\u5b58\u4e86 John \u9019\u500b\u5be6\u9ad4\uff0cvalue \u4fdd\u5b58\u4e86 John \u6240\u505a\u7684\u52d5\u4f5c\uff0c\u6bcf\u500b\u8a18\u61b6\u69fd\u90fd\u6709\u81ea\u5df1\u7684 key \u8207 value \u5411\u91cf\uff0c\u900f\u904e\u8f38\u5165\u6578\u64da\u8207 key-value \u7684\u6bd4\u5c0d\u53ef\u627e\u5230\u6b64\u6b21\u72c0\u614b\u66f4\u65b0\u61c9\u8a72\u66f4\u65b0\u65bc\u54ea\u500b\u8a18\u61b6\u69fd\u3002 John went to hallway.=>{key:John,value:went to hallway} \u7576\u6bcf\u500b\u53e5\u5b50\u8f38\u5165\u81f3\u6a21\u578b\u5167\u6642\uff0c\u7cfb\u7d71\u900f\u904e\u516c\u5f0f(2)\u8a08\u7b97\u53e5\u5b50\u8207 key\u3001value \u4e4b\u9593\u7684\u95dc\u4fc2\u3002 \u03c3\u8868\u793a sigmoid activation function\uff0cg i \u662f gate\uff0c\u8f38\u51fa\u6578\u503c\u5c07\u4ecb\u65bc 0~1 \u4e4b\u9593\uff0c\u6b64\u6578\u503c\u70ba\u9580\u63a7 \u6a5f\u5236\uff0c\u7528\u4ee5\u6c7a\u5b9a\u66f4\u65b0\u8207\u4fdd\u5b58\u591a\u5c11\u8a18\u61b6\u5167\u5bb9\u3002g i \u7531 w j \u548c h j \u6c7a\u5b9a\u3002\u524d\u8005\u8868\u793a\u8207\u95dc\u9375\u5b57\u7684\u5339\u914d \u7a0b\u5ea6\uff0c\u5f8c\u8005\u8868\u793a\u8207 memory \u5167\u5bb9\u7684\u5339\u914d\u7a0b\u5ea6\u3002\u8207\u6b64\u8a18\u61b6\u69fd\u5be6\u9ad4\u8d8a\u76f8\u95dc\u7684\u8a9e\u53e5\uff0c\u6240\u8a08\u7b97\u51fa \u7684\u6578\u503c\u6703\u8d8a\u9ad8\u3002\u516c\u5f0f(3)\u70ba RNN \u7684\u8a08\u7b97\u516c\u5f0f\uff0c\u7528\u4ee5\u8a08\u7b97\u51fa\u8f38\u5165\u53e5\u5b50\u7684\u5167\u5bb9\u3002 \u8868\u793a\u9700\u8981 \u65b0\u589e\u5230\u5df2\u6709\u7684 memory \u4e2d\u7684\u72c0\u614b\u503c\u3002 \u03d5 \u53ef\u4ee5\u662f\u4efb\u610f\u7684 activation function\uff0c\u5be6\u9a57\u9032\u884c\u6642\u4f7f \u7528\u7684\u662f PReLU\u3002 \u3001 \u3001 \u7686\u70ba\u53ef\u8a13\u7df4\u6b0a\u91cd,\u4e26\u4e14\u6240\u6709\u7684 gated RNN \u5171\u4eab\u9019\u4e9b\u5f15\u6578\uff0c\u65bc\u6574 \u65bc 20 \u9805\u4efb\u52d9\u4e2d\u63d0\u4f9b 1k \u8cc7\u6599\u91cf\u8207 10k \u8cc7\u6599\u91cf\uff0c\u53ef\u5be6\u9a57\u6578\u64da\u91cf\u591a\u5be1\u5c0d\u65bc\u6a21\u578b\u5b78\u7fd2\u7684\u5f71\u97ff\u3002 \u672c\u7814\u7a76\u76ee\u6a19\u70ba\u65bc\u6578\u64da\u96c6 1k \u7684\u524d\u63d0\u4e0b\uff0c\u63d0\u5347\u6a21\u578b\u8a13\u7df4\u6548\u679c\u3002 
2.4 Relation Network

The Relation Network (Santoro et al., 2017) adds the computation of relations between objects, entities, or sentences, supplying more information to the subsequent reasoning module. Applied to Visual Question Answering (VQA), it uses a simple model to build the links between objects; the core idea is that the final answer bears a definite relation to pairs of objects, and the question also conditions the query over those pairs. A neural network computes the latent relation between every two objects. The strength of the Relation Network is its simple, flexible architecture: it can be inserted into different models to raise their reasoning ability and applied to relational-reasoning tasks. The original paper validated it on question-answering datasets, passing eighteen of the twenty bAbI tasks and achieving the best result on Sort-of-CLEVR, beyond human-level scores.

RelNet (Bansal, Neelakantan & McCallum, 2017) brings this pairwise-relation idea into the Recurrent Entity Network, using relations to compute the associations between memories. In earlier EntNet and memory network models the memories are stored independently of one another; RelNet links them through relation computation. The formula for storing memory states is the same as in the original model, with the addition of computing the relations between different memories, applied in the final reasoning step.

The Recurrent Relational Network (RRN) (Palm, Paquet & Winther, 2017) uses node relations to solve Sudoku: a 9x9 puzzle has 81 nodes, each of which must consider the information in its row, column, and box, where no digit may repeat. The model initializes each node's state over {1, 2, ..., 81}, computes the relation between every pair of nodes with a multi-layer perceptron (MLP), and sums the resulting relation values to update each node's state; every update considers the previous time step's state, the input, and the relation values. Beyond Sudoku, the model also performs strongly on the bAbI dataset and Pretty-CLEVR.

With ten thousand bAbI training examples, the mean error rates over the 20 tasks reported for these models are compared in Table 4; the comparison indicates that adding relation computation effectively improves a model's results.

Table 4. Mean error rates of relation-based models on bAbI-10k

Method   Mean Error Rate (%)
RRN      0.46 +/- 0.77
RelNet   0.29
EntNet   9.7 +/- 2.6
2.5 Pre-trained Model

Recent language-model research pre-trains a general-purpose language model on large volumes of text and then fine-tunes it with supervised data for each concrete application, raising the model's performance. The BERT model (Devlin, Chang, Lee & Toutanova, 2018), and the subsequently published ALBERT model (Lan et al., 2019), which makes BERT smaller and faster to train, have been widely applied to question-answering tasks with excellent results. Combining a pre-trained model with subsequent fine-tuning has brought very large performance gains to many natural language processing tasks and allows strong results to be trained from much smaller datasets: training on large amounts of unlabeled text teaches the relations among words and sentences, so the encoded vectors represent their meaning more accurately.

3. Research Method

A memory network strengthens long-term memory by preserving an external memory; an attention mechanism then locates the memory slots relevant to the question, and the corresponding answer is inferred. The memory slots operate independently of one another, each preserving information for a different entity, so on complex tasks that require several memories to interact during reasoning, the reasoning module cannot draw on enough information to output the correct answer. This study instead computes the relations between memories and reasons through a multi-hop mechanism, attempting to raise the model's memory storage and reasoning ability, and validates it on question-answering tasks, which demand both language understanding and reasoning. We next introduce the overall model architecture and the design of each module.
3.1 Model Architecture

Our model is based on EntNet (Henaff et al., 2017): fixed-size memory cells preserve the entities in the input data and their attributes, and the memory contents are updated on the fly as sentences arrive. The architecture adds the computation and extraction of relations between memory slots, preserved in relation slots. In the original model the memory slots operate independently of one another, but memories ought to bear some relation to each other, and relation computation links the contents of the slots. The model divides into three parts. The Encoder module encodes the input natural-language sentences into vectors for subsequent computation. The dynamic memory module updates the information stored in the memory slots at every input sentence, then updates the relation slots by computing the relations among the memory slots; relation slots and memory slots are equal in size. Finally, the output module reasons out the answer to the question from the memory and relation slots. The overall architecture is shown in Figure 1. Compared with EntNet, our architecture adds the relation memory, attempting to increase the links between memories by incorporating memory-relation computation.

Figure 1. Model architecture
3.2 Encoder Module

The model is applied to question answering, whose data are all natural language; since natural language cannot be fed to a computer directly, it must be converted into an encoded form before training. This module encodes in two steps: label encoding first converts a sentence into numbers, and position encoding then gives each word its relative position within the sentence. First a vocabulary is built that maps every word appearing in the dataset to a fixed number; the number of indices equals the vocabulary size, and no entries are added beyond it. Once the vocabulary is complete, the dataset is converted into numeric form. In the example below, {} holds the index assigned to each word and [] is a sentence converted to indices through the vocabulary.

{hallway:1, John:2, the:3, to:4, went:5, .:6}
John went to the hallway. -> [2, 5, 4, 3, 1, 6]
After this basic conversion, every sentence is represented by numbers, but the encoding itself carries no meaning: in the example above hallway is 1 and John is 2, yet "twice hallway equals John" explains nothing about the relation between the words. The numbers are therefore converted again into vectors learned by the model; their only purpose is to map identical words to identical vectors. A set of trainable vectors equal in number to the vocabulary is created, one per word, and their values are updated together with the rest of the model during training, so the word vectors are learned from natural-language data.

Position encoding assigns an order relation among words: the meaning of natural language depends on word order, and with a bag-of-words (BOW) encoding a word contributes identically wherever it appears in the sentence, whereas in real language the same word in different positions can change the meaning considerably, as the example below shows.

John likes Mary. != Mary likes John.

Our position encoding is learned: a mask adds order information to the sentence, as in Equation (1), where $\{e_1, \dots, e_k\}$ are the encoding vectors of the words in the sentence and $\{l_1, \dots, l_k\}$ are trainable multiplicative masks to be learned. The purpose of the mask is to inject positional information; its weights are updated during training, so the $l$ vector multiplied in differs when the same word occupies different positions. The masked vectors are finally summed to represent the whole sentence:

$s_t = \sum_{i=1}^{k} l_i \odot e_i$    (1)
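As a concrete illustration, the following minimal NumPy sketch (our reconstruction, not the authors' code; the toy vocabulary, dimensions, and random initialization are assumptions) walks through label encoding followed by the position-masked sum of Equation (1):

```python
import numpy as np

# Toy vocabulary and dimensions (illustrative assumptions).
vocab = {"hallway": 1, "John": 2, "the": 3, "to": 4, "went": 5, ".": 6}
embed_dim, max_len = 8, 6

rng = np.random.default_rng(0)
E = rng.normal(size=(len(vocab) + 1, embed_dim))  # trainable word vectors (index 0 unused)
L = rng.normal(size=(max_len, embed_dim))         # trainable multiplicative masks l_i

def encode(sentence):
    ids = [vocab[w] for w in sentence]            # label encoding: words -> indices
    e = E[ids]                                    # (k, d) word embedding lookup
    return (L[: len(ids)] * e).sum(axis=0)        # Eq. (1): position-masked sum

s_t = encode(["John", "went", "to", "the", "hallway", "."])
```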
3.3 Dynamic Memory Module

The dynamic memory module consists of two parts, the memory slots and the relation slots. A memory slot stores information in key-value form: the key preserves an entity and the value preserves its state. After the memories are updated, the relation slots are updated according to the relations between memories; the numbers of memory and relation slots are equal. The architecture is shown in Figure 2, where the part added relative to EntNet, the relation slots r, is drawn with thicker lines.

Figure 2. Architecture of the dynamic memory module

After encoding, every sentence is represented as a vector $s_t$, where $t$ indexes the time step; sentences are fed to the model in order, updating the memory and relation slots. Each memory slot consists of a key $w_i$ and a value $h_i$. In the example below, the key preserves the entity John and the value preserves what John did; every slot has its own key and value vectors, and comparing the input against the key-value pairs determines which slot the state update should be written to.

John went to hallway. => {key: John, value: went to hallway}

When a sentence enters the model, the system computes its relation to the keys and values through Equation (2): $\sigma$ is the sigmoid activation function and $g_j$ is a gate whose output lies between 0 and 1, deciding how much memory content to update and preserve. $g_j$ is determined by $w_j$ and $h_j$; the former measures the match with the key, the latter the match with the memory content, and the more relevant a sentence is to a slot's entity, the higher the computed value. Equation (3) is an RNN-style computation of the content carried by the input sentence, where $\tilde{h}_j$ is the state to be added to the existing memory; $\phi$ may be any activation function (PReLU in our experiments), and $U$, $V$, $W$ are trainable weights shared by all the gated RNNs and updated together with the whole model. Equation (4) updates each memory slot $h_j$, adding the gated new memory to the old content, with the gate value controlling the magnitude of the update. Equation (5) forgets unnecessary information: if new memories were added indefinitely, the vector values would keep growing, so dividing by a normalization term keeps the memory vectors in range.

$g_j \leftarrow \sigma(s_t^\top w_j + s_t^\top h_j)$    (2)
$\tilde{h}_j \leftarrow \phi(U h_j + V w_j + W s_t)$    (3)
$h_j \leftarrow h_j + g_j \odot \tilde{h}_j$    (4)
$h_j \leftarrow h_j / \|h_j\|$    (5)
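A minimal sketch of one dynamic-memory write, following Equations (2)-(5) as described above; the dimensions, initializations, and the stand-in PReLU slope are illustrative assumptions rather than the authors' released implementation:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def prelu(x, a=0.25):                    # phi; the paper uses PReLU
    return np.where(x > 0, x, a * x)

d, n_slots = 8, 20                       # toy sizes (assumptions)
rng = np.random.default_rng(1)
U, V, W = (rng.normal(size=(d, d)) for _ in range(3))  # shared trainable weights
w = rng.normal(size=(n_slots, d))        # keys: one entity per slot
h = rng.normal(size=(n_slots, d))        # values: the entities' states

def memory_update(s, h):
    """One write of encoded sentence s into all memory slots (Eqs. 2-5)."""
    g = sigmoid(s @ w.T + s @ h.T)                 # (2) gate: key match + value match
    h_tilde = prelu(h @ U.T + w @ V.T + s @ W.T)   # (3) candidate new content
    h = h + g[:, None] * h_tilde                   # (4) gated write
    return h / np.linalg.norm(h, axis=1, keepdims=True)  # (5) keep vectors in range

h = memory_update(rng.normal(size=d), h)
```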
Each relation slot preserves the relations between its corresponding memory and all the other memories. Updating the memory slots yields the current input's gate value for every memory; multiplying the gate values pairwise computes the association between each pair of memories, and gate products belonging to the same relation object are summed into the same relation gate, as in Equations (6) and (7), whose purpose is to keep relations involving the same object stored together (the indices $i$, $j$ select the memory slots belonging to the same relation object). For example, with 20 memory slots, slot 1 forms 19 relations with all the other slots, and these are preserved in the first relation slot, so that later reasoning can read a memory's associations with the others directly from the corresponding relation slot.

$g_{i,j} \leftarrow g_i \, g_j$    (6)
$g^{r}_i \leftarrow \sum_{j \neq i} g_{i,j}$    (7)

Equation (8) computes the relation update content: $A$ and $B$ are trainable weights applied to the current relation-slot content $r_i$ and the input sentence $s_t$, with PReLU as the activation function. Equation (9) performs the relation update, multiplying the relation gate by the new relation content and adding the original relation-slot content.

$\tilde{r}_i \leftarrow \phi(A r_i + B s_t)$    (8)
$r_i \leftarrow r_i + g^{r}_i \odot \tilde{r}_i$    (9)

3.4 Output Module

After the dynamic memory module has updated the memory slots $h$ and relation slots $r$, their final states are handed to the output module for reasoning. Equation (10) concatenates each memory slot $h_i$ with its relation slot $r_i$ and multiplies by a trainable weight (written $X$ here, since the original symbol was lost in extraction) to form the memory $m_i$. An attention mechanism then computes a value $p_i$ measuring relevance to the query; the higher the value, the more relevant, as in Equation (11). Multiplying the attention values by the corresponding memories keeps the values of the more relevant memories relatively high, as in Equation (12), preserving the information important to the question. The system finally infers the answer through Equation (13), where $R$ and $H$ are parameter matrices; the query is encoded, in the same way as during training, into a $k$-dimensional vector $q$, and since our data are question-answering tasks, the system outputs the most probable answer $y$ over the vocabulary.

$m_i = X \, [h_i ; r_i]$    (10)
$p_i = \mathrm{softmax}(q^\top m_i)$    (11)
$u = \sum_i p_i \, m_i$    (12)
$y = R \, \phi(q + H u)$    (13)

The literature reviewed in Section 2 shows that a multi-hop mechanism helps a model's reasoning, so this study adds the idea to the reasoning module: the vector $u$, the attention-weighted sum of the memories, is added to the original query vector to form a new query, and the computation of Equations (11) and (12) is repeated. Each extra pass raises the hop count by 1: the original reasoning module is hop 1, one repetition is hop 2, and so on, as in Equation (14).

$q^{(n+1)} = q^{(n)} + u^{(n)}$    (14)
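The sketch below strings these pieces together: the relation-slot write of Equations (6)-(9) and the output module of Equations (10)-(14). It is our reconstruction under stated assumptions: the projection written $X$ in Eq. (10) is called `proj` here, PReLU is approximated by ReLU, and the weights are random toy values.

```python
import numpy as np

d, n = 8, 20
rng = np.random.default_rng(2)
A, B = rng.normal(size=(d, d)), rng.normal(size=(d, d))  # relation-update weights
R, H = rng.normal(size=(d, d)), rng.normal(size=(d, d))  # Eq. (13) parameter matrices
proj = rng.normal(size=(d, 2 * d))                       # "X" in Eq. (10)

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def update_relations(g, s, r):
    """Relation-slot update, Eqs. (6)-(9)."""
    # (6)-(7): pool the pairwise gate products g_i * g_j into slot i's relation gate
    g_rel = np.array([sum(g[i] * g[j] for j in range(n) if j != i)
                      for i in range(n)])
    r_tilde = np.maximum(r @ A.T + s @ B.T, 0.0)         # (8), ReLU in place of PReLU
    return r + g_rel[:, None] * r_tilde                  # (9) gated write

def answer(q, h, r, hops=1):
    """Output module, Eqs. (10)-(13); hops > 1 repeats attention as in Eq. (14)."""
    m = np.concatenate([h, r], axis=1) @ proj.T          # (10) joint slot memories
    for hop in range(hops):
        p = softmax(m @ q)                               # (11) attention over slots
        u = (p[:, None] * m).sum(axis=0)                 # (12) weighted summary
        if hop < hops - 1:
            q = q + u                                    # (14) summary joins the query
    return R @ np.maximum(q + H @ u, 0.0)                # (13) answer representation
```

Experiment 1 below corresponds to calling `answer(..., hops=2)` in this sketch.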
3.5 Discussion

This study takes the EntNet model as its basis and tries to increase the links between memories through memory-relation computation, rather than letting the memory slots operate independently. Section 3.1 introduced the overall architecture, composed of the Encoder module, the dynamic memory module, and the output module. Section 3.2 described how text is converted into vector form, from building the basic vocabulary to training the word vectors. Section 3.3 detailed the dynamic memory module, responsible for preserving and updating memories: beyond the memory slots of the original model, relation computation is added so that different memory slots can compute their mutual relations; since the number of pairwise relations grows rapidly with the number of memory slots, extracting them into relation slots equal in number to the memories lowers the weights and the computation. Section 3.4 gave the details of the output module: once the memory module has preserved the prior knowledge, the output module retrieves the parts relevant to the question and reasons out the final answer. The relation computation and multi-hop reasoning of our method can be generalized to other memory network architectures, or to any model that preserves memories.
All experiments use the bAbI dataset for validation, a comprehensive reading-comprehension and question-answering dataset provided by Facebook AI Research (FAIR). It was chosen for four reasons:

(1) The dataset contains twenty kinds of tasks, so a model's strengths and weaknesses can be tested from different angles.
(2) It is a standard dataset for question answering and natural language understanding, with many published model results available for comparison.
(3) It includes English, Hindi, and shuffled (human-unreadable) versions, so a language model's behavior on different natural languages can be observed.
(4) Each of the 20 tasks comes in 1k and 10k training sizes, so the effect of data volume on learning can be examined; our goal is to improve training under the 1k setting.

For accuracy, the experiments below use cross-validation: the model is validated on different training splits and the results of several runs are averaged. Each model is trained with 1k examples, and the 10k data are cut into multiple 1k files so that the improvement can be verified over repeated runs.
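A small sketch of this evaluation protocol, with `train_and_eval` as a hypothetical stand-in for the full training loop:

```python
def cross_validated_error(examples, train_and_eval, fold_size=1000):
    """Cut the 10k split into disjoint 1k folds, train once per fold, average."""
    folds = [examples[i:i + fold_size] for i in range(0, len(examples), fold_size)]
    errors = [train_and_eval(fold) for fold in folds]
    return sum(errors) / len(errors)
```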
4.1 Experiment 1 (Multi-hop Reasoning)

Purpose: The experiments and discussion above show that for complex question-answering reasoning, multi-hop generally outperforms single-hop, while EntNet is a single-hop model. This experiment applies the multi-hop concept to the output module, attempting to strengthen the recurrent network's reasoning ability.

Content: We introduce the multi-hop reasoning formulation of the End-to-End Memory Network, chosen because that network resembles EntNet in keeping information in memory cells, whereas other architectures differ more. The output module infers the answer by using attention to find the memories relevant to the current question; one pass of attention is a single hop, and this experiment increases the number of hops. As in Equation (14) of Section 3.4, the value computed by attention is added to the query that produced it, forming a new query that attends over the memories again; each repetition raises the hop count by 1. The experiment compares two hops against the original single hop. The results appear in the Multi hop column of Table 6.

The data show that increasing the hop count did not raise accuracy; on some tasks accuracy even fell relative to the original model, and the mean error rate rose. Comparing the trained model's behavior on the training and test data reveals an overfitting tendency: complicating the reasoning module does not improve the results. Our conjecture is that the memory module does not preserve enough information to supply the reasoning module for subsequent inference. Experiment 2 was therefore designed to strengthen the model's external memory through relation computation and relation-slot storage, testing whether preserving more relational information improves the model.
4.2 Experiment 2 (Memory Relations)

Purpose: This experiment modifies the dynamic memory module of EntNet. Experiment 1 showed that complicating the reasoning module cannot raise reasoning ability, so Experiment 2 adds relation computation between memories. Compared with memories that each preserve information independently, relation computation chains related memories together, just as human memory does not keep everything fully separate: association links one idea or memory to another. The experiment additionally preserves the relations between memories, increasing the information available to the reasoning module and thereby the model's reasoning ability.

Content: Every pair of memory slots is gated through Equation (6) to decide how much the current input updates the relation slots. The number of memory slots matches the original model: 20 slots preserve the important information. Relation slots are added to preserve each memory slot's relations; for instance, the relations computed between the first memory slot and all the others are stored in the first relation slot. The number of relation slots equals the number of memory slots, ensuring that relation storage does not balloon as the memory slots grow. The number of pairwise relations follows the combination count in Equation (15): the 20 memory slots yield 190 relations, which are then stored into the corresponding relation slots. The results appear in the Relation slot column of Table 6.

$\binom{m}{2} = \frac{m(m-1)}{2}, \quad \binom{20}{2} = 190$    (15)

The bAbI tasks differ in reasoning difficulty; some require combining several pieces of prior knowledge in cross-inference before an answer can be derived. The data show that relation computation effectively lowers the mean error: compared with independently stored memories, the method learns the associations between words and sentences from the data more effectively. Not all tasks improve markedly, however. Task 2 improves most: it requires finding two supporting-fact sentences before the answer can be inferred, and the relation computation is exactly a pairwise computation over memory slots, so the effect on Task 2 is pronounced. Task 3, with more supporting facts, does not improve; we attribute this to the small dataset and to the particular relation-computation method.

Table 5 compares the parameter counts of keeping all pairwise relation computations (as RelNet does) against extracting them into relation slots. More weights mean more GPU computation. Extracting the 190 relation values into 20 relation slots preserves only the important information and cuts the model's weights by 600,000 in total, 26.8% fewer than using all relations. The per-task data show that the gains differ across tasks, and a few accuracies dip slightly, but the overall trend is an improvement.

Table 5. Parameter counts: all pairwise relations vs. relation slots

Task                               All relation method   Relation slot
Task 1: Single Supporting Fact     110000                80000
Task 2: Two Supporting Facts       112900                82900
Task 3: Three Supporting Facts     113400                83400
Task 4: Two Argument Relations     109400                79400
Task 5: Three Argument Relations   115200                85200
Task 6: Yes/No Questions           113400                83400
Task 7: Counting                   115100                85100
Task 8: Lists/Sets                 115100                85100
Task 9: Simple Negation            111100                81100
Task 10: Indefinite Knowledge      111500                81500
Task 11: Basic Coreference         111600                81600
Task 12: Conjunction               110400                80400
Task 13: Compound Coreference      111600                81600
Task 14: Time Reasoning            111700                81700
Task 15: Basic Deduction           109700                79700
Task 16: Basic Induction           109500                79500
Task 17: Positional Reasoning      111100                81100
Task 18: Size Reasoning            110700                80700
Task 19: Path Finding              113200                83200
Task 20: Agent's Motivations       113600                83600
Sum of all task parameters         2240200               1640200
Mean parameters                    112010                82010
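A worked check of Equation (15) and the Table 5 totals (the figures below simply restate the reported counts and reduction):

```python
from math import comb

print(comb(20, 2))                          # 190 pairwise relations for 20 slots
all_rel, rel_slot = 2_240_200, 1_640_200    # Table 5 parameter sums
print(all_rel - rel_slot)                   # 600000 fewer weights in total
print(round(100 * (all_rel - rel_slot) / all_rel, 1))  # 26.8 (% reduction)
```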
4.3 Experiment 3 (Self Memory Relation)

Purpose: In Experiment 2 the 190 computed relations are extracted into relation slots equal in number to the memory slots, so the output module need not compute over all relations: the dynamic memory module filters out the important relational information at storage time, rather than preserving every relation value, and the output can focus its reasoning on what matters. This experiment instead tries updating the memory relations directly into the original memory slots, without separate relation slots, to test whether self-relational updates alone can raise the quality of what the memory slots preserve.

Content: The relation computation is the same as in Experiment 2; the difference is the update target, which is the content preserved by the memory slot itself. The part originally written into the relation slots is redirected to the memory slots, and the output's update target is likewise the memory slots. As in Equations (16)-(18), reconstructed here from the surrounding description, the relation of the current input sentence between each pair of memory slots is computed as in Experiment 2, and Equation (19) uses the gate values to decide the size of the update, applied to the memory itself. The results appear in the Self memory column of Table 6.

$g_{i,j} \leftarrow g_i \, g_j$    (16)
$g^{r}_i \leftarrow \sum_{j \neq i} g_{i,j}$    (17)
$\tilde{h}_i \leftarrow \phi(A h_i + B s_t)$    (18)
$h_i \leftarrow h_i + g^{r}_i \odot \tilde{h}_i$    (19)
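Continuing the earlier sketch (and reusing its `n`, `A`, `B`, and NumPy setup), the self-relation variant of Equations (16)-(19) writes the pooled pairwise gates back into the memory slots themselves instead of separate relation slots; as before, this is an illustrative reconstruction:

```python
def self_memory_update(g, s, h):
    """Self-relation write, Eqs. (16)-(19): the update target is h itself."""
    g_rel = np.array([sum(g[i] * g[j] for j in range(n) if j != i)
                      for i in range(n)])            # (16)-(17) pooled pair gates
    h_tilde = np.maximum(h @ A.T + s @ B.T, 0.0)     # (18), ReLU in place of PReLU
    return h + g_rel[:, None] * h_tilde              # (19) gated self-write
```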
Table 6. Error rates of the original model and the multi-hop, relation-slot, and self-relation variants

Task                               Original model   Multi hop (hop2)   Relation slot   Self memory
Task 1: Single Supporting Fact     0.00%            0.00%              0.00%           0.00%
Task 2: Two Supporting Facts       20.80%           28.40%             11.60%          52.30%
Task 3: Three Supporting Facts     58.70%           56.70%             62.90%          62.10%
Task 4: Two Argument Relations     0.10%            0.20%              0.00%           0.00%
Task 5: Three Argument Relations   1.20%            1.20%              1.40%           17.20%
Task 6: Yes/No Questions           3.60%            3.50%              1.90%           11.40%
Task 7: Counting                   10.10%           10.10%             6.90%           23.40%
Task 8: Lists/Sets                 1.30%            2.20%              1.70%           9.10%
Task 9: Simple Negation            0.40%            0.00%              0.00%           35.60%
Task 10: Indefinite Knowledge      0.50%            3.70%              0.80%           3.80%
Task 11: Basic Coreference         8.90%            8.00%              4.20%           7.50%
Task 12: Conjunction               0.00%            0.00%              0.00%           0.60%
Task 13: Compound Coreference      5.60%            5.60%              6.20%           5.80%
Task 14: Time Reasoning            20.50%           21.30%             20.60%          55.90%
Task 15: Basic Deduction           5.10%            29.70%             0.00%           45.80%
Task 16: Basic Induction           50.00%           50.70%             51.00%          51.20%
Task 17: Positional Reasoning      41.20%           39.00%             37.70%          39.40%
Task 18: Size Reasoning            8.00%            7.60%              6.20%           8.50%
Task 19: Path Finding              87.80%           86.80%             85.30%          87.60%
Task 20: Agent's Motivations       0.90%            0.20%              0.90%           2.90%
Mean Error                         16.24%           17.74%             14.96%          26.00%
Failed Tasks (error > 5%)          11               11                 9               15

The data show that Experiment 2 brings the clearest improvement, with a large accuracy gain on Task 2 in particular. Experiment 3 degrades most: relative to the gain from relation computation in Experiment 2, the mean accuracy falls considerably. Updating the relation computation and the memory's own content into the same slot lowers the model's overall performance; we infer that writing relation updates into the memory slots confuses what the memory preserves. Relation computation can improve reasoning, but directly updating the memory slots harms memory preservation, so at present keeping the two stored separately works better. This does not mean self-updating memory relations is unworkable; rather, the contents and characteristics of the memory and relation slots need closer study to find a better way of preserving memories.
By the parameter comparison in Table 7, however, self-relational memory updating does remove a good deal of weight computation: each task drops by 10,000 weights, 200,000 over all twenty tasks.

Table 7. Parameter counts: relation slots vs. self memory update

Task                               Relation slot   Self memory update
Task 1: Single Supporting Fact     80000           70000
Task 2: Two Supporting Facts       82900           72900
Task 3: Three Supporting Facts     83400           73400
Task 4: Two Argument Relations     79400           69400
Task 5: Three Argument Relations   85200           75200
Task 6: Yes/No Questions           83400           73400
Task 7: Counting                   85100           75100
Task 8: Lists/Sets                 85100           75100
Task 9: Simple Negation            81100           71100
Task 10: Indefinite Knowledge      81500           71500
Task 11: Basic Coreference         81600           71600
Task 12: Conjunction               80400           70400
Task 13: Compound Coreference      81600           71600
Task 14: Time Reasoning            81700           71700
Task 15: Basic Deduction           79700           69700
Task 16: Basic Induction           79500           69500
Task 17: Positional Reasoning      81100           71100
Task 18: Size Reasoning            80700           70700
Task 19: Path Finding              83200           73200
Task 20: Agent's Motivations       83600           73600
Sum of all task parameters         1640200         1440200
Mean parameters                    82010           72010
4.4 Experiment Summary

The three experiments above mainly explored two directions: accuracy and the amount of weight computation. On the accuracy side, Experiment 1 modified the output module, applying the same attention mechanism repeatedly in an attempt to strengthen reasoning on complex tasks; its mean error rate instead rose slightly, and we infer that the stored memory content is not rich enough to support such a complex reasoning process. Experiment 2 improved the dynamic memory module: by computing the relations that connect different memories, it links the memory slots to one another, and its results show that overall accuracy improves effectively. Experiment 3 let the memory slots update themselves through their relations; as discussed above, its mean accuracy dropped instead.

On the weight side, Experiment 1 changes weight usage the least: the model adds no memory-relation computation, and because it reuses the same attention mechanism for the repeated computation, its weight usage differs little from the original model. Experiment 2, which extracts relations rather than computing all pairwise relations, shows the largest reduction, cutting about 600,000 weights in total across all tasks, 26.8% fewer weights than Experiment 1. Experiment 3, although less accurate, cuts another 200,000 weights across all tasks relative to Experiment 2.
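The update equations of Experiment 2 are not reproduced in this section; purely as an illustration of the idea (an EntNet-style gated slot update in which each memory slot also receives a summary of its relations to the other slots), a minimal PyTorch sketch might look as follows. The module name, dimensions, and the exact form of the relation term are all assumptions made for the example, not the paper's formulation.

```python
import torch
import torch.nn as nn

class RelationalMemoryUpdate(nn.Module):
    """Illustrative only: an EntNet-style gated slot update where each
    memory slot also receives a summary of its relations to other slots."""
    def __init__(self, dim=64):
        super().__init__()
        self.U = nn.Linear(dim, dim, bias=False)  # transform of the slot itself
        self.W = nn.Linear(dim, dim, bias=False)  # transform of the input sentence
        self.R = nn.Linear(2 * dim, dim)          # relation over a pair of slots

    def forward(self, memory, sentence):
        # memory: (n_slots, dim); sentence: (dim,) encoding of the current input
        n = memory.size(0)
        # Pairwise relation vectors r(m_i, m_j), averaged over j for each slot i.
        left = memory.unsqueeze(1).expand(n, n, -1)
        right = memory.unsqueeze(0).expand(n, n, -1)
        relation = torch.tanh(self.R(torch.cat([left, right], dim=-1))).mean(dim=1)
        # EntNet-style gated update: gate each slot by its match with the input.
        gate = torch.sigmoid(memory @ sentence)              # (n_slots,)
        candidate = torch.tanh(self.U(memory) + self.W(sentence) + relation)
        memory = memory + gate.unsqueeze(-1) * candidate
        return memory / (memory.norm(dim=-1, keepdim=True) + 1e-8)

memory = RelationalMemoryUpdate()(torch.randn(20, 64), torch.randn(64))
```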
5. Conclusions

This paper uses relation computation to make the different memory slots within a memory relate to one another, just as the different entities in human memory do not store their information in isolation; for example, an entity's information or attributes can evoke other entities or events. The concept of relations was first proposed by the Google DeepMind team (Santoro et al., 2017) and applied to Visual Question Answering (VQA), computing the relation between every pair of objects. Memory networks, in turn, aim to improve a model's long-term memory through different ways of storing memories and reasoning over them. The RelNet model first brought the concept of relations into memory networks and improved model accuracy, but its drawback is also obvious: it greatly increases the model's weights and computation. The relation extraction proposed in this paper can greatly reduce the weight computation.

The question-answering tasks used in our experiments employ 20 memory slots, and relation extraction reduces 190 pairwise relations to 20 relation slots; even on such small tasks the drop in computation is substantial. Larger natural-language tasks use correspondingly more memory slots, and the pairwise relation computation among them grows rapidly. Extracting the important relational information into relation slots can greatly reduce both the storage that the memories occupy and the computation the output module needs for reasoning.
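With n memory slots there are n(n-1)/2 unordered pairs, so pairwise relation computation in the style of Santoro et al. (2017) grows quadratically, while extraction into a fixed number of relation slots keeps the count constant; the 190-to-20 reduction above is exactly the n = 20 case. A small illustration (the 50- and 100-slot settings are hypothetical):

```python
def n_pairwise_relations(n_slots: int) -> int:
    """Unordered slot pairs a RelNet-style module must relate: n*(n-1)/2."""
    return n_slots * (n_slots - 1) // 2

K_RELATION_SLOTS = 20  # fixed number of relation slots used in this paper's tasks

for n in (20, 50, 100):  # 50 and 100 are hypothetical larger tasks
    print(f"{n} memory slots: {n_pairwise_relations(n)} pairwise relations "
          f"vs. {K_RELATION_SLOTS} relation slots after extraction")
# 20 memory slots: 190 pairwise relations vs. 20 relation slots after extraction
# 100 memory slots: 4950 pairwise relations vs. 20 relation slots after extraction
```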
The results of Experiment 2 show that overall accuracy can be improved effectively, and the method is not limited to the EntNet model used in our experiments: it can be applied within other memory-network architectures, and non-memory-network models can likewise use the same concept to compute relations among objects, data, sentences, and points in time.

Building on this paper, future work can improve the relation computation itself. Experiment 2 shows that the improvement differs across tasks, with Task 2 improving the most; this is tied to the nature of each task, and more relation computation may raise the accuracy of other tasks as well. The relation computation in this study operates on pairs of memory slots, but in the real world entities can be related in pairs, in triples, or as groups; for example, basketball, badminton, and football all belong to ball sports. Task 3 of the bAbI dataset likewise requires more prior knowledge and interactive reasoning. If future relation computation can incorporate such group relations and strengthen the connections among memories, it should further improve the model's memory retention and reasoning ability.

The improvements in this paper all fall on memory storage and reasoning; the encoding part should be improvable through pre-training. Recent language-model research largely follows the pre-train and fine-tune paradigm. In our experiments the module weights are learned from the task data alone, and with so little data the model can hardly learn the deeper meaning of words and sentences. Given the large amount of web text now available, the model's Encoder module could be pre-trained to improve the encoding, or the encoding scheme itself could be refined, raising the prediction performance of the overall model.
", "num": null } } } }