{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T01:07:54.654642Z" }, "title": "Reducing the Search Space for Parallel Sentences in Comparable Corpora", "authors": [ { "first": "R\u00e9mi", "middle": [], "last": "Cardon", "suffix": "", "affiliation": { "laboratory": "UMR 8163 STL -Savoirs Textes Langage", "institution": "Univ. Lille", "location": { "postCode": "F-59000", "settlement": "Lille", "country": "France" } }, "email": "remi.cardon@univ-lille.fr" }, { "first": "Natalia", "middle": [], "last": "Grabar", "suffix": "", "affiliation": { "laboratory": "UMR 8163 STL -Savoirs Textes Langage", "institution": "Univ. Lille", "location": { "postCode": "F-59000", "settlement": "Lille", "country": "France" } }, "email": "natalia.grabar@univ-lille.fr" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This paper describes and evaluates three methods for reducing the search space for parallel sentences in monolingual comparable corpora. Basically, when searching for parallel sentences between two comparable documents, all the possible sentence pairs between the documents have to be considered, which introduces a great degree of imbalance between parallel pairs and non-parallel pairs. This is a problem because, even with a high-performing algorithm, a lot of noise will be present in the extracted results, thus introducing a need for an extensive and costly manual check phase. We propose to study how we can drastically reduce the number of sentence pairs that have to be fed to a classifier so that the results can be manually handled. We work on a manually annotated subset obtained from a French comparable corpus.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "This paper describes and evaluates three methods for reducing the search space for parallel sentences in monolingual comparable corpora. 
Basically, when searching for parallel sentences between two comparable documents, all the possible sentence pairs between the documents have to be considered, which introduces a great degree of imbalance between parallel pairs and non-parallel pairs. This is a problem because, even with a high-performing algorithm, a lot of noise will be present in the extracted results, thus introducing a need for an extensive and costly manual check phase. We propose to study how we can drastically reduce the number of sentence pairs that have to be fed to a classifier so that the results can be manually handled. We work on a manually annotated subset obtained from a French comparable corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Monolingual parallel corpora are useful for a variety of sequence-to-sequence tasks in natural language processing, such as text simplification (Xu et al., 2015) , paraphrase acquisition (Del\u00e9ger and Zweigenbaum, 2009) or style transfer (Jhamtani et al., 2017) . In order to build such parallel corpora, the typical approach is to start from comparable corpora and extract sentence pairs that share the same meaning. For instance, the participants of the BUCC 2017 shared task had to address this problem using bilingual corpora (Zweigenbaum et al., 2017) . One major obstacle is that, when considering two documents A and B, every single sentence from A has to be evaluated against every single sentence of B, when document metadata cannot be used to make assumptions as to where to look for corresponding sentences. This produces a large amount of noise, and even with high-performing algorithms, the result of the extraction has to be manually checked for quality. With large volumes of data, this can be extremely costly. This is a known issue when working with comparable corpora (Zhang and Zweigenbaum, 2017). 
Yet, the issue is either not mentioned in works on parallel corpora creation from comparable corpora, or external information is used, such as metadata (Smith et al., 2010), which greatly simplifies the task. In our work, we propose and evaluate methods for filtering out sentences and sentence pairs that have no chance of being of interest for the building of a parallel corpus. Hence, the purpose is to reduce the amount of manual checking that needs to be performed on the output of a classifier.", "cite_spans": [ { "start": 144, "end": 161, "text": "(Xu et al., 2015)", "ref_id": "BIBREF6" }, { "start": 187, "end": 218, "text": "(Del\u00e9ger and Zweigenbaum, 2009)", "ref_id": "BIBREF2" }, { "start": 237, "end": 260, "text": "(Jhamtani et al., 2017)", "ref_id": "BIBREF4" }, { "start": 529, "end": 555, "text": "(Zweigenbaum et al., 2017)", "ref_id": "BIBREF8" }, { "start": 1087, "end": 1115, "text": "(Zhang and Zweigenbaum, 2017", "ref_id": "BIBREF7" }, { "start": 1271, "end": 1291, "text": "(Smith et al., 2010)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "To perform our experiments, we work with a French comparable corpus containing biomedical documents with technical and simplified contents (Grabar and Cardon, 2018) . The corpus is composed of three subcorpora: drug information for medical practitioners and patients released by the French Ministry of Health (1: http://base-donnees-publique.medicaments.gouv.fr/), medical literature reviews and their manual simplification released by the Cochrane foundation 2 , and encyclopedia articles from Wikipedia 3 and Vikidia 4 . The documents are organised in pairs where the texts address the same topic for different audiences, so that the delivered information and the phrasing are not identical. 
More importantly, the order in which the information is delivered is not the same, which means that the document structure cannot be used for assuming where to look for parallel sentences. For our experiments, we took 39 randomly selected document pairs from that corpus and manually annotated them for two types of sentence pairs:", "cite_spans": [ { "start": 139, "end": 164, "text": "(Grabar and Cardon, 2018)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Data collection and pre-processing", "sec_num": "2." }, { "text": "\u2022 Equivalence: the sentences have the same meaning, but they are not identical;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data collection and pre-processing", "sec_num": "2." }, { "text": "\u2022 Inclusion: the meaning of one sentence is included in the other one, where additional information can also be found. This retains information about sentence splitting or merging and about information deletion or addition.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data collection and pre-processing", "sec_num": "2." }, { "text": "The documents are pre-processed for POS-tagging and syntactic analysis into constituents (Kitaev and Klein, 2018) . In the manually annotated set, only sentences that have a verb are kept. This yields 266 sentence pairs: 136 equivalent pairs, and 130 inclusion pairs (56 in one direction, 74 in the other one). For the automatic processing, we produced all possible combinations of sentences within each of the 39 document pairs, and ended up with 1,164,407 sentence pairs. Thus, given that, out of more than one million possible pairs, only 266 sentence pairs are considered useful for the parallel corpus creation, we observe a high degree of imbalance: a little less than 4,400:1. 
Our purpose is to reduce this imbalance in order to facilitate the search for parallel sentences and to improve the overall quality of the results.", "cite_spans": [ { "start": 98, "end": 122, "text": "(Kitaev and Klein, 2018)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Data collection and pre-processing", "sec_num": "2." }, { "text": "In order to address that extremely high degree of imbalance, we propose to investigate three methods using formal and syntactic indicators:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": "3." }, { "text": "\u2022 The first method is based on the number of tokens in sentences: each candidate sentence must contain at least five tokens. This restricts the candidates to sentences that are grammatically complete and convey some meaning. We set that value to five because it is the length of the shortest sentence in the manually annotated set;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": "3." }, { "text": "\u2022 The second method prevents the production of pairs of identical sentences;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": "3." }, { "text": "\u2022 The third method relies on syntactic information. We base our work on a method that uses constituency parsing for measuring similarity between sentences in a monolingual setting (Duran et al., 2014) . In the original work, the authors detect similar words in sentences and assign a similarity score that is computed by looking at similar labels of nodes that contain similar words. The process is described in Figure 1 . It is difficult to adapt that method exactly as it is described in the paper. The main reason is that it relies heavily on a table that establishes which grammatical categories of constituents are similar to one another. That table was made for English and there is no indication as to how it was built. 
Nonetheless, we make the assumption that adopting a similar approach could help in the process of weeding out undesired pairs for building a parallel corpus. Hence, instead of calculating a similarity score, we just choose between keeping the sentence pair as a candidate for a classifier or rejecting it. For a given pair, we produce a syntactic tree for each of the two sentences. Then, if both sentences contain a verb, we compare all the leaves (i.e. words) of the trees, except the ones that are part of the stop words list. The list contains 83 items that are grammatical words, such as determiners or prepositions. If we find two identical words, we look at their parent nodes' labels. If those are identical, we keep the sentence pair in the candidates list. That process is illustrated in Algorithm 1 below. We also apply a variant of this approach: instead of stopping if the parent nodes' labels are not identical, we go up a level to perform the same comparison, and up another level if the previous comparison was not successful. As soon as one comparison succeeds, we keep the sentence pair in the candidates list. This other approach is illustrated in Algorithm 2. That exploration up to the third parent of the leaves is what is chosen in the method that inspired this work; we implemented it to learn how the depth of exploration influences our filtering.", "cite_spans": [ { "start": 176, "end": 196, "text": "(Duran et al., 2014)", "ref_id": "BIBREF3" } ], "ref_spans": [ { "start": 408, "end": 416, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Method", "sec_num": "3." }, { "text": "To parse the sentences in order to obtain their syntactic tree with constituents, we use the Berkeley Neural Parser and the language model that is provided with it for French, with the benepar Python library (Kitaev and Klein, 2018) . 
Then, we use NLTK's Tree library (Bird et al., 2009) for tree manipulation and exploration.", "cite_spans": [ { "start": 208, "end": 232, "text": "(Kitaev and Klein, 2018)", "ref_id": "BIBREF5" }, { "start": 271, "end": 290, "text": "(Bird et al., 2009)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": "3." }, { "text": "Data: A pair of syntactic trees (T 1 and T 2 ), a list of stop words (SW) Result: Boolean Boolean \u2190 False; if one verb is found in both sentences then Figure 1 : The similarity method described in (Duran et al., 2014) ", "cite_spans": [ { "start": 197, "end": 217, "text": "(Duran et al., 2014)", "ref_id": "BIBREF3" } ], "ref_spans": [ { "start": 151, "end": 159, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Method", "sec_num": "3." }, { "text": "foreach leaf in T 1 (L 1 ) not found in SW do foreach leaf in T 2 (L 2 ) not found in SW do if L 1 is identical to L 2 then if L 1 '", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": "3." }, { "text": "We evaluate the results obtained in three different ways:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "4." }, { "text": "\u2022 we compare the number of initial sentence pairs to the number of remaining sentence pairs after the filtering,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "4." 
}, { "text": "\u2022 we check whether the removed pairs are manually annotated (positive) pairs. Table 1 : Pairs remaining after the various filtering methods. Columns: Unfiltered, FI, Syntax Depth 1, Syntax Depth 3. Total: 1,164,407 | 409,530 | 16,879 | 21,428. Equivalent: 136 | 136 | 94 | 94. Inclusion: 130 | 130 | 94 | 100.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "4." }, { "text": "\u2022 we give the remaining data to a random forest classifier algorithm, as done in a previous work (Cardon and Grabar, 2019), and evaluate recall and precision of the output.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Remaining Pairs", "sec_num": null }, { "text": "Algorithm 2: Filtering method comparing parent node labels up to the third level above the leaves.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Remaining Pairs", "sec_num": null }, { "text": "The overall goal is to remove as many negative examples as possible, while preserving the positive examples.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Remaining Pairs", "sec_num": null }, { "text": "We first look at how the volume of data is reduced by the filtering operations. The first column in Table 1 shows the number of raw sentence pairs, the second column indicates the number of pairs after using the formal indicators (FI), and the third and fourth columns show the number of pairs remaining when using the syntactic filter, looking at the first syntactic parent node and up to the third parent node, respectively. 
The formal indicators are applied before the syntactic filters. The syntactic filters are used independently of one another.", "cite_spans": [], "ref_spans": [ { "start": 108, "end": 115, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Results and Discussion", "sec_num": "5." }, { "text": "We can see that the simple formal indicators reduce the total number of sentence pairs by 65% (from 1,164,407 to 409,530 sentence pairs). These two indicators were defined based on observation of our data. They are very straightforward and we expected that no positive example (equivalent and inclusion pairs) would be lost in the process. This hypothesis is indeed verified: all the good candidates for parallel pairs are kept at this step.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results and Discussion", "sec_num": "5." }, { "text": "Starting from the 409,530 pairs obtained after this first filter, we can see that both syntactic filters lead to a huge reduction of the volume of remaining sentence pairs:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results and Discussion", "sec_num": "5." }, { "text": "\u2022 when using depth 1, 16,879 pairs remain (\u223c96% reduction),", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results and Discussion", "sec_num": "5." }, { "text": "\u2022 when using depth 3, 21,428 pairs remain (\u223c95% reduction).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results and Discussion", "sec_num": "5." }, { "text": "The downside is that a substantial number of positive examples is also lost in the process:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results and Discussion", "sec_num": "5." }, { "text": "\u2022 42 out of 136 (\u223c30%) for equivalent pairs with both depths,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results and Discussion", "sec_num": "5." 
}, { "text": "\u2022 36 out of 130 (\u223c27%) for inclusion pairs with depth 1, 32 out of 130 (\u223c24%) for inclusion pairs with depth 3.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results and Discussion", "sec_num": "5." }, { "text": "The over 95% reduction with the syntax filter on data that were already greatly reduced complies with our initial goal. Yet, we lose several good candidates for parallel sentences. Hence, we look at the positive examples that were rejected by the syntactic filter in order to understand why this is the case and how we can address this issue. For instance, consider the following sentence pair:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results and Discussion", "sec_num": "5." }, { "text": "\u2022 Dans le cas o\u00f9 le patient devrait arr\u00eater le traitement, il est recommand\u00e9 de r\u00e9duire progressivement la posologie. (In case the patient should stop the treatment, it is recommended to decrease the dose progressively.)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results and Discussion", "sec_num": "5." }, { "text": "\u2022 L'arr\u00eat du traitement doit se faire de mani\u00e8re progressive. (The cessation of treatment must be done progressively.)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results and Discussion", "sec_num": "5." }, { "text": "Table 2 : Precision, Recall and F1 scores on the different sets of sentence pairs with classification. Columns: Unfiltered (P R F1), FI (P R F1), Syntax Depth 1 (P R F1), Syntax Depth 3 (P R F1). Equivalent Neg.: 1.00 1.00 1.00 | 1.00 1.00 1.00 | 1.00 1.00 1.00 | 1.00 1.00 1.00. Equivalent Pos.: 0.79 0.43 0.55 | 0.82 0.32 0.46 | 0.75 0.39 0.51 | 0.84 0.40 0.54. Inclusion Neg.: 1.00 1.00 1.00 | 1.00 1.00 1.00 | 1.00 1.00 1.00 | 1.00 1.00 1.00. Inclusion Pos.: 0.71 0.09 0.17 | 0.50 0.16 0.24 | 0.71 0.15 0.24 | 0.56 0.15 0.24.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results and Discussion", "sec_num": "5." }, { "text": "The reason why this kind of sentence pair is rejected is that the labels of parent nodes for identical words (such as traitement (treatment) in this example) differ in the trees produced by the syntactic parser. Indeed, in the first sentence, le traitement (the treatment) is labelled as an NP-OBJ, while it is labelled as an NP in the second sentence. The error is caused by the fact that le traitement from the second sentence (in du traitement, which is correctly analyzed as de le traitement) is an NP in a PP that depends on the noun arr\u00eat. The parser that we use sometimes adds information about the function of a phrase; this is the case in the first sentence here where le traitement is the object of the verb arr\u00eater. This kind of example suggests grouping together similar node labels, such as NP and NP-OBJ. It would also be interesting to see whether some nodes are consistently similar in the parallel pairs, and hopefully find that those consistencies do not appear in pairs that should not be retained in a parallel corpus. Let's analyze another typical example:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results and Discussion", "sec_num": "5." }, { "text": "\u2022 La prudence est recommand\u00e9e chez les sujets atteints d'ulc\u00e8res gastroduod\u00e9naux. (Vigilance is recommended in subjects suffering from gastroduodenal ulcers.)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results and Discussion", "sec_num": "5." }, { "text": "\u2022 Ce m\u00e9dicament doit \u00eatre utilis\u00e9 avec prudence en cas d'ulc\u00e8re de l'estomac ou du duod\u00e9num. 
(This medication must be used with vigilance in case of ulcers of the stomach or duodenum.)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results and Discussion", "sec_num": "5." }, { "text": "There is only one pair of identical words here: prudence (vigilance). This word is labelled as an NP in the first sentence and as a PP in the second sentence. The presence of ulc\u00e8re (ulcer) in both sentences is not detected: the filter is currently looking for strictly identical words, while in these two sentences, ulc\u00e8re (ulcer) occurs in its plural form in the first sentence and in its singular form in the second sentence. Hence, the filter must be more permissive in order to detect such occurrences. One solution is to use a lemmatizer; another is to propose a more sophisticated word comparison function. This is a task where word embeddings could also be useful. We intend to test this possibility in future work. Table 2 shows the results of classification with the different sets of sentence pairs. For each experiment, the data were divided into two thirds for training and one third for testing. The results are reported by class (negative and positive) and positive class type (either equivalence or inclusion). The negative class has a perfect score in every metric because of the high degree of imbalance: the false negatives are not numerous enough to have an influence on the score. We can see that the syntactic method with a depth of exploration of three levels has a positive influence on precision, compared to unfiltered data, while recall is negatively impacted. We believe that being deprived of one third of such a small set of positive examples has a strong negative impact on performance. We should be able to improve recall if we prevent the positive examples from being filtered out, as we mentioned in the error analysis above. The results for inclusion show that this type of sentence pair is hard to recognize automatically. 
There is some improvement with filtered data, but the scores are low, especially recall. What we draw from those results is that the different types of sentence pairs should be handled differently. It seems that we cannot expect to extract inclusion pairs in the same way as we extract equivalent pairs.", "cite_spans": [], "ref_spans": [ { "start": 741, "end": 748, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Results and Discussion", "sec_num": "5." }, { "text": "In this work, we proposed to address the problem of imbalance in the process of extracting parallel sentences from comparable corpora. We worked on a French comparable corpus made for biomedical text simplification. We showed that we could drastically reduce the number of negative examples (>98%) with simple heuristics and a syntactic comparison of sentence pairs, at the cost of losing some positive examples. Analyzing the errors, we showed that there were consistencies in what was wrongly left out, which can be addressed with improvements to the method, such as a better word comparison function and more careful work on syntactic node label similarity. Even with those issues, we reduce the imbalance and improve precision on a classification task for equivalent sentences, thus reducing the manual work needed to check the output, which was the main objective. We also showed that inclusion pairs are much harder to process and that another method should be used for extracting that type of pair.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6." }, { "text": "2 https://france.cochrane.org/revues-cochrane 3 https://fr.wikipedia.org/ 4 https://fr.vikidia.org/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We would like to thank the reviewers for their comments. 
This work was funded by the French National Agency for Research (ANR) as part of the CLEAR project (Communication, Literacy, Education, Accessibility, Readability), ANR-17-CE19-0016-01.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": "7." } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Natural Language Processing with Python", "authors": [ { "first": "S", "middle": [], "last": "Bird", "suffix": "" }, { "first": "E", "middle": [], "last": "Klein", "suffix": "" }, { "first": "E", "middle": [], "last": "Loper", "suffix": "" } ], "year": 2009, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bird, S., Klein, E., and Loper, E. (2009). Natural Language Processing with Python. O'Reilly Media, Inc., 1st edition.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Parallel sentence retrieval from comparable corpora for biomedical text simplification", "authors": [ { "first": "R", "middle": [], "last": "Cardon", "suffix": "" }, { "first": "N", "middle": [], "last": "Grabar", "suffix": "" } ], "year": 2019, "venue": "Proceedings of Recent Advances in Natural Language Processing", "volume": "", "issue": "", "pages": "168--177", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cardon, R. and Grabar, N. (2019). Parallel sentence retrieval from comparable corpora for biomedical text simplification. 
In Proceedings of Recent Advances in Natural Language Processing, pages 168-177, Varna, Bulgaria, September.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Extracting lay paraphrases of specialized expressions from monolingual comparable medical corpora", "authors": [ { "first": "L", "middle": [], "last": "Del\u00e9ger", "suffix": "" }, { "first": "P", "middle": [], "last": "Zweigenbaum", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the 2nd Workshop on Building and Using Comparable Corpora: from Parallel to Non-parallel Corpora (BUCC)", "volume": "", "issue": "", "pages": "2--10", "other_ids": {}, "num": null, "urls": [], "raw_text": "Del\u00e9ger, L. and Zweigenbaum, P. (2009). Extracting lay paraphrases of specialized expressions from monolingual comparable medical corpora. In Proceedings of the 2nd Workshop on Building and Using Comparable Corpora: from Parallel to Non-parallel Corpora (BUCC), pages 2-10, Singapore, August. Association for Computational Linguistics.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Similarity of sentences through comparison of syntactic trees with pairs of similar words", "authors": [ { "first": "K", "middle": [], "last": "Duran", "suffix": "" }, { "first": "J", "middle": [], "last": "Rodriguez", "suffix": "" }, { "first": "M", "middle": [], "last": "Bravo", "suffix": "" } ], "year": 2014, "venue": "11th International Conference on Electrical Engineering, Computing Science and Automatic Control (CCE)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Duran, K., Rodriguez, J., and Bravo, M. (2014). Similarity of sentences through comparison of syntactic trees with pairs of similar words. 
In 11th International Conference on Electrical Engineering, Computing Science and Automatic Control (CCE), pages 1-6, Campeche, September.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Shakespearizing modern language using copy-enriched sequence to sequence models", "authors": [ { "first": "N", "middle": [], "last": "Grabar", "suffix": "" }, { "first": "R", "middle": [], "last": "Cardon", "suffix": "" }, { "first": "H", "middle": [], "last": "Jhamtani", "suffix": "" }, { "first": "V", "middle": [], "last": "Gangal", "suffix": "" }, { "first": "E", "middle": [], "last": "Hovy", "suffix": "" }, { "first": "E", "middle": [], "last": "Nyberg", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the Workshop on Stylistic Variation", "volume": "", "issue": "", "pages": "10--19", "other_ids": {}, "num": null, "urls": [], "raw_text": "Grabar, N. and Cardon, R. (2018). CLEAR -Simple Corpus for Medical French. In Workshop on Automatic Text Adaption (ATA), pages 1-11, Tilburg, Netherlands. Jhamtani, H., Gangal, V., Hovy, E., and Nyberg, E. (2017). Shakespearizing modern language using copy-enriched sequence to sequence models. In Proceedings of the Workshop on Stylistic Variation, pages 10-19, Copenhagen, Denmark, September. 
Association for Computational Linguistics.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Extracting parallel sentences from comparable corpora using document level alignment", "authors": [ { "first": "N", "middle": [], "last": "Kitaev", "suffix": "" }, { "first": "D", "middle": [], "last": "Klein", "suffix": "" }, { "first": "J", "middle": [ "R" ], "last": "Smith", "suffix": "" }, { "first": "C", "middle": [], "last": "Quirk", "suffix": "" }, { "first": "K", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2010, "venue": "Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, HLT '10", "volume": "1", "issue": "", "pages": "403--411", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kitaev, N. and Klein, D. (2018). Constituency parsing with a self-attentive encoder. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Melbourne, Australia, July. Association for Computational Linguistics. Smith, J. R., Quirk, C., and Toutanova, K. (2010). Extracting parallel sentences from comparable corpora using document level alignment. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, HLT '10, pages 403-411, USA. 
Association for Computational Linguistics.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Problems in current text simplification research: New data can help", "authors": [ { "first": "W", "middle": [], "last": "Xu", "suffix": "" }, { "first": "C", "middle": [], "last": "Callison-Burch", "suffix": "" }, { "first": "C", "middle": [], "last": "Napoles", "suffix": "" } ], "year": 2015, "venue": "Transactions of the Association for Computational Linguistics", "volume": "3", "issue": "", "pages": "283--297", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xu, W., Callison-Burch, C., and Napoles, C. (2015). Problems in current text simplification research: New data can help. Transactions of the Association for Computational Linguistics, 3:283-297.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "zNLP: Identifying parallel sentences in Chinese-English comparable corpora", "authors": [ { "first": "Z", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "P", "middle": [], "last": "Zweigenbaum", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 10th Workshop on Building and Using Comparable Corpora", "volume": "", "issue": "", "pages": "51--55", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhang, Z. and Zweigenbaum, P. (2017). zNLP: Identifying parallel sentences in Chinese-English comparable corpora. In Proceedings of the 10th Workshop on Building and Using Comparable Corpora, pages 51-55, Vancouver, Canada, August. 
Association for Computational Linguistics.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Overview of the second BUCC shared task: Spotting parallel sentences in comparable corpora", "authors": [ { "first": "P", "middle": [], "last": "Zweigenbaum", "suffix": "" }, { "first": "S", "middle": [], "last": "Sharoff", "suffix": "" }, { "first": "R", "middle": [], "last": "Rapp", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 10th Workshop on Building and Using Comparable Corpora", "volume": "", "issue": "", "pages": "60--67", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zweigenbaum, P., Sharoff, S., and Rapp, R. (2017). Overview of the second BUCC shared task: Spotting parallel sentences in comparable corpora. In Proceedings of the 10th Workshop on Building and Using Comparable Corpora, pages 60-67, Vancouver, Canada, August. Association for Computational Linguistics.", "links": null } }, "ref_entries": { "TABREF0": { "html": null, "num": null, "content": "
Data: A pair of syntactic trees (T 1 and T 2 ), a list of stop words (SW)
Result: Boolean
Boolean \u2190 False;
if one verb is found in both sentences then
    foreach leaf in T 1 (L 1 ) not found in SW do
        foreach leaf in T 2 (L 2 ) not found in SW do
            if L 1 is identical to L 2 then
                if L 1 's parent node's label is identical to L 2 's parent node's label then
                    Boolean \u2190 True;
                end
            end
        end
    end
end
return Boolean;
Algorithm 1: Filtering method only looking at the immediate parent nodes of the leaves
", "type_str": "table", "text": "Filtering method only looking at the immediate parent nodes of the leaves" } } } }